ACCUMULO-4528 tool.sh is now 'accumulo-util hadoop-jar'

* Added note in README about copying jar to accumulo/lib/ext
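* Updated all docs to use the new command. Only the launcher changes; the
  jar, main class, and arguments are the same as before, e.g. the RowHash
  invocation from docs/rowhash.md:

      # before
      $ tool.sh target/accumulo-examples.jar org.apache.accumulo.examples.mapreduce.RowHash -u user -p passwd -i instance -t input --column cf:cq
      # after
      $ accumulo-util hadoop-jar target/accumulo-examples.jar org.apache.accumulo.examples.mapreduce.RowHash -u user -p passwd -i instance -t input --column cf:cq
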
diff --git a/README.md b/README.md
index 63a02d8..c4450eb 100644
--- a/README.md
+++ b/README.md
@@ -41,7 +41,12 @@
         cp examples.conf.template examples.conf
         nano examples.conf
 
-5. Each Accumulo example has its own documentation and instructions for running the example which
+5. Several examples use custom iterators that are executed inside Accumulo tablet servers.
+   Make them available by copying the accumulo-examples jar to Accumulo's `lib/ext` directory.
+
+        cp target/accumulo-examples-X.Y.Z.jar /path/to/accumulo/lib/ext/
+
+6. Each Accumulo example has its own documentation and instructions for running the example which
    are linked to below.
 
 When running the examples, remember the tips below:
@@ -50,9 +55,8 @@
   The `runex` command is a simple wrapper around the Maven Exec plugin.
 * Commands intended to be run in bash are prefixed by '$' and should be run from the root of this
   repository.
-* Several examples use the `accumulo` and `tool.sh` commands which are expected to be on your 
-  `PATH`. These commands are found in the `bin/` and `contrib/` directories of your Accumulo
-  installation.
+* Several examples use the `accumulo` and `accumulo-util` commands which are expected to be on your 
+  `PATH`. These commands are found in the `bin/` directory of your Accumulo installation.
 * Commands intended to be run in the Accumulo shell are prefixed by '>'.
 
 ## Available Examples
diff --git a/docs/bulkIngest.md b/docs/bulkIngest.md
index 614bde4..22bf07c 100644
--- a/docs/bulkIngest.md
+++ b/docs/bulkIngest.md
@@ -27,7 +27,7 @@
     $ ARGS="-i instance -z zookeepers -u username -p password"
     $ accumulo $PKG.SetupTable $ARGS -t test_bulk row_00000333 row_00000666
     $ accumulo $PKG.GenerateTestData --start-row 0 --count 1000 --output bulk/test_1.txt
-    $ tool.sh target/accumulo-examples.jar $PKG.BulkIngestExample $ARGS -t test_bulk --inputDir bulk --workDir tmp/bulkWork
+    $ accumulo-util hadoop-jar target/accumulo-examples.jar $PKG.BulkIngestExample $ARGS -t test_bulk --inputDir bulk --workDir tmp/bulkWork
     $ accumulo $PKG.VerifyIngest $ARGS -t test_bulk --start-row 0 --count 1000
 
 For a high-level discussion of bulk ingest, see the docs dir.
diff --git a/docs/filedata.md b/docs/filedata.md
index 84311d2..aacd86e 100644
--- a/docs/filedata.md
+++ b/docs/filedata.md
@@ -40,7 +40,7 @@
 
 Run the CharacterHistogram MapReduce to add some information about the file.
 
-    $ tool.sh target/accumulo-examples.jar org.apache.accumulo.examples.filedata.CharacterHistogram -c ./examples.conf -t dataTable --auths exampleVis --vis exampleVis
+    $ accumulo-util hadoop-jar target/accumulo-examples.jar org.apache.accumulo.examples.filedata.CharacterHistogram -c ./examples.conf -t dataTable --auths exampleVis --vis exampleVis
 
 Scan again to see the histogram stored in the 'info' column family.
 
diff --git a/docs/mapred.md b/docs/mapred.md
index 2768a5d..4a453e1 100644
--- a/docs/mapred.md
+++ b/docs/mapred.md
@@ -50,7 +50,7 @@
 
 After creating the table, run the WordCount MapReduce job.
 
-    $ tool.sh target/accumulo-examples.jar org.apache.accumulo.examples.mapreduce.WordCount -i instance -z zookeepers  --input /user/username/wc -t wordCount -u username -p password
+    $ accumulo-util hadoop-jar target/accumulo-examples.jar org.apache.accumulo.examples.mapreduce.WordCount -i instance -z zookeepers --input /user/username/wc -t wordCount -u username -p password
 
     11/02/07 18:20:11 INFO input.FileInputFormat: Total input paths to process : 1
     11/02/07 18:20:12 INFO mapred.JobClient: Running job: job_201102071740_0003
@@ -134,14 +134,14 @@
 the basic WordCount example by calling the same command as explained above
 except replacing the password with the token file (rather than -p, use -tf).
 
-  $ tool.sh target/accumulo-examples.jar org.apache.accumulo.examples.mapreduce.WordCount -i instance -z zookeepers  --input /user/username/wc -t wordCount -u username -tf tokenfile
+    $ accumulo-util hadoop-jar target/accumulo-examples.jar org.apache.accumulo.examples.mapreduce.WordCount -i instance -z zookeepers --input /user/username/wc -t wordCount -u username -tf tokenfile
 
 In the above examples, username was 'root' and tokenfile was 'root.pw'.
 
 However, if you don't want to use the Opts class to parse arguments,
 the TokenFileWordCount is an example of using the token file manually.
 
-  $ tool.sh target/accumulo-examples.jar org.apache.accumulo.examples.mapreduce.TokenFileWordCount instance zookeepers username tokenfile /user/username/wc wordCount
+    $ accumulo-util hadoop-jar target/accumulo-examples.jar org.apache.accumulo.examples.mapreduce.TokenFileWordCount instance zookeepers username tokenfile /user/username/wc wordCount
 
 The results should be the same as the WordCount example except that the
 authentication token was not stored in the configuration. It was instead
diff --git a/docs/regex.md b/docs/regex.md
index a53ec25..f5b0e2e 100644
--- a/docs/regex.md
+++ b/docs/regex.md
@@ -41,7 +41,7 @@
 
 The following will search for any rows in the input table that start with "dog":
 
-    $ tool.sh target/accumulo-examples.jar org.apache.accumulo.examples.mapreduce.RegexExample -u user -p passwd -i instance -t input --rowRegex 'dog.*' --output /tmp/output
+    $ accumulo-util hadoop-jar target/accumulo-examples.jar org.apache.accumulo.examples.mapreduce.RegexExample -u user -p passwd -i instance -t input --rowRegex 'dog.*' --output /tmp/output
 
     $ hadoop fs -ls /tmp/output
     Found 3 items
diff --git a/docs/rowhash.md b/docs/rowhash.md
index 57a5383..893bf1d 100644
--- a/docs/rowhash.md
+++ b/docs/rowhash.md
@@ -38,7 +38,7 @@
 The RowHash class will insert a hash for each row in the database if it contains a
 specified column. Here's how you run the MapReduce job:
 
-    $ tool.sh target/accumulo-examples.jar org.apache.accumulo.examples.mapreduce.RowHash -u user -p passwd -i instance -t input --column cf:cq
+    $ accumulo-util hadoop-jar target/accumulo-examples.jar org.apache.accumulo.examples.mapreduce.RowHash -u user -p passwd -i instance -t input --column cf:cq
 
 Now we can scan the table and see the hashes:
 
diff --git a/docs/tabletofile.md b/docs/tabletofile.md
index 20ba930..6e83a6f 100644
--- a/docs/tabletofile.md
+++ b/docs/tabletofile.md
@@ -40,7 +40,7 @@
 
 The following will extract the rows containing the column "cf:cq":
 
-    $ tool.sh target/accumulo-examples.jar org.apache.accumulo.examples.mapreduce.TableToFile -c ./examples.conf -t input --columns cf:cq --output /tmp/output
+    $ accumulo-util hadoop-jar target/accumulo-examples.jar org.apache.accumulo.examples.mapreduce.TableToFile -c ./examples.conf -t input --columns cf:cq --output /tmp/output
 
     $ hadoop fs -ls /tmp/output
     -rw-r--r--   1 username supergroup          0 2013-01-10 14:44 /tmp/output/_SUCCESS
diff --git a/docs/terasort.md b/docs/terasort.md
index 65a4170..36038f6 100644
--- a/docs/terasort.md
+++ b/docs/terasort.md
@@ -22,7 +22,7 @@
 
 Run this example with arguments describing the amount of data:
 
-    $ tool.sh target/accumulo-examples.jar org.apache.accumulo.examples.mapreduce.TeraSortIngest \
+    $ accumulo-util hadoop-jar target/accumulo-examples.jar org.apache.accumulo.examples.mapreduce.TeraSortIngest \
     -c ./examples.conf \
     --count 10 \
     --minKeySize 10 \