Fixes found when reviewing docs and testing against distro.

git-svn-id: https://svn.apache.org/repos/asf/incubator/knox/trunk@1461761 13f79535-47bb-0310-9956-ffa450edef68
diff --git a/pom.xml b/pom.xml
index 57273f8..60b9832 100644
--- a/pom.xml
+++ b/pom.xml
@@ -23,16 +23,18 @@
     <modelVersion>4.0.0</modelVersion>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>gateway-site</artifactId>
-    <version>0.0.0-SNAPSHOT</version>
+    <version>0.2.0-SNAPSHOT</version>
 
     <name>Apache Knox Gateway</name>
     <description>Knox is a gateway for Hadoop clusters.</description>
     <url>http://incubator.apache.org/knox</url>
 
     <properties>
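+        <!-- Velocity treats $ and # specially in the .md.vm pages, so these
+             properties let templates emit a literal $ (SDS) and literal
+             Markdown heading markers (HHH, HHHH, HHHHH) without triggering
+             Velocity parsing. -->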
+        <SDS>\$</SDS>
         <HHH>###</HHH>
         <HHHH>####</HHHH>
         <HHHHH>#####</HHHHH>
+        <gateway-version>0.2.0-SNAPSHOT</gateway-version>
     </properties>
 
     <licenses>
diff --git a/src/site/markdown/client.md.vm b/src/site/markdown/client.md.vm
index 2ee6f3e..4d4b1fe 100644
--- a/src/site/markdown/client.md.vm
+++ b/src/site/markdown/client.md.vm
@@ -64,14 +64,10 @@
 The shell can either be used interactively or to execute a script file.
 To simplify use, the distribution contains an embedded version of the Groovy shell.
 
-The shell can be run interactively.
+The shell can be run interactively.  Use the `exit` command to exit.
 
     java -jar bin/shell.jar
 
-The shell can also be used to execute a script by passing a single filename argument.
-
-    java -jar bin/shell.jar sample/SmokeTestJob.groovy
-
 When running interactively it may be helpful to reduce some of the output generated by the shell console.
 Use the following command in the interactive shell to reduce that output.
 This only needs to be done once as these preferences are persisted.
@@ -82,6 +78,10 @@
 Also, when running interactively, use the `exit` command to terminate the shell.
 Using `^C` to exit can sometimes leave the parent shell in a problematic state.
 
+The shell can also be used to execute a script by passing a single filename argument.
+
+    java -jar bin/shell.jar samples/ExamplePutFile.groovy
+
 
 Examples
 --------
@@ -95,7 +95,12 @@
 The `knox:000>` in the example above is the prompt from the embedded Groovy console.
 If your output doesn't look like this, you may need to set the verbosity and show-last-result preferences as described above in the Usage section.
 
-Without using some other tool to browse HDFS it is impossible to tell that that this command did anything.
+If you receive an error `HTTP/1.1 403 Forbidden`, it may be because the file already exists.
+Delete it with the following command and then try again.
+
+    knox:000> Hdfs.rm(hadoop).file("/tmp/example/README").now()
+
+Without using some other tool to browse HDFS it is hard to tell that this command did anything.
 Execute this to get a bit more feedback.
 
     knox:000> println "Status=" + Hdfs.put( hadoop ).file( "README" ).to( "/tmp/example/README2" ).now().statusCode
@@ -133,7 +138,7 @@
 
 All of the commands above could have been combined into a script file and executed as a single line.
 
-    java -jar bin/shell.jar samples/Example.groovy
+    java -jar bin/shell.jar samples/ExamplePutFile.groovy
 
 This script file is available in the distribution, but for convenience this is the content.
 
@@ -153,6 +158,7 @@
     json = (new JsonSlurper()).parseText( text )
     println json.FileStatuses.FileStatus.pathSuffix
     hadoop.shutdown()
+    exit
 
 Notice the Hdfs.rm command.  This is included simply to ensure that the script can be rerun.
 Without this, an error would result the second time it is run.
@@ -417,8 +423,8 @@
 Fortunately there is a very simple way to add classes and JARs to the shell classpath.
 The first time the shell is executed it will create a configuration file in the same directory as the JAR with the same base name and a `.cfg` extension.
 
-    bin/shell-${gateway-version}.jar
-    bin/shell-${gateway-version}.cfg
+    bin/shell.jar
+    bin/shell.cfg
 
 That file contains both the main class for the shell as well as a definition of the classpath.
 Currently that file will by default contain the following.
@@ -434,7 +440,7 @@
 The easiest way to add these to the shell is to compile them directly into the `ext` directory.
 *Note: This command depends upon having the Groovy compiler installed and available on the execution path.*
 
-    groovyc -d ext -cp bin/shell-${gateway-version}.jar samples/SampleService.groovy samples/SampleSimpleCommand.groovy samples/SampleComplexCommand.groovy
+    groovyc -d ext -cp bin/shell.jar samples/SampleService.groovy samples/SampleSimpleCommand.groovy samples/SampleComplexCommand.groovy
 
 These source files are available in the samples directory of the distribution but these are included here for convenience.
 
@@ -549,19 +555,22 @@
 Groovy
 ------
 The shell included in the distribution is basically an unmodified packaging of the Groovy shell.
+The distribution does, however, provide a wrapper that makes it very easy to set up the class path for the shell.
+In fact, the JARs required to execute the DSL are included on the class path by default.
 Therefore these commands are functionally equivalent if you have Groovy [installed][15].
+See below for a description of what is required for `{JARs required by the DSL from lib and dep}`.
 
-    java -jar bin/shell.jar sample/SmokeTestJob.groovy
-    groovy -cp bin/shell.jar sample/SmokeTestJob.groovy
+    java -jar bin/shell.jar samples/ExamplePutFile.groovy
+    groovy -classpath {JARs required by the DSL from lib and dep} samples/ExamplePutFile.groovy
 
 The interactive shell isn't exactly equivalent.
-However the only difference is that the shell-${gateway-version}.jar automatically executes some additional imports that are useful for the KnoxShell DSL.
+However, the only difference is that the shell.jar automatically executes some additional imports that are useful for the KnoxShell DSL.
 So these two sets of commands should be functionally equivalent.
 ***However there is currently a class loading issue that prevents the groovysh command from working properly.***
 
     java -jar bin/shell.jar
 
-    groovysh -cp bin/shell-${gateway-version}.jar
+    groovysh -classpath {JARs required by the DSL from lib and dep}
     import org.apache.hadoop.gateway.shell.Hadoop
     import org.apache.hadoop.gateway.shell.hdfs.Hdfs
     import org.apache.hadoop.gateway.shell.job.Job
@@ -570,7 +579,7 @@
 
 Alternatively, you can use the Groovy Console, which does not appear to have the same class loading issue.
 
-    groovyConsole -cp bin/shell.jar
+    groovyConsole -classpath {JARs required by the DSL from lib and dep}
 
     import org.apache.hadoop.gateway.shell.Hadoop
     import org.apache.hadoop.gateway.shell.hdfs.Hdfs
@@ -578,6 +587,24 @@
     import org.apache.hadoop.gateway.shell.workflow.Workflow
     import java.util.concurrent.TimeUnit
 
+The list of JARs currently required by the DSL is
+
+    lib/gateway-shell-${gateway-version}.jar
+    dep/httpclient-4.2.3.jar
+    dep/httpcore-4.2.2.jar
+    dep/commons-lang3-3.1.jar
+    dep/commons-codec-1.7.jar
+
+So on Linux/MacOS you would need this command
+
+    groovy -cp lib/gateway-shell-0.2.0-SNAPSHOT.jar:dep/httpclient-4.2.3.jar:dep/httpcore-4.2.2.jar:dep/commons-lang3-3.1.jar:dep/commons-codec-1.7.jar samples/ExamplePutFile.groovy
+
+and on Windows you would need this command
+
+    groovy -cp lib/gateway-shell-0.2.0-SNAPSHOT.jar;dep/httpclient-4.2.3.jar;dep/httpcore-4.2.2.jar;dep/commons-lang3-3.1.jar;dep/commons-codec-1.7.jar samples/ExamplePutFile.groovy
+
+The exact list of required JARs is likely to change from release to release, so it is recommended that you use the `bin/shell.jar` wrapper.
+
 In addition, because the DSL can be used via standard Groovy, the Groovy integrations in many popular IDEs (e.g. IntelliJ, Eclipse) can also be used.
 This makes it particularly nice to develop and execute scripts to interact with Hadoop.
 The code-completion feature in particular provides immense value.
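+
+For example, since the DSL is just Groovy, a small script such as the sketch below can be developed and run directly from an IDE.  This is only a sketch: it assumes the `Hadoop.login` factory used by the distribution samples, and it reuses the gateway URL and credentials from the cURL examples; adjust these for your deployment.
+
+    import org.apache.hadoop.gateway.shell.Hadoop
+    import org.apache.hadoop.gateway.shell.hdfs.Hdfs
+
+    // Open a session through the gateway (URL and credentials as in the examples).
+    hadoop = Hadoop.login( "https://localhost:8443/gateway/sample", "mapred", "mapred-password" )
+    // Upload the local README and print the HTTP status code of the response.
+    println "Status=" + Hdfs.put( hadoop ).file( "README" ).to( "/tmp/example/README" ).now().statusCode
+    hadoop.shutdown()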
diff --git a/src/site/markdown/examples.md.vm b/src/site/markdown/examples.md.vm
index 8408c25..6a34062 100644
--- a/src/site/markdown/examples.md.vm
+++ b/src/site/markdown/examples.md.vm
@@ -80,10 +80,6 @@
 
     java -jar bin/shell.jar samples/ExampleSubmitJob.groovy
 
-You can load the KnoxShell DSL script into the standard Groovy Console.
-
-    groovyConsole -cp bin/shell-${gateway-version}.jar samples/ExampleSubmitJob.groovy
-
 You can manually type the KnoxShell DSL script into the "embedded" Groovy
 interpreter provided with the distribution.
 
@@ -131,12 +127,14 @@
     while( !done && count++ < 60 ) {
       sleep( 1000 )
       json = Job.queryStatus(hadoop).jobId(jobId).now().string
-      done = JsonPath.read( json, "\$.status.jobComplete" )
+      done = JsonPath.read( json, "${SDS}.status.jobComplete" )
     }
     println "Done " + done
 
     println "Shutdown " + hadoop.shutdown( 10, SECONDS )
 
+    exit
+
 ------------------------------------------------------------------------------
 Example #2: WebHDFS & Oozie via KnoxShell DSL
 ------------------------------------------------------------------------------
@@ -146,10 +144,8 @@
 this depending upon your preference.
 
 You can use the "embedded" Groovy interpreter provided with the distribution.
-    java -jar bin/shell.jar samples/ExampleSubmitWorkflow.groovy
 
-You can load the KnoxShell DSL script into the standard Groovy Console.
-    groovyConsole -cp bin/shell-${gateway-version}.jar samples/ExampleSubmitWorkflow.groovy
+    java -jar bin/shell.jar samples/ExampleSubmitWorkflow.groovy
 
 You can manually type the KnoxShell DSL script into the "embedded" Groovy
 interpreter provided with the distribution.
@@ -230,21 +226,25 @@
     while( status != "SUCCEEDED" && count++ < 60 ) {
       sleep( 1000 )
       json = Workflow.status(hadoop).jobId( jobId ).now().string
-      status = JsonPath.read( json, "\$.status" )
+      status = JsonPath.read( json, "${SDS}.status" )
     }
     println "Job status " + status;
 
     println "Shutdown " + hadoop.shutdown( 10, SECONDS )
 
+    exit
+
 ------------------------------------------------------------------------------
 Example #3: WebHDFS & Templeton/WebHCat via cURL
 ------------------------------------------------------------------------------
 The example below illustrates the sequence of curl commands that could be used
 to run a "word count" map reduce job.  It utilizes the hadoop-examples.jar
-from a Hadoop install for running a simple word count job.  Take care to
+from a Hadoop install for running a simple word count job.  A copy of that
+jar has been included in the samples directory for convenience.  Take care to
 follow the instructions below for steps 4/5 and 6/7 where the Location header
 returned by the call to the NameNode is copied for use with the call to the
-DataNode that follows it.
+DataNode that follows it.  These replacement values are identified with { }
+markup.
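+For example, the response to step 3 below will include a Location header of the
+form shown here (a hypothetical value; the actual URL will vary).  That full
+value is what replaces the `{Value Location header from command above}` text in
+step 4.
+
+    HTTP/1.1 307 Temporary Redirect
+    Location: https://localhost:8443/gateway/sample/datanode/api/v1/...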
 
     # 0. Optionally cleanup the test directory in case a previous example was run without cleaning up.
     curl -i -k -u mapred:mapred-password -X DELETE \
@@ -263,7 +263,7 @@
       'https://localhost:8443/gateway/sample/namenode/api/v1/tmp/test/hadoop-examples.jar?op=CREATE'
 
-    # 4. Upload hadoop-examples.jar to /tmp/test.  Use a hadoop-examples.jar from a Hadoop install.
+    # 4. Upload hadoop-examples.jar to /tmp/test.  Use the hadoop-examples.jar from the samples directory.
-    curl -i -k -u mapred:mapred-password -T hadoop-examples.jar -X PUT '{Value Location header from command above}'
+    curl -i -k -u mapred:mapred-password -T samples/hadoop-examples.jar -X PUT '{Value Location header from command above}'
 
     # 5. Create the inode for a sample file README in /tmp/test/input
     curl -i -k -u mapred:mapred-password -X PUT \
@@ -301,8 +301,9 @@
 The example below illustrates the sequence of curl commands that could be used
 to run a "word count" map reduce job via an Oozie workflow.  It utilizes the
 hadoop-examples.jar from a Hadoop install for running a simple word count job.
+A copy of that jar has been included in the samples directory for convenience.
 Take care to follow the instructions below where replacement values are
-required.  These replacement values are identivied with { } markup.
+required.  These replacement values are identified with { } markup.
 
     # 0. Optionally cleanup the test directory in case a previous example was run without cleaning up.
     curl -i -k -u mapred:mapred-password -X DELETE \
@@ -321,7 +322,7 @@
       'https://localhost:8443/gateway/sample/namenode/api/v1/tmp/test/lib/hadoop-examples.jar?op=CREATE'
 
-    # 4. Upload hadoop-examples.jar to /tmp/test/lib.  Use a hadoop-examples.jar from a Hadoop install.
+    # 4. Upload hadoop-examples.jar to /tmp/test/lib.  Use the hadoop-examples.jar from the samples directory.
-    curl -i -k -u mapred:mapred-password -T hadoop-examples.jar -X PUT \
+    curl -i -k -u mapred:mapred-password -T samples/hadoop-examples.jar -X PUT \
       '{Value Location header from command above}'
 
     # 5. Create the inode for a sample input file readme.txt in /tmp/test/input.
@@ -346,7 +347,7 @@
     # 8. Submit the job via Oozie
     # Take note of the Job ID in the JSON response as this will be used in the next step.
     curl -i -k -u mapred:mapred-password -T workflow-configuration.xml -H Content-Type:application/xml -X POST \
-      'https://localhost:8443/gateway/oozie/sample/api/v1/jobs?action=start'
+      'https://localhost:8443/gateway/sample/oozie/api/v1/jobs?action=start'
 
     # 9. Query the job status via Oozie.
     curl -i -k -u mapred:mapred-password -X GET \
diff --git a/src/site/markdown/getting-started.md.vm b/src/site/markdown/getting-started.md.vm
index d00e838..e74e8d7 100644
--- a/src/site/markdown/getting-started.md.vm
+++ b/src/site/markdown/getting-started.md.vm
@@ -57,17 +57,17 @@
 ------------------------------------------------------------------------------
 ${HHH} 1. Extract the distribution ZIP
 
-Download and extract the gateway-${gateway-version}.zip file into the
+Download and extract the knox-${gateway-version}.zip file into the
 installation directory that will contain your `{GATEWAY_HOME}`.
 
-    jar xf gateway-${gateway-version}.zip
+    jar xf knox-${gateway-version}.zip
 
-This will create a directory `gateway-${gateway-version}` in your current
+This will create a directory `knox-${gateway-version}` in your current
 directory.
 
 ${HHH} 2. Enter the `{GATEWAY_HOME}` directory
 
-    cd gateway-${gateway-version}
+    cd knox-${gateway-version}
 
 The fully qualified name of this directory will be referenced as
 `{GATEWAY_HOME}` throughout the remainder of this document.