SLIDER-4 initial registry designs -> slider site. Along with review of all the other pages, moving things around, etc.

git-svn-id: https://svn.apache.org/repos/asf/incubator/slider/trunk@1592818 13f79535-47bb-0310-9956-ffa450edef68
diff --git a/src/site/html/blobstore-index.html b/src/site/html/blobstore-index.html
deleted file mode 100644
index b2b092f..0000000
--- a/src/site/html/blobstore-index.html
+++ /dev/null
@@ -1,54 +0,0 @@
-<html>
-<head>
-  <meta charset="UTF-8"/>
-  <meta name="viewport" content="width=device-width, initial-scale=1.0"/>
-  <meta name="Date-Revision-yyyymmdd" content="20131008"/>
-  <meta http-equiv="Content-Language" content="en"/>
-  <title>Slider: HBase on YARN</title>
-</head>
-<!---
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-  
-  <!--
-  Blobstore index.html
-  
-  This file is for placement at the bottom of a blobstore serving
-  up the Slider artifacts. 
-  
-  -->
-<body>  
-<h1>Slider Binary Releases </h1>
-
-<p></p>
-
-This is a repository of the released Slider artifacts
-
-<p>
-Slider is a Hadoop YARN application that can dynamically deploy
-Apache HBase and Apache Accumulo Column-table databases to a Hadoop 2.2+
-cluster.
-</p>
-
-For more details, please consult 
-the <a href="https://github.com/hortonworks/slider">source repository</a>
-
-<p></p>
-
-<h2>Releases</h2>
-</body>
-
-</html>
diff --git a/src/site/markdown/architecture.md b/src/site/markdown/architecture/architecture.md
similarity index 97%
rename from src/site/markdown/architecture.md
rename to src/site/markdown/architecture/architecture.md
index 30a3816..a08baac 100644
--- a/src/site/markdown/architecture.md
+++ b/src/site/markdown/architecture/architecture.md
@@ -21,7 +21,10 @@
 
 Slider is a YARN application to deploy non-YARN-enabled applications in a YARN cluster
 
-Slider consists of a YARN application master, the "Slider AM", and a client application which communicates with YARN and the Slider AM via remote procedure calls and/or REST requests. The client application offers command line access, as well as low-level API access for test purposes
+Slider consists of a YARN application master, the "Slider AM", and a client
+application which communicates with YARN and the Slider AM via remote procedure
+calls and/or REST requests. The client application offers command line access,
+as well as low-level API access for test purposes.
 
 The deployed application must be a program that can be run across a pool of
 YARN-managed servers, dynamically locating its peers. It is not Slider's
diff --git a/src/site/markdown/architecture/index.md b/src/site/markdown/architecture/index.md
new file mode 100644
index 0000000..4333dcc
--- /dev/null
+++ b/src/site/markdown/architecture/index.md
@@ -0,0 +1,24 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+  
+   http://www.apache.org/licenses/LICENSE-2.0
+  
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+  
+# Architecture
+
+* [Overview](architecture.html)
+* [Application Needs](application_needs.html)
+* [Specification](../specification/index.html)
+* [Service Registry](../registry/index.html)
+* [Role history](rolehistory.html) 
+
+
+ 
\ No newline at end of file
diff --git a/src/site/markdown/rolehistory.md b/src/site/markdown/architecture/rolehistory.md
similarity index 100%
rename from src/site/markdown/rolehistory.md
rename to src/site/markdown/architecture/rolehistory.md
diff --git a/src/site/markdown/debugging.md b/src/site/markdown/debugging.md
index df6de58..897db53 100644
--- a/src/site/markdown/debugging.md
+++ b/src/site/markdown/debugging.md
@@ -29,22 +29,37 @@
   
 ### Using a web browser
 
-The log files are accessible via the Yarn Resource Manager UI.  From the main page (e.g. http://YARN_RESOURCE_MGR_HOST:8088), click on the link for the application instance of interest, and then click on the "logs" link.  This will present you with a page with links to the slider-err.txt file and the slider-out.txt file.  The former is the file you should select.  Once the log page is presented, click on the link at the top of the page ("Click here for full log") to view the entire file.
+The log files are accessible via the YARN Resource Manager UI. From the main page (e.g. `http://${YARN_RESOURCE_MGR_HOST}:8088`),
+click on the link for the application instance of interest, and then click on the `logs` link.
+This will present you with a page with links to the `slider-err.txt` file and the `slider-out.txt` file.
+The former is the file you should select -it is where the application logs go.
+Once the log page is presented, click on the link at the top of the page ("Click here for full log") to view the entire file.
+
+If the file `slider-out.txt` is empty, then examine `slider-err.txt` -an empty
+output log usually means that the Java process failed to start -this should be
+logged in the error file.
+     
 
 ### Accessing the host machine
 
-If access to other log files is required, there is the option of logging in to the host machine on which the application component is running.  The root directory for all Yarn associated files is the value of "yarn.nodemanager.log-dirs" in yarn-site.xml - e.g. /hadoop/yarn/log.  Below the root directory you will find an application and container sub-directory (e.g. /application_1398372047522_0009/container_1398372047522_0009_01_000001/).  Below the container directory you will find any log files associated with the processes running in the given Yarn container.
+If access to other log files is required, there is the option of logging in
+ to the host machine on which the application component is running
+  -provided you have the correct permissions.
+  
+The root directory for all YARN associated files is the value of `yarn.nodemanager.log-dirs` in `yarn-site.xml` - e.g. `/hadoop/yarn/log`.
+Below the root directory you will find an application and container sub-directory (e.g. `/application_1398372047522_0009/container_1398372047522_0009_01_000001/`).
+Below the container directory you will find any log files associated with the processes running in the given YARN container.
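+
+A quick way to reach these files is to log in to the node and change into the
+container's log directory. The log root, application ID and container ID below
+are purely illustrative -substitute the values from your own cluster:
+
+    cd /hadoop/yarn/log/application_1398372047522_0009/container_1398372047522_0009_01_000001
+    ls -R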
 
 Within a container log the following files are useful while debugging the application.
 
 **agent.log** 
   
-E.g. application_1398098639743_0024/container_1398098639743_0024_01_000003/infra/log/agent.log
+E.g. `application_1398098639743_0024/container_1398098639743_0024_01_000003/infra/log/agent.log`
 This file contains the logs from the Slider-Agent.
 
 **application component log**
 
-E.g. ./log/application_1398098639743_0024/container_1398098639743_0024_01_000003/app/log/hbase-yarn-regionserver-c6403.ambari.apache.org.log
+E.g. `./log/application_1398098639743_0024/container_1398098639743_0024_01_000003/app/log/hbase-yarn-regionserver-c6403.ambari.apache.org.log`
 
 The location of the application log is defined by the application. "${AGENT_LOG_ROOT}" is a symbol available to the app developers to use as a root folder for logging.
 
@@ -52,12 +67,20 @@
 
 E.g. ./log/application_1398098639743_0024/container_1398098639743_0024_01_000003/app/command-log/
 
-The command logs produced by the slider-agent are available in the "command-log" folder relative to "${AGENT_LOG_ROOT}"/app
+The command logs produced by the slider-agent are available in the `command-log` folder relative to `${AGENT_LOG_ROOT}/app`
 
+Note that the *fish* shell is convenient for debugging, as `cat log/**/slider-out.txt` will find the relevant output file
+irrespective of the path leading to it.
 
 ## IDE-based remote debugging of the Application Master
 
-For situtations in which the logging does not yield enough information to debug an issue, the user has the option of specifying JVM command line options for the Application Master that enable attaching to the running process with a debugger (e.g. the remote debugging facilities in Eclipse or Intellij IDEA).  In order to specify the JVM options, edit the application configuration file (the file specified as the --template argument value on the command line for cluster creation) and specify the "jvm.opts" property for the "slider-appmaster" component:
+For situations in which the logging does not yield enough information to debug an issue,
+the user has the option of specifying JVM command line options for the
+Application Master that enable attaching to the running process with a debugger
+(e.g. the remote debugging facilities in Eclipse or Intellij IDEA). 
+In order to specify the JVM options, edit the application configuration file
+(the file specified as the `--template` argument value on the command line for cluster creation)
+and specify the `jvm.opts` property for the `slider-appmaster` component:
 
 	`"components": {
     	"slider-appmaster": {
@@ -66,4 +89,4 @@
     	},
  		...`
  		
-You may specify "suspend=y" in the line above if you wish to have the application master process wait for the debugger to attach before beginning its processing.
+You may specify `suspend=y` in the line above if you wish to have the application master process wait for the debugger to attach before beginning its processing.
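+
+As an illustration only -the debug port (5005 here) and the exact JDWP agent
+string are assumptions to adapt to your own environment- a `jvm.opts` value
+that enables remote debugging might look like:
+
+    "components": {
+        "slider-appmaster": {
+            "jvm.opts": "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"
+        }
+    }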
diff --git a/src/site/markdown/agent_test_setup.md b/src/site/markdown/developing/agent_test_setup.md
similarity index 100%
rename from src/site/markdown/agent_test_setup.md
rename to src/site/markdown/developing/agent_test_setup.md
diff --git a/src/site/markdown/building.md b/src/site/markdown/developing/building.md
similarity index 100%
rename from src/site/markdown/building.md
rename to src/site/markdown/developing/building.md
diff --git a/src/site/markdown/developing/index.md b/src/site/markdown/developing/index.md
new file mode 100644
index 0000000..34ca20a
--- /dev/null
+++ b/src/site/markdown/developing/index.md
@@ -0,0 +1,30 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+  
+   http://www.apache.org/licenses/LICENSE-2.0
+  
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+  
+# Developing Slider
+
+Slider is an open source project -anyone is free to contribute, and we
+strongly encourage people to do so.
+
+Here are documents covering how to go about building, testing and releasing
+Slider:
+
+* [Building](building.html)
+* [Debugging](debugging.html)
+* [Testing](testing.html)
+* [Agent test setup](agent_test_setup.html)
+* [Releasing](releasing.html)
+
+
+ 
\ No newline at end of file
diff --git a/src/site/markdown/releasing.md b/src/site/markdown/developing/releasing.md
similarity index 97%
rename from src/site/markdown/releasing.md
rename to src/site/markdown/developing/releasing.md
index 99d6aff..8c4ca19 100644
--- a/src/site/markdown/releasing.md
+++ b/src/site/markdown/developing/releasing.md
@@ -20,6 +20,9 @@
 
 Here is our release process.
 
+
+## IMPORTANT: THIS IS OUT OF DATE WITH THE MOVE TO THE ASF ## 
+
 ### Before you begin
 
 Check out the latest version of the develop branch,
@@ -43,7 +46,7 @@
 **Step #1:** Create a JIRA for the release, estimate 3h
 (so you don't try to skip the tests)
 
-    export SLIDER_RELEASE_JIRA=BUG-13927
+    export SLIDER_RELEASE_JIRA=SLIDER-13927
     
 **Step #2:** Check everything in. Git flow won't let you progress without this.
 
diff --git a/src/site/markdown/testing.md b/src/site/markdown/developing/testing.md
similarity index 100%
rename from src/site/markdown/testing.md
rename to src/site/markdown/developing/testing.md
diff --git a/src/site/markdown/getting_started.md b/src/site/markdown/getting_started.md
index 168d1cc..561cb8e 100644
--- a/src/site/markdown/getting_started.md
+++ b/src/site/markdown/getting_started.md
@@ -54,7 +54,9 @@
 
 ## <a name="setup"></a>Setup the Cluster
 
-After [installing your cluster](http://docs.hortonworks.com/) (using Ambari or other means) with the Services listed above, modify your YARN configuration to allow for multiple containers on a single host. In yarn-site.xml make the following modifications:
+After setting up your Hadoop cluster (using Ambari or other means) with the 
+services listed above, modify your YARN configuration to allow for multiple
+containers on a single host. In `yarn-site.xml` make the following modifications:
 
 <table>
   <tr>
@@ -82,60 +84,62 @@
 [http://public-repo-1.hortonworks.com/slider/slider-0.22.0-all.tar.gz](http://public-repo-1.hortonworks.com/slider/slider-0.22.0-all.tar.gz)
 ## <a name="build"></a>Build Slider
 
-* From the top level directory, execute "mvn clean install -DskipTests"
+* From the top level directory, execute `mvn clean install -DskipTests`
 * Use the generated compressed tar file in slider-assembly/target directory (e.g. slider-0.22.0-all.tar.gz) for the subsequent steps
 
 ## <a name="install"></a>Install Slider
 
 Follow the following steps to expand/install Slider:
 
-* mkdir *slider-install-dir*;
+    mkdir ${slider-install-dir}
 
-* cd *slider-install-dir*
+    cd ${slider-install-dir}
 
-* Login as the ‘yarn’ user (assuming this is a host associated with the installed cluster).  E.g., su yarn
+Log in as the "yarn" user (assuming this is a host associated with the installed cluster), e.g. `su yarn`.
 *This assumes that all apps are being run as ‘yarn’ user. Any other user can be used to run the apps - ensure that file permission is granted as required.*
 
-* Expand the tar file:  tar -xvf slider-0.22.0-all.tar.gz
+Expand the tar file:  `tar -xvf slider-0.22.0-all.tar.gz`
 
-* Browse to the Slider directory: cd slider-0.22.0/bin
+Browse to the Slider directory: `cd slider-0.22.0/bin`
 
-* export PATH=$PATH:/usr/jdk64/jdk1.7.0_45/bin (or the path to the JDK bin directory)
+      export PATH=$PATH:/usr/jdk64/jdk1.7.0_45/bin 
+    
+(or the path to the JDK bin directory)
 
-* Modify Slider configuration file *slider-install-dir*/slider-0.22.0/conf/slider-client.xml to add the following properties:
+Modify the Slider configuration file `${slider-install-dir}/slider-0.22.0/conf/slider-client.xml` to add the following properties:
 
-```
-		<property>
-  			<name>yarn.application.classpath</name>
-  			<value>/etc/hadoop/conf,/usr/lib/hadoop/*,/usr/lib/hadoop/lib/*,/usr/lib/hadoop-hdfs/*,/usr/lib/hadoop-hdfs/lib/*,/usr/lib/hadoop-yarn/*,/usr/lib/hadoop-yarn/lib/*,/usr/lib/hadoop-mapreduce/*,/usr/lib/hadoop-mapreduce/lib/*</value>
-		</property>
-		<property>
-  			<name>slider.zookeeper.quorum</name>
-  			<value>yourZooKeeperHost:port</value>
-		</property>
-```
+      <property>
+          <name>yarn.application.classpath</name>
+          <value>/etc/hadoop/conf,/usr/lib/hadoop/*,/usr/lib/hadoop/lib/*,/usr/lib/hadoop-hdfs/*,/usr/lib/hadoop-hdfs/lib/*,/usr/lib/hadoop-yarn/*,/usr/lib/hadoop-yarn/lib/*,/usr/lib/hadoop-mapreduce/*,/usr/lib/hadoop-mapreduce/lib/*</value>
+      </property>
+      
+      <property>
+          <name>slider.zookeeper.quorum</name>
+          <value>yourZooKeeperHost:port</value>
+      </property>
+
 
 In addition, specify the scheduler and HDFS addresses as follows:
 
-```
-		<property>
-  			<name>yarn.resourcemanager.address</name>
-  			<value>yourResourceManagerHost:8050</value>
-		</property>
-		<property>
-  			<name>yarn.resourcemanager.scheduler.address</name>
-  			<value>yourResourceManagerHost:8030</value>
-		</property>
-		<property>
-  			<name>fs.defaultFS</name>
-  			<value>hdfs://yourNameNodeHost:8020</value>
-		</property>
-```
+    <property>
+        <name>yarn.resourcemanager.address</name>
+        <value>yourResourceManagerHost:8050</value>
+    </property>
+    <property>
+        <name>yarn.resourcemanager.scheduler.address</name>
+        <value>yourResourceManagerHost:8030</value>
+    </property>
+    <property>
+        <name>fs.defaultFS</name>
+        <value>hdfs://yourNameNodeHost:8020</value>
+    </property>
 
 
-* Execute: *slider-install-dir*/slider-0.22.0/bin/slider version
+Execute:
+ 
+    ${slider-install-dir}/slider-0.22.0/bin/slider version
 
-* Ensure there are no errors and you can see "Compiled against Hadoop 2.4.0"
+Ensure there are no errors and you can see "Compiled against Hadoop 2.4.0".
 
 ## <a name="deploy"></a>Deploy Slider Resources
 
@@ -145,78 +149,76 @@
 
 Perform the following steps to create the Slider root folder with the appropriate permissions:
 
-* su hdfs
-
-* hdfs dfs -mkdir /slider
-
-* hdfs dfs -chown yarn:hdfs /slider
-
-* hdfs dfs -mkdir /user/yarn
-
-* hdfs dfs -chown yarn:hdfs /user/yarn
+    su hdfs
+    
+    hdfs dfs -mkdir /slider
+    
+    hdfs dfs -chown yarn:hdfs /slider
+    
+    hdfs dfs -mkdir /user/yarn
+    
+    hdfs dfs -chown yarn:hdfs /user/yarn
 
 ### Load Slider Agent
 
-* su yarn
-
-* hdfs dfs -mkdir /slider/agent
-
-* hdfs dfs -mkdir /slider/agent/conf
-
-* hdfs dfs -copyFromLocal *slider-install-dir*/slider-0.22.0/agent/slider-agent-0.22.0.tar.gz /slider/agent
+    su yarn
+    
+    hdfs dfs -mkdir /slider/agent
+    
+    hdfs dfs -mkdir /slider/agent/conf
+    
+    hdfs dfs -copyFromLocal ${slider-install-dir}/slider-0.22.0/agent/slider-agent-0.22.0.tar.gz /slider/agent
 
 ### Create and deploy Slider Agent configuration
 
 Create an agent config file (agent.ini) based on the sample available at:
 
-*slider-install-dir*/slider-0.22.0/agent/conf/agent.ini
+    ${slider-install-dir}/slider-0.22.0/agent/conf/agent.ini
 
 The sample agent.ini file can be used as is (see below). Some of the parameters of interest are:
 
-* log_level = INFO or DEBUG, to control the verbosity of log
+* `log_level` = INFO or DEBUG, to control the verbosity of log
+* `app_log_dir` = the relative location of the application log file
+* `log_dir` = the relative location of the agent and command log file
 
-* app_log_dir = the relative location of the application log file
+    [server]
+    hostname=localhost
+    port=8440
+    secured_port=8441
+    check_path=/ws/v1/slider/agents/
+    register_path=/ws/v1/slider/agents/{name}/register
+    heartbeat_path=/ws/v1/slider/agents/{name}/heartbeat
 
-* log_dir = the relative location of the agent and command log file
+    [agent]
+    app_pkg_dir=app/definition
+    app_install_dir=app/install
+    app_run_dir=app/run
+    app_task_dir=app/command-log
+    app_log_dir=app/log
+    app_tmp_dir=app/tmp
+    log_dir=infra/log
+    run_dir=infra/run
+    version_file=infra/version
+    log_level=INFO
 
-		[server]
-		hostname=localhost
-		port=8440
-		secured_port=8441
-		check_path=/ws/v1/slider/agents/
-		register_path=/ws/v1/slider/agents/{name}/register
-		heartbeat_path=/ws/v1/slider/agents/{name}/heartbeat
+    [python]
 
-		[agent]
-		app_pkg_dir=app/definition
-		app_install_dir=app/install
-		app_run_dir=app/run
-		app_task_dir=app/command-log
-		app_log_dir=app/log
-		app_tmp_dir=app/tmp
-		log_dir=infra/log
-		run_dir=infra/run
-		version_file=infra/version
-		log_level=INFO
+    [command]
+    max_retries=2
+    sleep_between_retries=1
 
-		[python]
+    [security]
 
-		[command]
-		max_retries=2
-		sleep_between_retries=1
-
-		[security]
-
-		[heartbeat]
-		state_interval=6
-		log_lines_count=300
+    [heartbeat]
+    state_interval=6
+    log_lines_count=300
 
 
 Once created, deploy the agent.ini file to HDFS:
 
-* su yarn
-
-* hdfs dfs -copyFromLocal agent.ini /slider/agent/conf
+    su yarn
+    
+    hdfs dfs -copyFromLocal agent.ini /slider/agent/conf
 
 ## <a name="downsample"></a>Download Sample Application Packages
 
@@ -262,23 +264,26 @@
 
 ### <a name="load"></a>Load Sample Application Package
 
-* hdfs dfs -copyFromLocal *sample-application-package* /slider
+    hdfs dfs -copyFromLocal ${sample-application-package} /slider
 
 If necessary, create HDFS folders needed by the application. For example, HBase requires the following HDFS-based setup:
 
-* su hdfs
-
-* hdfs dfs -mkdir /apps
-
-* hdfs dfs -mkdir /apps/hbase
-
-* hdfs dfs -chown yarn:hdfs /apps/hbase
+    su hdfs
+    
+    hdfs dfs -mkdir /apps
+    
+    hdfs dfs -mkdir /apps/hbase
+    
+    hdfs dfs -chown yarn:hdfs /apps/hbase
 
 ### <a name="create"></a>Create Application Specifications
 
-Configuring a Slider application consists of two parts: the *[Resource Specification](#resspec), and the *[Application Configuration](#appconfig). Below are guidelines for creating these files.
+Configuring a Slider application consists of two parts: the [Resource Specification](#resspec)
+and the [Application Configuration](#appconfig). Below are guidelines for creating these files.
 
-*Note: There are sample Resource Specifications (**resources.json**) and Application Configuration (**appConfig.json**) files in the *[Appendix](#appendixa)* and also in the root directory of the Sample Applications packages (e.g. /**hbase-v096/resources.json** and /**hbase-v096/appConfig.json**).*
+*Note: There are sample Resource Specifications (**resources.json**) and Application Configuration
+(**appConfig.json**) files in the [Appendix](#appendixa) and also in the root directory of the
+Sample Applications packages (e.g. **hbase-v096/resources.json** and **hbase-v096/appConfig.json**).*
 
 #### <a name="resspec"></a>Resource Specification
 
@@ -286,7 +291,7 @@
 
 As Slider creates each instance of a component in its own YARN container, it also needs to know what to ask YARN for in terms of **memory** and **CPU** for those containers. 
 
-All this information goes into the **Resources Specification** file ("Resource Spec") named resources.json. The Resource Spec tells Slider how many instances of each component in the application (such as an HBase RegionServer) to deploy and the parameters for YARN.
+All this information goes into the **Resources Specification** file ("Resource Spec") named `resources.json`. The Resource Spec tells Slider how many instances of each component in the application (such as an HBase RegionServer) to deploy and the parameters for YARN.
 
 Sample Resource Spec files are available in the Appendix:
 
@@ -294,7 +299,7 @@
 
 * [Appendix B: HBase Sample Resource Specification](#heading=h.l7z5mvhvxmzv)
 
-Store the Resource Spec file on your local disk (e.g. /tmp/resources.json).
+Store the Resource Spec file on your local disk (e.g. `/tmp/resources.json`).
 
 #### <a name="appconfig"></a>Application Configuration
 
@@ -310,23 +315,23 @@
 
 Store the appConfig.json file on your local disc and a copy in HDFS:
 
-* su yarn
-
-* hdfs dfs -mkdir /slider/appconf
-
-* hdfs dfs -copyFromLocal appConf.json /slider/appconf
+    su yarn
+    
+    hdfs dfs -mkdir /slider/appconf
+    
+    hdfs dfs -copyFromLocal appConf.json /slider/appconf
 
 ### <a name="start"></a>Start the Application
 
-Once the steps above are completed, the application can be started by leveraging the **Slider Command Line Interface (CLI)**.
+Once the steps above are completed, the application can be started through the **Slider Command Line Interface (CLI)**.
 
-* Change directory to the "bin" directory under the slider installation
+Change directory to the `bin` directory under the slider installation:
 
-cd *slider-install-dir*/slider-0.22.0/bin
+    cd ${slider-install-dir}/slider-0.22.0/bin
 
-* Execute the following command:
+Execute the following command:
 
-./slider create cl1 --manager yourResourceManagerHost:8050 --image hdfs://yourNameNodeHost:8020/slider/agent/slider-agent-0.22.0.tar.gz --template appConfig.json --resources resources.json
+    ./slider create cl1 --manager yourResourceManagerHost:8050 --image hdfs://yourNameNodeHost:8020/slider/agent/slider-agent-0.22.0.tar.gz --template appConfig.json --resources resources.json
 
 ### <a name="verify"></a>Verify the Application
 
@@ -344,25 +349,25 @@
 
 #### Frozen:
 
-./slider freeze cl1 --manager yourResourceManagerHost:8050  --filesystem hdfs://yourNameNodeHost:8020
+    ./slider freeze cl1 --manager yourResourceManagerHost:8050  --filesystem hdfs://yourNameNodeHost:8020
 
 #### Thawed: 
 
-./slider thaw cl1 --manager yourResourceManagerHost:8050  --filesystem hdfs://yourNameNodeHost:8020
+    ./slider thaw cl1 --manager yourResourceManagerHost:8050  --filesystem hdfs://yourNameNodeHost:8020
 
 #### Destroyed: 
 
-./slider destroy cl1 --manager yourResourceManagerHost:8050  --filesystem hdfs://yourNameNodeHost:8020
+    ./slider destroy cl1 --manager yourResourceManagerHost:8050  --filesystem hdfs://yourNameNodeHost:8020
 
 #### Flexed:
 
-./slider flex cl1 --component worker 5 --manager yourResourceManagerHost:8050  --filesystem hdfs://yourNameNodeHost:8020
+    ./slider flex cl1 --component worker 5 --manager yourResourceManagerHost:8050  --filesystem hdfs://yourNameNodeHost:8020
 
 # <a name="appendixa"></a>Appendix A: Apache Storm Sample Application Specifications
 
 ## Storm Resource Specification Sample
 
-	{
+    {
       "schema" : "http://example.org/specification/v2.0.0",
       "metadata" : {
       },
@@ -393,36 +398,36 @@
             "component.instances" : "1"
         }
       }
-	}
+    }
 
 
 ## Storm Application Configuration Sample
 
-	{
-    	"schema" : "http://example.org/specification/v2.0.0",
-    	"metadata" : {
-    	},
-    	"global" : {
-        	"A site property for type XYZ with name AA": "its value",
-        	"site.XYZ.AA": "Value",
-        	"site.hbase-site.hbase.regionserver.port": "0",
-        	"site.core-site.fs.defaultFS": "${NN_URI}",
-        	"Using a well known keyword": "Such as NN_HOST for name node host",
-        	"site.hdfs-site.dfs.namenode.http-address": "${NN_HOST}:50070",
-        	"a global property used by app scripts": "not affiliated with any site-xml",
-        	"site.global.app_user": "yarn",
-        	"Another example of available keywords": "Such as AGENT_LOG_ROOT",
-        	"site.global.app_log_dir": "${AGENT_LOG_ROOT}/app/log",
-        	"site.global.app_pid_dir": "${AGENT_WORK_ROOT}/app/run",
-    	}
-	}
+    {
+      "schema" : "http://example.org/specification/v2.0.0",
+      "metadata" : {
+      },
+      "global" : {
+          "A site property for type XYZ with name AA": "its value",
+          "site.XYZ.AA": "Value",
+          "site.hbase-site.hbase.regionserver.port": "0",
+          "site.core-site.fs.defaultFS": "${NN_URI}",
+          "Using a well known keyword": "Such as NN_HOST for name node host",
+          "site.hdfs-site.dfs.namenode.http-address": "${NN_HOST}:50070",
+          "a global property used by app scripts": "not affiliated with any site-xml",
+          "site.global.app_user": "yarn",
+          "Another example of available keywords": "Such as AGENT_LOG_ROOT",
+          "site.global.app_log_dir": "${AGENT_LOG_ROOT}/app/log",
+          "site.global.app_pid_dir": "${AGENT_WORK_ROOT}/app/run",
+      }
+    }
 
 
 # <a name="appendixb"></a>Appendix B:  Apache HBase Sample Application Specifications
 
 ## HBase Resource Specification Sample
 
-	{
+    {
       "schema" : "http://example.org/specification/v2.0.0",
       "metadata" : {
       },
@@ -445,12 +450,12 @@
             "role.script" : "scripts/hbase_regionserver.py"
         }
       }
-	}
+    }
 
 
 ## HBase Application Configuration Sample
 
-	{
+    {
       "schema" : "http://example.org/specification/v2.0.0",
       "metadata" : {
       },
@@ -505,6 +510,6 @@
         "site.hdfs-site.dfs.namenode.https-address": "${NN_HOST}:50470",
         "site.hdfs-site.dfs.namenode.http-address": "${NN_HOST}:50070"
       }
-	}
+    }
 
 
diff --git a/src/site/markdown/hoya_cluster_descriptions-old.md b/src/site/markdown/hoya_cluster_descriptions-old.md
deleted file mode 100644
index 388a213..0000000
--- a/src/site/markdown/hoya_cluster_descriptions-old.md
+++ /dev/null
@@ -1,210 +0,0 @@
-<!---
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-
-# Hoya Cluster Specification
-
-A Hoya Cluster Specification is a JSON file which describes a cluster to
-Hoya: what application is to be deployed, which archive file contains the
-application, specific cluster-wide options, and options for the individual
-roles in a cluster.
-
-##  Hoya Options
-
-These are options read by the Hoya Application Master; the program deployed
-in the YARN cluster to start all the other roles.
-
-When the AM is started, all entries in the cluster-wide  are loaded as Hadoop
-configuration values.
-
-Only those related to Hadoop, the filesystem in use, YARN and hoya will be
-used; others are likely to be ignored.
-
-## Cluster Options
-
-Cluster wide options are used to configure the application itself.
-
-These are specified at the command line with the `-O key=value` syntax
-
-All options beginning with the prefix `site.` are converted into 
-site XML options for the specific application (assuming the application uses 
-a site XML configuration file)
-
-Standard keys are defined in the class `org.apache.hoya.api.OptionKeys`.
-
-####  `hoya.test`
-
-A boolean value to indicate this is a test run, not a production run. In this
-mode Hoya opts to fail fast, rather than retry container deployments when
-they fail. It is primarily used for internal tests.
-
-
-#### `hoya.container.failure.shortlife`
-
-An integer stating the time in milliseconds before which a failed container is
-considered 'short lived'.
-
-A failure of a short-lived container is treated as a sign of a problem with
-the role configuration and/or another aspect of the Hoya cluster -or
-a problem with the specific node on which the attempt to run
-the container was made.
-
-
-
-#### `hoya.container.failure.threshold`
-
-An integer stating the number of failures tolerated in a single role before
-the cluster is considered to have failed.
-
-
-
-#### `hoya.am.monitoring.enabled`
-
-A boolean flag to indicate whether application-specific health monitoring
-should take place. Until this monitoring is completed the option
-is ignored -it is added as `false` by default on clusters created.
-
-
-## Roles
-
-A Hoya application consists of the Hoya Application Master, "the AM", which
-manages the cluster, and a number of instances of the "roles" of the actual
-application.
-
-For HBase the roles are `master` and `worker`; Accumulo has more.
-
-For every role, the cluster specification can define
-1. How many instances of that role are desired.
-1. Some options with well known names for configuring the runtimes
-of the roles.
-1. Environment variables needed to help configure and run the process.
-1. Options for YARN
-
-### Standard Role Options
-
-#### Desired instance count `role.instances`
-
-#### Additional command-line arguments `role.additional.args`
-
-This argument is meant to provide a configuration-based static option
-that is provided to every instance of the given role. For example, this is
-useful in providing a binding address to Accumulo's Monitor process.
-
-Users can override the option on the hoya executable using the roleopt argument:
-
-    --roleopt monitor role.additional.args "--address 127.0.0.1"
-
-#### YARN container memory `yarn.memory`
-
-The amount of memory in MB for the YARN container hosting
-that role. Default "256".
-
-The special value `max` indicates that Hoya should request the
-maximum value that YARN allows containers to have -a value
-determined dynamically after the cluster starts.
-
-Examples:
-
-    --roleopt master yarn.memory 2048
-    --roleopt worker yarn.memory max
-
-If a YARN cluster is configured to set process memory limits via the OS,
-and the application tries to use more memory than allocated, it will fail
-with the exit code "143". 
-
-#### YARN vCores `yarn.vcores`
-
-Number of "Cores" for the container hosting
-a role. Default value: "1"
-
-The special value `max` indicates that Hoya should request the
-maximum value that YARN allows containers to have -a value
-determined dynamically after the cluster starts.
-
-As well as being able to specify the numeric values of memory and cores
-in role, via the `--roleopt` argument, you can now ask for the maximum
-allowed value by using the parameter `max`
-
-Examples:
-
-    --roleopt master yarn.vcores 2
-    --roleopt master yarn.vcores max
-
-####  Master node web port `app.infoport`
-
-The TCP socket port number to use for the master node web UI. This is translated
-into an application-specific site.xml property for both Accumulo and HBase.
-
-If set to a number other than the default, "0", then if the given port is in
-use, the role instance will not start. This will occur if YARN is already
-running a master node on that server, or if another application is using
-the same TCP port.
-
-#### JVM Heapsize `jvm.heapsize`
-
-Heapsize as a JVM option string, such as `"256M"` or `"2G"`
-
-    --roleopt worker jvm.heapsize 8G
-
-This is not correlated with the YARN memory -changes in the YARN memory allocation
-are not reflected in the JVM heapsize -and vice versa.
-
-### Environment variables
- 
- 
-All role options beginning with `env.` are automatically converted to
-environment variables which will be set for all instances of that role.
-
-    --roleopt worker env.MALLOC_ARENA 4
-
-## Accumulo-Specific Options
-
-### Accumulo cluster options
-
-Here are options specific to Accumulo clusters.
-
-####  Mandatory: Zookeeper Home: `zk.home`
-
-Location of Zookeeper on the target machine. This is needed by the 
-Accumulo startup scripts.
-
-#### Mandatory: Hadoop Home `hadoop.home`
-
-Location of Hadoop on the target machine. This is needed by the 
-Accumulo startup scripts.
-
-#### Mandatory: Accumulo database password  `accumulo.password`
-
-This is the password used to control access to the accumulo data.
-A random password (from a UUID, hence very low-entropy) is chosen when
-the cluster is created. A more rigorous password can be set on the command
-line _at the time of cluster creation_.
-
-
-## Hoya AM Role Options
-
-The Hoya Application Master has its own role, `hoya`, which can also
-be configured with role options. Currently only JVM and YARN options 
-are supported:
-
-    --roleopt hoya jvm.heapsize 256M
-    --roleopt hoya jvm.opts "-Djvm.property=true"
-    --roleopt hoya yarn.memory 512
-
-Normal memory requirements of the AM are low, except in the special case of
-starting an accumulo cluster for the first time. In this case, `bin\accumulo init`
-needs to be run: the extra memory requirements of the accumulo process
-need to be included in the hoya role's `yarn.memory` values.
diff --git a/src/site/markdown/index.md b/src/site/markdown/index.md
index abfdea3..4ec5cfd 100644
--- a/src/site/markdown/index.md
+++ b/src/site/markdown/index.md
@@ -19,13 +19,9 @@
 
 Slider is a YARN application to deploy existing distributed applications on YARN, 
 monitor them and make them larger or smaller as desired -even while 
-the cluster is running.
+the application is running.
 
-
-Slider has a plug-in *provider* architecture to support different applications,
-and currently supports Apache HBase and Apache Accumulo.
-
-Clusters can be stopped, "frozen" and restarted, "thawed" later; the distribution
+Applications can be stopped, "frozen" and restarted, "thawed" later; the distribution
 of the deployed application across the YARN cluster is persisted -enabling
 a best-effort placement close to the previous locations on a cluster thaw.
 Applications which remember the previous placement of data (such as HBase)
@@ -41,24 +37,21 @@
 
 Some of the features are:
 
-* Allows users to create on-demand Apache HBase and Apache Accumulo clusters
+* Allows users to create on-demand applications in a YARN cluster
 
-* Allow different users/applicatins to run different versions of the application.
+* Allow different users/applications to run different versions of the application.
 
-* Allow users to configure different Hbase instances differently
+* Allow users to configure different application instances differently
 
-* Stop / Suspend / Resume clusters as needed
+* Stop / Suspend / Resume application instances as needed
 
-* Expand / shrink clusters as needed
+* Expand / shrink application instances as needed
 
 The Slider tool is a Java command line application.
 
-The tool persists the information as a JSON document into the HDFS.
-It also generates the configuration files assuming the passed configuration
-directory as a base - in particular, the HDFS and ZooKeeper root directories
-for the new HBase instance has to be generated (note that the underlying
-HDFS and ZooKeeper are shared by multiple cluster instances). Once the
-cluster has been started, the cluster can be made to grow or shrink
+The tool persists the information as JSON documents in HDFS.
+
+Once the cluster has been started, the cluster can be made to grow or shrink
 using the Slider commands. The cluster can also be stopped, *frozen*
 and later resumed, *thawed*.
       
@@ -69,20 +62,19 @@
 ## Using 
 
 * [Getting Started](getting_started.html)
-* [Installing](installing.html)
 * [Man Page](manpage.html)
 * [Examples](examples.html)
-* [Client Configuration](hoya-client-configuration.html)
+* [Client Configuration](client-configuration.html)
 * [Client Exit Codes](exitcodes.html)
 * [Security](security.html)
 * [Logging](logging.html)
+* [How to define a new slider-packaged application](slider_specs/index.html)
+* [Application configuration model](configuration/index.html)
+
 
 ## Developing 
 
-* [Architecture](architecture.html)
+* [Architecture](architecture/index.html)
+* [Developing](developing/index.html)
 * [Application Needs](app_needs.html)
-* [Building](building.html)
-* [Releasing](releasing.html)
-* [Role history](rolehistory.html) 
-* [Specification](specification/index.html)
-* [Application configuration model](configuration/index.html)
+* [Service Registry](registry/index.html)
diff --git a/src/site/markdown/installing.md b/src/site/markdown/installing.md
deleted file mode 100644
index 5efca36..0000000
--- a/src/site/markdown/installing.md
+++ /dev/null
@@ -1,26 +0,0 @@
-<!---
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-
-# Installing and Running Slider
-
-
-1. Unzip/Untar the archive
-1. Add `slider/bin` to the path
-1. The logging settings are set in `conf/log4j.properties`
-1. Standard configuration options may be set as defined in
-[Slider Client Configuration] (client-configuration.html)
-
diff --git a/src/site/markdown/registry/a_YARN_service_registry.md b/src/site/markdown/registry/a_YARN_service_registry.md
new file mode 100644
index 0000000..fa58d9c
--- /dev/null
+++ b/src/site/markdown/registry/a_YARN_service_registry.md
@@ -0,0 +1,227 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+# A YARN Service Registry
+
+## April 2014
+
+# Introduction
+
+This document looks at the needs and options of a service registry.
+
+The core issue is that as the location(s) of a dynamically deployed application are unknown, the standard Hadoop and Java configuration model of some form of text files containing hostnames, ports and URLs no longer works. You cannot define up-front where a service will be.
+
+Some Hadoop applications -HBase and Accumulo -have solved this with custom ZK bindings. This works for the specific clients, but requires the HBase and Accumulo client JARs in order to be able to work with the content (or a re-implementation with knowledge of the non-standard contents of the ZK nodes).
+
+Other YARN applications will need to publish their bindings -this includes, but is not limited to, Slider-deployed applications. Again, these applications can use their own registration and binding model, which would again require custom clients to locate the registry information and parse the contents.
+
+YARN provides some minimal publishing of AM remote endpoints: a URL to what is assumed to be a Web UI (not a REST API), and an IPC port. The URL is displayed in the YARN UI -in which case it is accessed via a proxy which (currently) only supports HTTP GET operations. The YARN API call to list all applications can be used to locate a named instance of an application by (user, application-type, name), and then obtain the raw URL and IPC endpoints. This enumeration process is an O(apps) operation on the YARN RM and only provides access to those two endpoints. Even with the raw URL, REST operations have proven "troublesome", due to a web filter which redirects all direct requests to the proxy -unless they come from the same host as the proxy.
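+
+As a sketch of that enumeration (the application type `"slider"`, instance name
+`"cl1"` and user `"yarn"` below are illustrative assumptions, not fixed values),
+a client using the Hadoop 2.x `YarnClient` API has to scan every matching
+application report:
+
+    import java.util.Collections;
+    import org.apache.hadoop.yarn.api.records.ApplicationReport;
+    import org.apache.hadoop.yarn.client.api.YarnClient;
+    import org.apache.hadoop.yarn.conf.YarnConfiguration;
+
+    public class LocateAppMaster {
+      public static void main(String[] args) throws Exception {
+        YarnClient yarn = YarnClient.createYarnClient();
+        yarn.init(new YarnConfiguration());
+        yarn.start();
+        // O(apps) scan: every application of the given type is enumerated
+        for (ApplicationReport app :
+            yarn.getApplications(Collections.singleton("slider"))) {
+          if ("cl1".equals(app.getName()) && "yarn".equals(app.getUser())) {
+            System.out.println(app.getOriginalTrackingUrl());           // raw AM web URL
+            System.out.println(app.getHost() + ":" + app.getRpcPort()); // AM IPC endpoint
+          }
+        }
+        yarn.stop();
+      }
+    }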
+
+Hadoop client applications tend to retrieve all their configuration information from files in the local filesystem, hadoop-site.xml, hdfs-site.xml, hbase-site.xml, etc. This requires the configuration files to be present on all systems. Tools such as Ambari can keep the files on the servers up to date -assuming a low rate of change -but these tools do nothing for the client applications themselves. It is up to the cluster clients to (somehow) retrieve these files, and to keep their copies up to date. *This is a problem that exists with today's applications*.
+
+As an example, if a YARN client does not know the value of "yarn.application.classpath", it cannot successfully deploy any application in the YARN cluster which needs the cluster-side Hadoop and YARN JARs on its application master's classpath. This is not a theoretical problem, as some clusters have a different classpath from the default: without a correct value the Slider AM does not start. And, as it is designed to be run remotely, it cannot rely on a local installation of YARN to provide the correct values ([YARN-973](https://issues.apache.org/jira/browse/YARN-973)).
+
+# What do we need?
+
+**Discovery**: An IPC and URL discovery system for service-aware applications to use to look up a service to which they wish to talk. This is not an ORB -it's not doing redirection- but it is something that needs to be used before starting IPC or REST communications.
+
+**Configuration**: A way for clients of a service to retrieve more configuration data than simply the service endpoints. For example: everything needed to create a site.xml document.
+
+## Client-side
+
+* Allow clients of a YARN application to locate the service instance and its service ports (web, IPC, REST...) efficiently even on a large YARN cluster. 
+
+* Allow clients to retrieve configuration values which can be processed client-side into the configuration files and options which the application needs
+
+* Give clients confidence that the service with which they interact is the one they expect to interact with -not another potentially malicious service deployed by a different user. 
+
+* clients to be able to watch a service and retrieve notification of changes
+
+* cross-language support.
+
+## For all Services
+
+* Allow services to publish their binding details for the AM and for code running in the containers (which may be published by the containers)
+
+* Use entries in the registry as a way of enforcing uniqueness of the instance (app, owner, name)?
+
+* values to update when a service is restarted on a different host
+
+* values to indicate when a service is not running. This may be implicit "no entry found" or explicit "service exists but not running"
+
+* Services to be able to act as clients to other services
+
+## For Slider Services (and presumably others)
+
+* Ability to publish information about configuration documents that can be retrieved -and URLs
+
+* Ability to publish facts internal to the application (e.g. agent reporting URLs)
+
+* Ability to use service paths as a way to ensure a single instance of a named service can be deployed by a user
+
+## Management and PaaS UIs
+
+* Retrieve lists of web UI URLs of AM and of deployed components
+
+* Enumerate components and their status
+
+* retrieve dynamic assignments of IPC ports
+
+* retrieve dynamic assignments of JMX ports
+
+* retrieve any health URLs for regular probes
+
+* Listen to changes in the service mix -the arrival and departure of service instances, as well as changes in their contents.
+
+
+
+## Other Needs
+
+* Registry-configured applications. In-cluster applications should be able to subscribe to part of the registry
+to pick up changes that affect them -both for their own application configuration, and for details about
+applications on which they themselves depend.
+
+* Knox: get URLs that need to be converted into remote paths
+
+* Cloud-based deployments: work on virtual infrastructures where hostnames are unpredictable.
+
+# Open Source Registry code
+
+What can we use to implement this from ASF and ASF-compatible code? 
+
+## Zookeeper
+
+We'd need a good reason not to use this. There are still some issues:
+
+1. Limits on amount of published data?
+
+2. Load limits, especially during cluster startup, or if a 500-mapper job all wants to do a lookup.
+
+3. Security story
+
+4. Impact of other ZK load on the behaviour of the service registry -will it cause problems if overloaded -and are they recoverable?
+
+## Apache Curator
+
+Netflix's core Curator framework -now [Apache Curator](http://curator.apache.org/)- adds a lot to make working with ZK easier, including pluggable retry policies, binding tools and other things.
+
+There is also its "experimental" [service discovery framework](http://curator.apache.org/curator-x-discovery-server/index.html), which
+
+1. Allows a service to register a URL with a name and unique ID (and custom metadata). Multiple services of a given name can be registered.
+
+2. Allows a service to register >1 URL.
+
+3. Has a service client which performs lookup and can cache results.
+
+4. Has a REST API
+
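+A minimal sketch of what registration and lookup with this framework might look
+like (the ZooKeeper connect string, base path and service name here are
+illustrative assumptions, not Slider's actual values):
+
+    import org.apache.curator.framework.CuratorFramework;
+    import org.apache.curator.framework.CuratorFrameworkFactory;
+    import org.apache.curator.retry.ExponentialBackoffRetry;
+    import org.apache.curator.x.discovery.ServiceDiscovery;
+    import org.apache.curator.x.discovery.ServiceDiscoveryBuilder;
+    import org.apache.curator.x.discovery.ServiceInstance;
+
+    public class DiscoverySketch {
+      public static void main(String[] args) throws Exception {
+        CuratorFramework zk = CuratorFrameworkFactory.newClient(
+            "zkhost:2181", new ExponentialBackoffRetry(1000, 3));
+        zk.start();
+        ServiceDiscovery<Void> discovery = ServiceDiscoveryBuilder.builder(Void.class)
+            .client(zk)
+            .basePath("/services")
+            .build();
+        discovery.start();
+        // register one instance of a service under a shared name
+        ServiceInstance<Void> instance = ServiceInstance.<Void>builder()
+            .name("tester")
+            .address("localhost")
+            .port(8080)
+            .build();
+        discovery.registerService(instance);
+        // clients enumerate all instances registered under that name
+        for (ServiceInstance<Void> found : discovery.queryForInstances("tester")) {
+          System.out.println(found.getAddress() + ":" + found.getPort()
+              + " id=" + found.getId());
+        }
+        discovery.close();
+        zk.close();
+      }
+    }
+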
+Limitations
+
+* The service discovery web UI and client do not work with the version of
+Jackson (1.8.8) in Hadoop 2.4. The upgraded version in Hadoop 2.5 is compatible [HADOOP-10104](https://issues.apache.org/jira/browse/HADOOP-10104).
+
+* The per-entry configuration payload attempts to get Jackson to perform Object/JSON mapping with the classname provided as an attribute in the JSON. This destroys the ability of arbitrary applications -and of cross-language clients- to parse the published data; it is brittle and morally wrong from a data-sharing perspective.
+
+    {
+    
+      "name" : "name",
+      "id" : "service",
+      "address" : "localhost",
+      "port" : 8080,
+      "sslPort" : 443,
+      "payload" : {
+        "@class" : "org.apache.slider.core.registry.ServiceInstanceData",
+        "externalView" : {
+          "key" : "value"
+        }
+      },
+      "registrationTimeUTC" : 1397249829062,
+      "serviceType" : "DYNAMIC",
+      "uriSpec" : {
+        "parts" : [ {
+          "value" : "http:",
+          "variable" : false
+        }, {
+          "value" : ":",
+          "variable" : false
+        } ]
+      }
+    }
+
+
+
+## [Helix Service Registry](http://helix.apache.org/0.7.0-incubating-docs/recipes/service_discovery.html)
+
+This is inside Helix somewhere, used at LinkedIn in production at scale -and worth looking at. LinkedIn separates their Helix ZooKeeper quorum from their application-layer quorum, to isolate load.
+
+Notable features
+
+1. The registry is also the liveness view of the deployed application. Clients aren't watching the service registry for changes; they are watching Helix's model of the deployed application.
+1. The deployed application can pick up changes to its state the same way, allowing for live application manipulation.
+1. Tracks nodes that continually join/leave the group and drops them as unreliable.
+
+## Twill Service Registry
+
+Twill's [service registry code](http://twill.incubator.apache.org/apidocs/index.html) lets applications register a [(hostname, port)](http://twill.incubator.apache.org/apidocs/org/apache/twill/discovery/Discoverable.html) pair in the registry under a name, by which clients can look up and enumerate all services with that name.
+
+Clients can subscribe to changes in the list of services with a specific name -so picking up the arrival and departure of instances, and probe to see if a previously discovered entity is still registered.
+
+ZooKeeper-based and in-memory registry implementations are provided.
+
+One nice feature about this architecture -and Twill in general- is that its general single-method callback model means that it segues nicely into Java-8 lambda-expressions. This is something to retain in a YARN-wide service registry.
+
+Compared to Curator, it offers a proper subset of Curator's registered service [ServiceInstance](http://curator.apache.org/apidocs/org/apache/curator/x/discovery/ServiceInstance.html) -implying that you could publish and retrieve Curator-registered services via a new implementation of Twill's DiscoveryService. This would require extensions to the Curator service discovery client to allow ZK nodes to be watched for changes. This is a feature that would be useful in many use cases -such as watching service availability across a cluster, or simply blocking until a dependent service was launched.
+
+As with curator, the amount of information that can be published isn't enough for management tools to make effective use of the service registration, while for slider there's no way to publish configuration data. However a YARN registry will inevitably be a superset of the Twill client's enumerated and retrieved data -so if its registration API were sufficient to register a minimal service, supporting the YARN registry via Twill's existing API should be straightforward.
+
+## Twitter Commons Service Registration
+
+[Twitter Commons](https://github.com/twitter/commons) has a service registration library, which allows for registration of sets of servers, [publishing the hostname and port of each](http://twitter.github.io/commons/apidocs/com/twitter/common/service/registration/package-tree.html), along with a map of string properties.
+
+ZooKeeper-based, it suffices if all servers are identical and only publish a single (hostname, port) pair for callers.
+
+## AirBnB Smartstack
+
+SmartStack is [Air BnB's cloud-based service discovery system](http://nerds.airbnb.com/smartstack-service-discovery-cloud/).
+
+It has two parts, *Nerve* and *Synapse*:
+
+[**Nerve**](https://github.com/airbnb/nerve) is a ruby agent designed to monitor processes and register healthy instances in ZK (or to a mock reporter). It includes [probes for TCP ports, HTTP and rabbitMQ](https://github.com/airbnb/nerve/tree/master/lib/nerve/service_watcher). It's [a fairly simple liveness monitor](https://github.com/airbnb/nerve/blob/master/lib/nerve/service_watcher.rb).
+
+[**Synapse**](https://github.com/airbnb/synapse) takes the data and uses it to configure [HAProxy instances](http://haproxy.1wt.eu/). HAProxy handles the load balancing, queuing and integrating liveness probes into the queues. Synapse generates all the configuration files for an instance -but also tries to reconfigure the live instances via their socket APIs.
+
+Alongside these, AirBnB have another published project on Github, [Optica](https://github.com/airbnb/optica), which is a web application for nodes to register themselves with (POST) and for others to query. It publishes events to RabbitMQ, and again uses ZK to store state.
+
+AirBnB do complain a bit about ZK and its brittleness. They do mention that they suspect it is due to bugs in the Ruby ZK client library. This may be exacerbated by in-cloud deployments. Hard-coding the list of ZK nodes may work for a physical cluster, but in a virtualized cluster, the hostnames/IP Addresses of those nodes may change -leading to a meta-discovery problem: how to find the ZK quorum -especially if you can't control the DNS servers.
+
+## [Apache Directory](http://directory.apache.org/apacheds/)
+
+This is an embeddable LDAP server
+
+* Embeddable inside Java apps
+
+* Supports Kerberos alongside X.500 auth. It can actually act as a Key server and TGT if desired.
+
+* Supports DNS and DHCP queries.
+
+* Accessible via classic LDAP APIs.
+
+This isn't a registry service directly, though LDAP queries do make enumeration of services *and configuration data* straightforward. As LDAP libraries are common across languages -even built in to the Java runtime- LDAP support makes publishing information to arbitrary clients relatively straightforward.
+
+If service information were to be published via LDAP, then it should allow IT-managed LDAP services to both host this information, and publish configuration data. This would be relevant for classic Hadoop applications if we were to move the Configuration class to support back-end configuration sources beyond XML files on the classpath.
+
+# Proposal
diff --git a/src/site/markdown/registry/index.md b/src/site/markdown/registry/index.md
new file mode 100644
index 0000000..8131fd4
--- /dev/null
+++ b/src/site/markdown/registry/index.md
@@ -0,0 +1,47 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+  
+# Service Registry
+
+The service registry is a core part of the Slider Architecture -it is how
+dynamically generated configurations are published for clients to pick up.
+
+The need for a service registry goes beyond Slider, however. We effectively
+have application-specific registries for HBase and Accumulo, and explicit
+registries in Apache Helix and Apache Twill, as well as re-usable registry
+code in Apache Curator.
+
+[YARN-913](https://issues.apache.org/jira/browse/YARN-913) covers the need
+for YARN itself to have a service registry. This would be the ideal ultimate
+solution -it would operate at a fixed location/ZK path, and would be guaranteed
+to be on all YARN clusters, so code could be written expecting it to be there.
+
+It could also be used to publish binding data from static applications,
+including HBase, Accumulo and Oozie -applications deployed by management tools.
+Unless/until these applications self-publish their binding data, it would
+be the duty of the management tools to do the registration.
+
+
+
+## Contents
+
+1. [YARN Application Registration and Binding: the Problem](the_YARN_application_registration_and_binding_problem.html)
+1. [A YARN Service Registry](a_YARN_service_registry.html)
+1. [April 2014 Initial Registry Design](initial_registry_design.html)
+1. [Service Registry End-to-End Scenarios](service_registry_end_to_end_scenario.html)
+1. [P2P Service Registries](p2p_service_registries.html)
+1. [References](references.html)
\ No newline at end of file
diff --git a/src/site/markdown/registry/initial_registry_design.md b/src/site/markdown/registry/initial_registry_design.md
new file mode 100644
index 0000000..a816d84
--- /dev/null
+++ b/src/site/markdown/registry/initial_registry_design.md
@@ -0,0 +1,110 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+# April 2014 Initial Registry Design
+
+This is the plan for the initial registry design.
+
+1. Use Apache Curator [service discovery code](http://curator.apache.org/curator-x-discovery/index.html). 
+
+2. AMs to register as (user, name). Maybe "service type" if we add that as an option in the slider configs
+
+3. Lift "external view" term from Helix -concept that this is the public view, not internal.
+
+4. application/properties section to list app-wide values
+
+5. application/services section to list public service URLs; publish each as unique-ID -> (human name, URL, human text). Code can resolve from the unique ID; UIs can use the human-readable data.
+
+6. String Template 2 templates for generation of output (rationale: the library is available for Python, Java and .NET)
+
+7. Java CLI to retrieve values from ZK and apply named template (local, hdfs). Include ability to restrict to list of named properties (pattern match).
+
+8. AM to serve up curator service (later -host in RM? elsewhere?)
+
+### Forwards compatibility
+
+1. This initial design will hide the fact that Apache Curator is being used to discover services,
+by storing information in the payload class, `ServiceInstanceData`, rather than in the (minimal) Curator
+service entries themselves. If we move to an alternate registry, provided we
+can use the same datatype -or map to it- changes should not be visible.
+
+1. The first implementation will not support watching for changes.
+
+### Initial templates 
+
+* hadoop XML conf files
+
+* Java properties file
+
+* HTML listing of services
+
+
+
+## Example Curator Service Entry
+
+This is the prototype's content.
+
+Top-level entry:
+
+    service CuratorServiceInstance{name='slider', id='stevel.test_registry_am', address='192.168.1.101', port=62552, sslPort=null, payload=org.apache.slider.core.registry.info.ServiceInstanceData@4e9af21b, registrationTimeUTC=1397574073203, serviceType=DYNAMIC, uriSpec=org.apache.curator.x.discovery.UriSpec@ef8dacf0} 
+
+Slider payload.
+
+    payload=
+    {
+      "internalView" : {
+        "endpoints" : {
+          "/agents" : {
+            "value" : "http://stevel-8.local:62552/ws/v1/slider/agents",
+            "protocol" : "http",
+            "type" : "url",
+            "description" : "Agent API"
+          }
+        },
+        "settings" : { }
+      },
+    
+      "externalView" : {
+        "endpoints" : {
+          "/mgmt" : {
+            "value" : "http://stevel-8.local:62552/ws/v1/slider/mgmt",
+            "protocol" : "http",
+            "type" : "url",
+            "description" : "Management API"
+          },
+    
+          "slider/IPC" : {
+            "value" : "stevel-8.local/192.168.1.101:62550",
+            "protocol" : "org.apache.hadoop.ipc.Protobuf",
+            "type" : "address",
+            "description" : "Slider AM RPC"
+          },
+          "registry" : {
+            "value" : "http://stevel-8.local:62552/ws/registry",
+            "protocol" : "http",
+            "type" : "url",
+            "description" : "Registry"
+          }
+        },
+        "settings" : { }
+      }
+    }
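+
+As a rough sketch of how such an entry could be written, the Curator `curator-x-discovery` API can be used along the following lines. The ZK connection string, the base path and the no-argument `ServiceInstanceData` constructor are illustrative assumptions, not the actual Slider code.
+
+    import org.apache.curator.framework.CuratorFramework;
+    import org.apache.curator.framework.CuratorFrameworkFactory;
+    import org.apache.curator.retry.ExponentialBackoffRetry;
+    import org.apache.curator.x.discovery.ServiceDiscovery;
+    import org.apache.curator.x.discovery.ServiceDiscoveryBuilder;
+    import org.apache.curator.x.discovery.ServiceInstance;
+    import org.apache.curator.x.discovery.details.JsonInstanceSerializer;
+    import org.apache.slider.core.registry.info.ServiceInstanceData;
+
+    public class RegistrySketch {
+      public static void main(String[] args) throws Exception {
+        // ZK quorum and registry base path are illustrative only.
+        CuratorFramework curator = CuratorFrameworkFactory.newClient(
+            "zk1:2181", new ExponentialBackoffRetry(1000, 3));
+        curator.start();
+
+        // Payload carrying the internal/external views shown above;
+        // how it is populated is Slider-specific and elided here.
+        ServiceInstanceData data = new ServiceInstanceData();
+
+        ServiceInstance<ServiceInstanceData> instance =
+            ServiceInstance.<ServiceInstanceData>builder()
+                .name("slider")
+                .id("stevel.test_registry_am")
+                .address("192.168.1.101")
+                .port(62552)
+                .payload(data)
+                .build();
+
+        ServiceDiscovery<ServiceInstanceData> discovery =
+            ServiceDiscoveryBuilder.builder(ServiceInstanceData.class)
+                .client(curator)
+                .basePath("/registry")
+                .serializer(new JsonInstanceSerializer<>(ServiceInstanceData.class))
+                .build();
+        discovery.start();
+        discovery.registerService(instance);
+      }
+    }
+
+A client can then locate instances with `discovery.queryForInstances("slider")` and read the payload back -roughly the lookup path the Java CLI described above could take.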
+
+ 
+
+   
+
diff --git a/src/site/markdown/registry/p2p_service_registries.md b/src/site/markdown/registry/p2p_service_registries.md
new file mode 100644
index 0000000..a30698f
--- /dev/null
+++ b/src/site/markdown/registry/p2p_service_registries.md
@@ -0,0 +1,96 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+  
+   http://www.apache.org/licenses/LICENSE-2.0
+  
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+  
+# P2P Service Registries
+
+Alongside the centralized service registries, there's much prior work on P2P discovery systems, especially for mobile and consumer devices.
+
+They perform some multicast- or distributed hash table-based lookup, and tend to have common limitations:
+
+* scalability
+
+* the bootstrapping problem
+
+* security: can you trust the results to be honest?
+
+* consistency: can you trust the results to be complete and current?
+
+Bootstrapping is usually done via multicast, possibly then switching to unicast for better scale. As multicasting doesn't work in cloud infrastructures, none of the services work unmodified in public clouds. There are multiple anecdotes of [Amazon's SimpleDB service](http://aws.amazon.com/simpledb/) being used as a registry for in-EC2 applications. At the very least, this service and its equivalents in other cloud providers could be used to bootstrap ZK client bindings in cloud environments. 
+
+## Service Location Protocol 
+
+Service Location Protocol is a protocol for discovering services that came out of Sun, Novell and others -it is still used for printer discovery and the like.
+
+It supports both a multicast discovery mechanism, and a unicast protocol to talk to a Directory Agent -an agent that is itself discovered by multicast requests, or by listening for the agent's intermittent multicast announcements.
+
+There's an extension to DHCP, RFC2610, which added the ability for DHCP to advertise Directory Agents -this was designed to solve the bootstrap problem (though not necessarily security or in-cloud deployment). Apart from a few mentions in Windows Server technical notes, it does not appear to exist.
+
+* [[RFC2608](http://www.ietf.org/rfc/rfc2608.txt)] *Service Location Protocol, Version 2* , IEEE, 1999
+
+* [[RFC3224](http://www.ietf.org/rfc/rfc3224.txt)] *Vendor Extensions for Service Location Protocol, Version 2*, IETF, 2003
+
+* [[RFC2610](http://www.ietf.org/rfc/rfc2610.txt)] *DHCP Options for Service Location Protocol*, IETF, 1999
+
+## [Zeroconf](http://www.zeroconf.org/)
+
+The multicast discovery service implemented in Apple's Bonjour system -multicasting DNS lookups to all peers in the subnet.
+
+This allows for URLs and hostnames to be dynamically positioned, with DNS domain searches allowing for enumeration of service groups. 
+
+This protocol scales very badly; the load on *every* client in the subnet is O(DNS-queries-across-subnet), hence implicitly `O(devices)*O(device-activity)`. 
+
+The special domains `_tcp`, `_udp` and below can also be served up via a normal DNS server.
+
+##  [Jini/Apache River](http://river.apache.org/doc/specs/html/lookup-spec.html)
+
+Attribute-driven service enumeration, which drives the Java-client-only model of downloading client-side code. There's no requirement for the remote services to be in Java, only that the drivers are.
+
+## [Serf](http://www.serfdom.io/)  
+
+This is a library that implements the [SWIM protocol](http://www.cs.cornell.edu/~asdas/research/dsn02-swim.pdf) to propagate information around a cluster. Apparently works in virtualized clusters too. It's already been used in a Flume-on-Hoya provider.
+
+## [Anubis](http://sourceforge.net/p/smartfrog/svn/HEAD/tree/trunk/core/components/anubis/)
+
+An HP Labs-built [High Availability tuple-space](http://sourceforge.net/p/smartfrog/svn/HEAD/tree/trunk/core/components/anubis/doc/HPL-2005-72.pdf?format=raw) in SmartFrog; used in production in some of HP's telco products. An agent publishes facts into the T-Space, and within one heartbeat all other agents have it. One heart-beat later, unless there's been a change in the membership, the publisher knows the others have it. One heartbeat later the agents know the publisher knows it, etc.
+
+Strengths: 
+
+* The shared knowledge mechanism permits reasoning and mathematical proofs.
+
+* Strict ordering between heartbeats implies an ordering in receipt. This is stronger than ZK's guarantees.
+
+* Lets you share a moderate amount of data (the longer the heartbeat interval, the more data you can publish).
+
+* Provided the JVM hosting the Anubis agent is also hosting the service, liveness is implicit.
+
+* Secure to the extent that it can be locked down to allow only nodes with mutual trust of HTTPS certificates to join the tuple-space.
+
+Weaknesses:
+
+* (Currently) bootstraps via multicast discovery.
+
+* Brittle to timing, especially on virtualized clusters where clocks are unpredictable.
+
+It proved good for workload sharing -tasks can be published to it, any agent can say "I'm working on it" and take up the work. If the process fails, the task becomes available again. We used this for distributed scheduling in a rendering farm.
+
+## [Carmen](http://www.hpl.hp.com/techreports/2002/HPL-2002-257)
+
+This was another HP Labs project, related to the Cooltown "ubiquitous computing" work, which was a decade too early to be relevant. It was also positioned by management as a B2B platform, so it ended up competing with -and losing against- WS-* and UDDI.
+
+Carmen aimed to provide service discovery with both fixed services, and with highly mobile client services that will roam around the network -they are assumed to be wireless devices.
+
+Services were published with, and searched for by, attributes; locality was considered a key attribute, with local instances of a service prioritized. Those services with a static location and a low rate of change became the stable caches of service information -becoming, as with Skype, "supernodes". 
+
+Bootstrapping the cluster relied on multicast, though alternatives based on DHCP and DNS were proposed.
+
diff --git a/src/site/markdown/registry/references.md b/src/site/markdown/registry/references.md
new file mode 100644
index 0000000..ade4f4f
--- /dev/null
+++ b/src/site/markdown/registry/references.md
@@ -0,0 +1,46 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+  
+   http://www.apache.org/licenses/LICENSE-2.0
+  
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+  
+# References
+
+Service registration and discovery is a problem in distributed computing that has been explored for over thirty years, with
+[Birrell81]'s *Grapevine* system being the first known implementation.
+
+## Papers
+
+* **[Birrell81]** Birrell, A. et al, [*Grapevine: An exercise in distributed computing*](http://research.microsoft.com/apps/pubs/default.aspx?id=63661). Comm. ACM 25, 4 (Apr 1982), pp260-274. 
+The first documented directory service; relied on service shutdown to resolve update operations.
+
+* **[Das02]** [*SWIM: Scalable Weakly-consistent Infection-style Process Group Membership Protocol*](http://www.cs.cornell.edu/~asdas/research/dsn02-swim.pdf)
+P2P gossip-style data sharing protocol with random liveness probes to address scalable liveness checking. Ceph uses similar liveness checking.
+
+* **[Marti02]** Marti S. and Krishnam V., [*Carmen: A Dynamic Service Discovery Architecture*](http://www.hpl.hp.com/techreports/2002/HPL-2002-257). HP Laboratories Technical Report HPL-2002-257, 2002.
+
+* **[Lampson86]** Lampson, B. [*Designing a Global Naming Service*](http://research.microsoft.com/en-us/um/people/blampson/36-GlobalNames/Acrobat.pdf). DEC. 
+Distributed; includes an update protocol and the ability to add links to other parts of the tree. Also refers to [*Xerox Clearinghouse*](http://bitsavers.informatik.uni-stuttgart.de/pdf/xerox/parc/techReports/OPD-T8103_The_Clearinghouse.pdf), which apparently shipped.
+
+* **[Mockapetris88]** Mockapetris, P. [*Development of the domain name system*](http://bnrg.eecs.berkeley.edu/~randy/Courses/CS268.F08/papers/31_dns.pdf). The history of DNS.
+
+* **[Schroeder84]** Schroeder, M.D. et al, [*Experience with Grapevine: The Growth of a Distributed System*](http://research.microsoft.com/apps/pubs/default.aspx?id=61509). Xerox.
+Writeup of the experiences of using Grapevine, with its eventual consistency and lack of idempotent message delivery called out -along with coverage of operational issues.
+
+* **[van Renesse08]**  van Renesse, R. et al, [*Astrolabe: A Robust and Scalable Technology For Distributed System Monitoring, Management, and Data Mining*](http://www.cs.cornell.edu/home/rvr/papers/astrolabe.pdf). ACM Transactions on Computer Systems
+Grandest P2P management framework to date; the work that earned Werner Vogels his CTO position at Amazon.
+ 
+* **[van Steen86]** van Steen, M. et al, [*A Scalable Location Service for Distributed Objects*](http://www.cs.vu.nl/~ast/publications/asci-1996a.pdf). 
+Vrije Universiteit, Amsterdam. Probably the first Object Request Broker.
+
+
+
+ 
\ No newline at end of file
diff --git a/src/site/markdown/registry/service_registry_end_to_end_scenario.md b/src/site/markdown/registry/service_registry_end_to_end_scenario.md
new file mode 100644
index 0000000..ebc32c9
--- /dev/null
+++ b/src/site/markdown/registry/service_registry_end_to_end_scenario.md
@@ -0,0 +1,156 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+# Service Registry End-to-End Scenarios
+
+## AM startup
+
+1. AM starts, reads in configuration, creates provider
+
+2. AM builds web site, involving the provider in the process (*there's a possible race condition here, due to the AM registration sequence*)
+
+3. AM registers self with RM, including web and IPC ports, and receives list of existing containers; container loss notifications come in asynchronously *(which is why the AM startup process is in a synchronized block)*
+
+4. AM inits its `ApplicationState` instance with the config, instance description and RM-supplied container list.
+
+5. AM creates service registry client using ZK quorum and path provided when AM was started
+
+6. AM registers standard endpoints: RPC, WebUI, REST APIs
+
+7. AM registers standard content it can serve (e.g `yarn-site.xml`)
+
+8. AM passes registry to provider in `bind()` operation.
+
+9. AM triggers review of application state, requesting/releasing nodes as appropriate
+
+## Agent Startup: standalone
+
+1. Container is issued to AM
+
+2. AM chooses a component and launches the agent on it -with the URL of the AM as a parameter (TODO: add registry bonding of ZK quorum and path)
+
+3. Agent starts up.
+
+4. Agent locates AM via URL/ZK info
+
+5. Agent heartbeats in with state
+
+6. AM gives agent next state command.
+
+## AM gets state from agent
+
+1. Agent heartbeats in
+
+2. AM decides if it wants to receive config 
+
+3. AM issues request for state information -all (dynamic) config data
+
+4. Agent receives it
+
+5. Agent returns all config state, including: hostnames, allocated ports, generated values (e.g. database connection strings, URLs) -as a two-level structure (allowing the agent to define which config options are relevant to which document)
+
+## AM saves state for serving
+
+1. AM saves state in RAM (assumptions: small, will rebuild on restart)
+
+2. AM updates service registry with list of content that can be served up and URLs to retrieve them.
+
+3. AM fields HTTP GET requests on content
+
+## AM Serves content
+
+A simple REST service serves up content on paths published to the service registry. It is also possible to enumerate the published documents via GET operations on parent paths.
+
+1. On GET command, AM locates referenced agent values
+
+2. AM builds up the response document from K-V pairs. This can be in a limited set of formats -Hadoop XML, Java properties, YAML, CSV, HTTP, JSON- chosen via a `type` query parameter. (This generation is done by template processing in the AM using the slider.core.template module.)
+
+3. The response is streamed with headers: `content-type`, `content-length`, do-not-cache-in-proxy and expires *(with the expiry date chosen as ??)*
+
+# Slider Client
+
+Currently the Slider client enumerates the YARN registry looking for Slider instances -including checking for running instances of the same application before launching a cluster. 
+
+This approach:
+
+* has race conditions
+* has scale limitations `O(apps-in-YARN-cluster)` + `O(completed-apps-in-RM-memory)`
+* only retrieves configuration information from slider-deployed application instances. *We do not need to restrict ourselves here.*
+
+## Slider Client lists applications
+
+    slider registry --list [--servicetype <application-type>]
+
+1. Client starts
+
+2. Client creates a service registry client using the ZK quorum and path provided in the client config properties (`slider-client.xml`)
+
+3. Client enumerates registered services and lists them
+
+## Slider Client lists content published by an application instance
+
+    slider registry <instance> --listconf  [--servicetype <application-type>]
+
+1. Client starts
+
+2. Client creates a service registry client using the ZK quorum and path provided in the client config properties (`slider-client.xml`)
+
+3. Client locates registered service entry -or fails
+
+4. Client retrieves service data, specifically the listing of published documents
+
+5. Client displays list of content
+
+## Slider Client retrieves content published by an application instance
+
+    slider registry <instance> --getconf <document> [--format (xml|properties|text|html|csv|yaml|json,...)] [--dest <file>] [--servicetype <application-type>]
+
+1. Client starts
+
+2. Client creates a service registry client using the ZK quorum and path provided in the client config properties (`slider-client.xml`)
+
+3. Client locates registered service entry -or fails
+
+4. Client retrieves service data, specifically the listing of published documents
+
+5. Client locates URL of content
+
+6. Client builds GET request including format
+
+7. Client executes command, follows redirects, validates content length against supplied data.
+
+8. Client prints response to console or saves to output file. This is the path specified as a destination, or, if that path refers to a directory, to
+a file underneath.
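+
+A rough sketch of steps 6-8 using `java.net.HttpURLConnection`; the URL and the destination file are placeholders for values obtained from the registry and the command line, not actual Slider endpoints.
+
+    import java.io.IOException;
+    import java.io.InputStream;
+    import java.net.HttpURLConnection;
+    import java.net.URL;
+    import java.nio.file.Files;
+    import java.nio.file.Path;
+    import java.nio.file.Paths;
+    import java.nio.file.StandardCopyOption;
+
+    public class GetConfSketch {
+      public static void main(String[] args) throws IOException {
+        // Placeholder URL: in practice this comes from the published document listing.
+        URL url = new URL("http://am-host.example.org:62552/ws/registry/hbase-site?format=xml");
+        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
+        conn.setInstanceFollowRedirects(true);
+
+        long expected = conn.getContentLengthLong();
+        Path dest = Paths.get("hbase-site.xml");
+        try (InputStream in = conn.getInputStream()) {
+          long copied = Files.copy(in, dest, StandardCopyOption.REPLACE_EXISTING);
+          // Validate the content length against the supplied data (step 7).
+          if (expected >= 0 && copied != expected) {
+            throw new IOException("Short read: got " + copied + " of " + expected + " bytes");
+          }
+        }
+      }
+    }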
+
+## Slider Client retrieves content set published by an application instance
+
+Here the set of documents published by an application instance is retrieved in the desired format.
+
+## Slider Client retrieves document and applies template to it
+
+Here a published document is retrieved and a template is applied to it to generate the final output.
+
+    slider registry <instance> --source <document> [--template <path-to-template>] [--outfile <file>]  [--servicetype <application-type>]
+
+1. The document is retrieved as before, using a simple format such as JSON.
+
+2. The document is parsed and converted back into K-V pairs
+
+3. A template, using a common/defined template library, is applied to the content, generating the final output.
+
+Template paths may include local filesystem paths or (somehow) something in a package file.
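+
+A possible shape for the template step, assuming the StringTemplate library called out in the initial registry design (here the v4 Java API); the recovered K-V pairs and the template text are invented for illustration.
+
+    import java.util.HashMap;
+    import java.util.Map;
+    import org.stringtemplate.v4.ST;
+
+    public class TemplateSketch {
+      public static void main(String[] args) {
+        // K-V pairs recovered from the retrieved document (illustrative values).
+        Map<String, String> props = new HashMap<>();
+        props.put("hbase.master.info.port", "60010");
+        props.put("hbase.zookeeper.quorum", "zk1,zk2,zk3");
+
+        // A trivial template emitting a Java properties file.
+        ST template = new ST(
+            "hbase.master.info.port=<port>\nhbase.zookeeper.quorum=<quorum>\n");
+        template.add("port", props.get("hbase.master.info.port"));
+        template.add("quorum", props.get("hbase.zookeeper.quorum"));
+        System.out.print(template.render());
+      }
+    }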
+
diff --git a/src/site/markdown/registry/the_YARN_application_registration_and_binding_problem.md b/src/site/markdown/registry/the_YARN_application_registration_and_binding_problem.md
new file mode 100644
index 0000000..805e69b
--- /dev/null
+++ b/src/site/markdown/registry/the_YARN_application_registration_and_binding_problem.md
@@ -0,0 +1,145 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+# YARN Application Registration and Binding: the Problem
+
+## March 2014
+
+# How to bind client applications to the services of dynamically placed applications?
+
+
+There are some constraints here. First, the clients may be running outside the cluster -potentially over long-haul links. In addition:
+
+1. The location of an application deployed in a YARN cluster cannot be predicted.
+
+2. The ports used for application service endpoints cannot be hard-coded or predicted. (Alternatively: if they are hard-coded, then socket-in-use exceptions may occur.)
+
+3. As components fail and get re-instantiated, their location may change. The rate of this depends on cluster and application stability; the longer-lived the application, the more common it is.
+
+Existing Hadoop client apps have a configuration problem of their own: how are the settings in files such as `yarn-site.xml` picked up by today's applications? This is an issue which has historically been out of scope for Hadoop clusters -but if we are looking at registration and binding of YARN applications, there should be no reason why
+static applications cannot be discovered and bound to using the same mechanisms. 
+
+# Other constraints
+
+1. Reduce the amount of change needed in existing applications to a minimum -ideally none, though some pre-launch setup may be acceptable.
+
+2. Prevent malicious applications from registering service endpoints.
+
+3. Scale with the # of applications and # of clients; must not overload the registry during a cluster partition.
+
+4. Offer a design that works with apps that are deployed in a YARN cluster outside of Slider. Rationale: we want a mechanism that works with pure-YARN apps.
+
+## Possible Solutions:
+
+### ZK
+
+Client applications use ZK to find services (addresses #1, #2 and #3). Requires location code in the client.
+
+HBase and Accumulo do this as part of a failover-ready design.
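+
+A sketch of what that client-side location code can look like against the raw ZooKeeper API; the quorum and znode path are invented for illustration, and real applications such as HBase wrap this in retry and watch logic.
+
+    import java.nio.charset.StandardCharsets;
+    import org.apache.zookeeper.ZooKeeper;
+
+    public class ZkLookupSketch {
+      public static void main(String[] args) throws Exception {
+        // Quorum and path are illustrative; each application defines its own layout.
+        ZooKeeper zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181", 15000, event -> { });
+        byte[] data = zk.getData("/services/myapp/instances/instance-0001", false, null);
+        System.out.println(new String(data, StandardCharsets.UTF_8));
+        zk.close();
+      }
+    }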
+
+### DNS
+
+Client apps use DNS to find services, with a custom DNS server for a subdomain representing YARN services. Addresses #1; with a shortened TTL and no DNS address caching, #3. #2 is addressed only if other DNS entries are used to publish service entries. 
+
+Should support existing applications, with a configuration that is stable over time. It does require the clients to not cache DNS addresses forever (this must be explicitly set on Java applications,
+irrespective of the published TTL). It generates a load on the DNS servers that is `O(clients/TTL)`.
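+
+For the JVM-side caveat, the DNS cache lifetime can be capped before the first lookup takes place; a minimal sketch:
+
+    import java.security.Security;
+
+    public class DnsCacheSetup {
+      public static void main(String[] args) {
+        // Must run before the first name lookup; caps positive DNS caching at 30s
+        // so that re-resolved service addresses are picked up reasonably quickly.
+        Security.setProperty("networkaddress.cache.ttl", "30");
+        // Optionally keep failed lookups from being cached for long either.
+        Security.setProperty("networkaddress.cache.negative.ttl", "5");
+      }
+    }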
+
+Google Chubby offers a DNS service to handle this. ZK does not -yet.
+
+### Floating IP Addresses
+
+If the clients know/cache IP addresses of services, these addresses could be floated across service instances. Linux HA has floating IP address support, while Docker containers can make use of them, especially if an integrated DHCP server handles the assignment of IP addresses to specific containers. 
+
+ARP caching is the inevitable problem here, but it is still less brittle than relying on applications to know not to cache IP addresses -nor does it place as much load on DNS servers as short-TTL DNS entries do.
+
+### LDAP
+
+Enterprise Directory services are used to publish/locate services. Requires lookup into the directory on binding (#1, #2), and re-lookup on failure (#3). LDAP permissions can prevent untrusted applications from registering.
+
+* Works well with Windows registries.
+
+* Less common Java-side, though possible -and implemented in the core Java libraries. Spring-LDAP is focused on connection to an LDAP server -not LDAP-driven application config.
+
+### Registration Web Service
+
+Custom web service registration services could be used.
+
+* The sole WS-* one, UDDI, does not have a REST equivalent -DNS is assumed to take on that role.
+
+* Requires new client-side code anyway.
+
+### Zookeeper URL Schema
+
+Offer our own `zk://` URL scheme; Java & .NET implementations (others?) to resolve it, plus browser plugins. 
+
+* Would address requirements #1 & #3
+
+* Cost: non-standard; needs an extension for every application/platform, and will not work with tools such as curl or web browsers.
+
+### AM-side config generation
+
+App-side config generation: YARN applications generate client-side configuration files containing launch-time information (#1, #2). The AM can dynamically create these, and as the storage load is all in the AM, this does not consume as many resources in a central server as publishing everything to that central server would.
+
+* Requires the application to know of the client-side applications to support -and to be able to generate their configuration information (i.e. formatted files).
+
+* Requires the AM to get all information from deployed application components needed to generate bindings. Unless the AM can resolve YARN App templates, need a way to get one of the components in the app to generate settings for the entire cluster, and push them back.
+
+* Needs to be repeated for all YARN apps, however deployed.
+
+* Needs something similar for statically deployed applications.
+
+
+### Client-side config generation
+
+YARN app to publish attributes as key-value pairs; client-side code to read them and generate configs (#1, #2) -see the sketch after this list. Example configuration generators could create Hadoop-client XML, Spring, Tomcat or Guice configs, or something for .NET.
+
+* Not limited to Hoya application deployments only.
+
+* K-V pairs need to be published "somewhere". A structured section in the ZK tree per app is the obvious location -though potentially expensive. An alternative is AM-published data.
+
+* Needs client-side code capable of extracting information from YARN cluster to generate client-specific configuration.
+
+* Assumes (key, value) pairs sufficient for client config generation. Again, some template expansion will aid here (this time: client-side interpretation).
+
+* Client config generators need to find and bind to the target application themselves.
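+
+As one concrete possibility for the Hadoop-client XML case, the stock `Configuration` class can already write an XML file from a set of key-value pairs; a minimal sketch, with illustrative property names:
+
+    import java.io.FileOutputStream;
+    import java.io.OutputStream;
+    import java.util.Map;
+    import java.util.TreeMap;
+    import org.apache.hadoop.conf.Configuration;
+
+    public class ClientConfGenerator {
+      public static void main(String[] args) throws Exception {
+        // K-V pairs as they might have been read from the registry (illustrative).
+        Map<String, String> published = new TreeMap<>();
+        published.put("hbase.zookeeper.quorum", "zk1,zk2,zk3");
+        published.put("hbase.master.info.port", "60010");
+
+        // Start from an empty Configuration so only the published values appear.
+        Configuration conf = new Configuration(false);
+        for (Map.Entry<String, String> e : published.entrySet()) {
+          conf.set(e.getKey(), e.getValue());
+        }
+        try (OutputStream out = new FileOutputStream("hbase-site.xml")) {
+          conf.writeXml(out);
+        }
+      }
+    }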
+
+ 
+
+Multiple options:
+
+* Standard ZK structure for YARN applications (maybe: YARN itself to register paths in ZK & set up child permissions, thus enforcing security).
+
+* Agents to push to ZK dynamic information as K-V pairs
+
+* Agent provider on AM to fetch K-V pairs and include in status requests
+
+* CLI to fetch app config keys, echo out responses (needs client log4j settings to log all slf/log4j output to stderr, so that `app > results.txt` would save the results explicitly).
+
+*  Client-side code per app to generate specific binding information.
+
+### Load-balancer YARN app
+
+Spread requests around a set of registered handlers, e.g. web servers. Support plugins for session binding/sharding. 
+
+Some web servers can do this already; a custom YARN app could use embedded Grizzly. The binding problem still exists, but this would support scalable dispatch of values.
+
+*  Could be offered as an AM extension (in provider, ...): scales well with the # of apps in the cluster, but adds initial location/failover problems.
+
+* If offered as a core-YARN service, location is handled via a fixed URL. This could place a high load on the service, even with just 302 redirects.
+
diff --git a/src/site/markdown/security.md b/src/site/markdown/security.md
index ddc51b6..51665f7 100644
--- a/src/site/markdown/security.md
+++ b/src/site/markdown/security.md
@@ -50,17 +50,17 @@
   as the user.
 
 
-## Requirements
+## Security Requirements
 
 
 ### Needs
-*  Slider and HBase to work against secure HDFS
+*  Slider and deployed applications to work against secure HDFS
 *  Slider to work with secure YARN.
-*  Slider to start a secure HBase cluster
+*  Slider to start secure applications
 *  Kerberos and ActiveDirectory to perform the authentication.
 *  Slider to only allow cluster operations by authenticated users -command line and direct RPC. 
 *  Any Slider Web UI and REST API for Ambari to only allow access to authenticated users.
-*  The Slider database in ~/.slider/clusters/$name/data to be writable by HBase
+*  The Slider database in `~/.slider/clusters/$name/data` to be writable by HBase
 
 
 ### Short-lived Clusters
@@ -138,15 +138,15 @@
 This can be done in `slider-client.xml`:
 
 
-  <property>
-    <name>hadoop.security.authorization</name>
-    <value>true</value>
-  </property>
-
-  <property>
-    <name>hadoop.security.authentication</name>
-    <value>kerberos</value>
-  </property>
+    <property>
+      <name>hadoop.security.authorization</name>
+      <value>true</value>
+    </property>
+    
+    <property>
+      <name>hadoop.security.authentication</name>
+      <value>kerberos</value>
+    </property>
 
 
 Or it can be done on the command line
@@ -165,7 +165,7 @@
 
 The realm and controller can be defined in the Java system properties
 `java.security.krb5.realm` and `java.security.krb5.kdc`. These can be fixed
-in the JVM options, as described in the [Client Configuration] (slider-client-configuration.html)
+in the JVM options, as described in the [Client Configuration](client-configuration.html)
 documentation.
 
 They can also be set on the Slider command line itself, using the `-S` parameter.
diff --git a/src/site/markdown/slider_specs/app_developer_guideline.md b/src/site/markdown/slider_specs/app_developer_guideline.md
deleted file mode 100644
index 3232b58..0000000
--- a/src/site/markdown/slider_specs/app_developer_guideline.md
+++ /dev/null
@@ -1,140 +0,0 @@
-<!---
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-
-# Slider's needs of an application
- 
-Slider installs and runs applications in a YARN cluster -applications that
-do not need to be written for YARN. 
-
-What they do need to be is deployable by Slider, which means installable by YARN,
-configurable by Slider, and, finally, executable by YARN. YARN will kill the
-executed process when destroying a container, so the deployed application
-must expect this to happen and be able to start up from a kill-initiated
-shutdown without any manual recovery process.
-
-They need to locate each other dynamically, both at startup and during execution,
-because the location of processes will be unknown at startup, and may change
-due to server and process failures. 
- 
-## Must
-
-* Install and run from a tarball -and be run from a user that is not root. 
-
-* Be self contained or have all dependencies pre-installed.
-
-* Support dynamic discovery of nodes -such as via ZK.
- 
-* Nodes to rebind themselves dynamically -so if nodes are moved, the application
-can continue
-
-* Handle kill as a normal shutdown mechanism.
-
-* Support multiple instances of the application running in the same cluster,
-  with processes from different application instances sharing
-the same servers.
-
-* Operate correctly when more than one role instance in the application is
-deployed on the same physical host. (If YARN adds anti-affinity options in 
-container requests this will no longer be a requirement)
-
-* Dynamically allocate any RPC or web ports -such as supporting 0 as the number
-of the port to listen on  in configuration options.
-
-* Be trusted. YARN does not run code in a sandbox.
-
-* If it talks to HDFS or other parts of Hadoop, be built against/ship with
-libaries compatible with the version of Hadoop running on the cluster.
-
-* Store persistent data in HDFS (directly or indirectly) with the exact storage location
-configurable. Specifically: not to the local filesystem, and not in a hard coded location
-such as `hdfs://app/data`. Slider creates per-Slider application directories for
-persistent data.
-
-* Be configurable as to where any configuration directory is (or simply relative
-to the tarball). The application must not require it to be in a hard-coded
-location such as `/etc`.
-
-* Not have a fixed location for log output -such as `/var/log/something`
-
-* Run until explicitly terminated. Slider treats an application termination
-(which triggers a container release) as a failure -and reacts to it by restarting
-the container.
-
-
-
-## MUST NOT
-
-* Require human intervention at startup or termination.
-
-## SHOULD
-
-These are the features that we'd like from a service:
-
-* Publish the actual RPC and HTTP ports in a way that can be picked up, such as via ZK
-or an admin API.
-
-* Be configurable via the standard Hadoop mechanisms: text files and XML configuration files.
-If not, custom parsers/configuration generators will be required.
-
-* Support an explicit parameter to define the configuration directory.
-
-* Take late bindings params via -D args or similar
-
-* Be possible to exec without running a complex script, so that process inheritance works everywhere, including (for testing) OS/X
-
-* Provide a way for Slider to get list of nodes in cluster and status. This will let Slider detect failed worker nodes and react to it.
-
-* FUTURE: If a graceful decommissioning is preferred, have an RPC method that a Slider provider can call to invoke this.
-
-* Be location aware from startup. Example: worker nodes to be allocated tables to serve based on which tables are
-stored locally/in-rack, rather than just randomly. This will accelerate startup time.
-
-* Support simple liveness probes (such as an HTTP GET operations).
-
-* Return a well documented set of exit codes, so that failures can be propagated
-  and understood.
-
-* Support cluster size flexing: the dynamic addition and removal of nodes.
-
-
-* Support a management platform such as Apache Ambari -so that the operational
-state of a Slider application can be monitored.
-
-## MAY
-
-* Include a single process that will run at a fixed location and whose termination
-can trigger application termination. Such a process will be executed
-in the same container as the Slider AM, and so known before all other containers
-are requested. If a live cluster is unable to handle restart/migration of 
-such a process, then the Slider application will be unable to handle
-Slider AM restarts.
-
-* Ideally: report on load/cost of decommissioning.
-  E.g amount of data; app load. 
-  
-
-## MAY NOT
-
-* Be written for YARN.
-
-* Be (pure) Java. If the tarball contains native binaries for the cluster's hardware & OS,
-  they should be executable.
-
-* Be dynamically reconfigurable, except for the special requirement of handling
-movement of manager/peer containers in an application-specific manner.
-
-
diff --git a/src/site/markdown/slider_specs/application_configuration.md b/src/site/markdown/slider_specs/application_configuration.md
index 55b78ae..4e16869 100644
--- a/src/site/markdown/slider_specs/application_configuration.md
+++ b/src/site/markdown/slider_specs/application_configuration.md
@@ -15,7 +15,7 @@
    limitations under the License.
 -->
 
-#Application Configuration
+# Application Configuration
 
 App Configuration captures the default configuration associated with the application. *Details of configuration management is discussed in a separate spec*. The default configuration is modified based on user provided InstanceConfiguration, cluster specific details (e.g. HDFS root, local dir root), container allocated resources (port and hostname), and dependencies (e.g. ZK quorom hosts) and handed to the component instances.
 
@@ -27,47 +27,47 @@
 
 A config file is of the form:
 
-```
-<?xml version="1.0"?>
-<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
-<configuration>
-  <property>
-  ...
-  </property>
-</configuration>
-```
+
+    <?xml version="1.0"?>
+    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+    <configuration>
+      <property>
+      ...
+      </property>
+    </configuration>
+
 
 
 Each configuration property is specified as follows:
 
-```
-<property>
-    <name>storm.zookeeper.session.timeout</name>
-    <value>20000</value>
-    <description>The session timeout for clients to ZooKeeper.</description>
-    <required>false</required>
-    <valueRestriction>0-30000</valueRestriction>
-  </property>
-  <property>
-    <name>storm.zookeeper.root</name>
-    <value>/storm</value>
-    <description>The root location at which Storm stores data in ZK.</description>
-    <required>true</required>
-  </property>
-  <property>
-    <name>jvm.heapsize</name>
-    <value>256</value>
-    <description>The default JVM heap size for any component instance.</description>
-    <required>true</required>
-  </property>
-  <property>
-    <name>nimbus.host</name>
-    <value>localhost</value>
-    <description>The host that the master server is running on.</description>
-    <required>true</required>
-    <clientVisible>true</clientVisible>
-  </property>
-  ```
+
+    <property>
+        <name>storm.zookeeper.session.timeout</name>
+        <value>20000</value>
+        <description>The session timeout for clients to ZooKeeper.</description>
+        <required>false</required>
+        <valueRestriction>0-30000</valueRestriction>
+      </property>
+      <property>
+        <name>storm.zookeeper.root</name>
+        <value>/storm</value>
+        <description>The root location at which Storm stores data in ZK.</description>
+        <required>true</required>
+      </property>
+      <property>
+        <name>jvm.heapsize</name>
+        <value>256</value>
+        <description>The default JVM heap size for any component instance.</description>
+        <required>true</required>
+      </property>
+      <property>
+        <name>nimbus.host</name>
+        <value>localhost</value>
+        <description>The host that the master server is running on.</description>
+        <required>true</required>
+        <clientVisible>true</clientVisible>
+      </property>
+      
 
 
 * name: name of the parameter
diff --git a/src/site/markdown/slider_specs/application_definition.md b/src/site/markdown/slider_specs/application_definition.md
index e6bd510..da745e6 100644
--- a/src/site/markdown/slider_specs/application_definition.md
+++ b/src/site/markdown/slider_specs/application_definition.md
@@ -85,87 +85,87 @@
 
 * **requirement**: a set of requirements that lets Slider know what properties are required by the app command scripts
 
-```
-  <metainfo>
-    <schemaVersion>2.0</schemaVersion>
-    <application>
-      <name>HBASE</name>
-      <version>0.96.0.2.1.1</version>
-      <type>YARN-APP</type>
-      <minHadoopVersion>2.1.0</minHadoopVersion>
-      <components>
-        <component>
-          <name>HBASE_MASTER</name>
-          <category>MASTER</category>
-          <minInstanceCount>1</minInstanceCount>
-          <maxInstanceCount>2</maxInstanceCount>
-          <commandScript>
-            <script>scripts/hbase_master.py</script>
-            <scriptType>PYTHON</scriptType>
-            <timeout>600</timeout>
-          </commandScript>
-          <customCommands>
-            <customCommand>
-              <name>GRACEFUL_STOP</name>
+
+      <metainfo>
+        <schemaVersion>2.0</schemaVersion>
+        <application>
+          <name>HBASE</name>
+          <version>0.96.0.2.1.1</version>
+          <type>YARN-APP</type>
+          <minHadoopVersion>2.1.0</minHadoopVersion>
+          <components>
+            <component>
+              <name>HBASE_MASTER</name>
+              <category>MASTER</category>
+              <minInstanceCount>1</minInstanceCount>
+              <maxInstanceCount>2</maxInstanceCount>
               <commandScript>
                 <script>scripts/hbase_master.py</script>
                 <scriptType>PYTHON</scriptType>
-                <timeout>1800</timeout>
+                <timeout>600</timeout>
               </commandScript>
-          </customCommand>
-        </customCommands>
-        </component>
+              <customCommands>
+                <customCommand>
+                  <name>GRACEFUL_STOP</name>
+                  <commandScript>
+                    <script>scripts/hbase_master.py</script>
+                    <scriptType>PYTHON</scriptType>
+                    <timeout>1800</timeout>
+                  </commandScript>
+              </customCommand>
+            </customCommands>
+            </component>
+    
+            <component>
+              <name>HBASE_REGIONSERVER</name>
+              <category>SLAVE</category>
+              <minInstanceCount>1</minInstanceCount>
+              ...
+            </component>
+    
+            <component>
+              <name>HBASE_CLIENT</name>
+              <category>CLIENT</category>
+              ...
+          </components>
+    
+          <osSpecifics>
+            <osSpecific>
+              <osType>any</osType>
+              <packages>
+                <package>
+                  <type>tarball</type>
+                  <name>hbase-0.96.1-tar.gz</name>
+                  <location>package/files</location>
+                </package>
+              </packages>
+            </osSpecific>
+          </osSpecifics>
+    
+          <commandScript>
+            <script>scripts/app_health_check.py</script>
+            <scriptType>PYTHON</scriptType>
+            <timeout>300</timeout>
+          </commandScript>
+    
+          <dependencies>
+            <dependency>
+              <name>ZOOKEEPER</name>
+              <scope>cluster</scope>
+              <requirement>client,zk_quorom_hosts</requirement>
+            </dependency>
+          </dependencies>
+    
+        </application>
+      </metainfo>
 
-        <component>
-          <name>HBASE_REGIONSERVER</name>
-          <category>SLAVE</category>
-          <minInstanceCount>1</minInstanceCount>
-          ...
-        </component>
-
-        <component>
-          <name>HBASE_CLIENT</name>
-          <category>CLIENT</category>
-          ...
-      </components>
-
-      <osSpecifics>
-        <osSpecific>
-          <osType>any</osType>
-          <packages>
-            <package>
-              <type>tarball</type>
-              <name>hbase-0.96.1-tar.gz</name>
-              <location>package/files</location>
-            </package>
-          </packages>
-        </osSpecific>
-      </osSpecifics>
-
-      <commandScript>
-        <script>scripts/app_health_check.py</script>
-        <scriptType>PYTHON</scriptType>
-        <timeout>300</timeout>
-      </commandScript>
-
-      <dependencies>
-        <dependency>
-          <name>ZOOKEEPER</name>
-          <scope>cluster</scope>
-          <requirement>client,zk_quorom_hosts</requirement>
-        </dependency>
-      </dependencies>
-
-    </application>
-  </metainfo>
-```
 
 
 ## Open Questions
 
 1. Applications may need some information from other applications or base services such as ZK, YARN, HDFS. Additionally, they may need a dedicated ZK node, a HDFS working folder, etc. How do we capture this requirement? There needs to be a well-known way to ask for these information e.g. fs.default.name, zk_hosts.
 
-2. Similar to the above there are common parameters such as JAVA_HOME and other environment variables. Application should be able to refer to these parameters and Slider should be able to provide them.
+2. Similar to the above, there are common parameters such as `JAVA_HOME` and other environment variables. Applications should be able to refer to these parameters and Slider should be able to provide them.
 
 3. Composite application definition: Composite application definition would require a spec that refers to this spec and binds multiple applications together.
 
diff --git a/src/site/markdown/slider_specs/application_instance_configuration.md b/src/site/markdown/slider_specs/application_instance_configuration.md
index 6ccdece..dad7f4e 100644
--- a/src/site/markdown/slider_specs/application_instance_configuration.md
+++ b/src/site/markdown/slider_specs/application_instance_configuration.md
@@ -21,18 +21,18 @@
 
 Instance configuration is a JSON formatted doc in the following form:
 
-```
-{
-    "configurations": {
-        "app-global-config": {
-        },
-        "config-type-1": {
-        },
-        "config-type-2": {
-        },
+
+    {
+        "configurations": {
+            "app-global-config": {
+            },
+            "config-type-1": {
+            },
+            "config-type-2": {
+            },
+        }
     }
-}
-```
+
 
 
 The configuration overrides are organized in a two level structure where name-value pairs are grouped on the basis of config types they belong to. App instantiator can provide arbitrary custom name-value pairs within a config type defined in the AppPackage or can create a completely new config type that does not exist in the AppAPackage. The interpretation of the configuration is entirely up to the command implementations present in the AppPackage. Slider will simply merge the configs with the InstanceConfiguration being higher priority than that default configuration and hand it off to the app commands.
@@ -40,21 +40,21 @@
 A sample config for hbase may be as follows:
 
 
-```
-{
-    "configurations": {
-        "hbase-log4j": {
-            "log4j.logger.org.apache.zookeeper": "INFO",
-            "log4j.logger.org.apache.hadoop.hbase": "DEBUG"
-        },
-        "hbase-site": {
-            "hbase.hstore.flush.retries.number": "120",
-            "hbase.regionserver.info.port": "",
-            "hbase.master.info.port": "60010"
-        }
-}
-```
+
+    {
+        "configurations": {
+            "hbase-log4j": {
+                "log4j.logger.org.apache.zookeeper": "INFO",
+                "log4j.logger.org.apache.hadoop.hbase": "DEBUG"
+            },
+            "hbase-site": {
+                "hbase.hstore.flush.retries.number": "120",
+                "hbase.regionserver.info.port": "",
+                "hbase.master.info.port": "60010"
+            }
+            }
+        }
+    }
 
 
-The above config overwrites few parameters in hbase-site and hbase-log4j files. Several config properties such as "hbase.zookeeper.quorum" for hbase may not be known to the user at the time of app instantiation. These configurations will be provided by the Slider infrastructure in a well-known form so that the app implementation can read and set them while instantiating component instances..
+The above config overrides a few parameters in the hbase-site and hbase-log4j files. Several config properties, such as `hbase.zookeeper.quorum` for HBase, may not be known to the user at the time of app instantiation. These configurations will be provided by the Slider infrastructure in a well-known form so that the app implementation can read and set them while instantiating component instances.
 
diff --git a/src/site/markdown/app_needs.md b/src/site/markdown/slider_specs/application_needs.md
similarity index 100%
rename from src/site/markdown/app_needs.md
rename to src/site/markdown/slider_specs/application_needs.md
diff --git a/src/site/markdown/slider_specs/application_package.md b/src/site/markdown/slider_specs/application_package.md
index 691c4d1..79620fd 100644
--- a/src/site/markdown/slider_specs/application_package.md
+++ b/src/site/markdown/slider_specs/application_package.md
@@ -42,7 +42,7 @@
 other scripts, txt files, tarballs, etc.
 
 
-![Image](../images/app_package_sample_04.png?raw=true)
+![Image](../images/app_package_sample_04.png)
 
 The example above shows a semi-expanded view of an application "HBASE-YARN-APP" and the package structure for OOZIE command scripts.
 
@@ -76,37 +76,37 @@
 
 The script specified in the metainfo is expected to understand the command. It can choose to call other scripts based on how the application author organizes the code base. For example:
 
-```
-class OozieServer(Script):
-  def install(self, env):
-    self.install_packages(env)
-    
-  def configure(self, env):
-    import params
-    env.set_params(params)
-    oozie(is_server=True)
-    
-  def start(self, env):
-    import params
-    env.set_params(params)
-    self.configure(env)
-    oozie_service(action='start')
-    
-  def stop(self, env):
-    import params
-    env.set_params(params)
-    oozie_service(action='stop')
 
-  def status(self, env):
-    import status_params
-    env.set_params(status_params)
-    check_process_status(status_params.pid_file)
-```
+    class OozieServer(Script):
+      def install(self, env):
+        self.install_packages(env)
+        
+      def configure(self, env):
+        import params
+        env.set_params(params)
+        oozie(is_server=True)
+        
+      def start(self, env):
+        import params
+        env.set_params(params)
+        self.configure(env)
+        oozie_service(action='start')
+        
+      def stop(self, env):
+        import params
+        env.set_params(params)
+        oozie_service(action='stop')
+    
+      def status(self, env):
+        import status_params
+        env.set_params(status_params)
+        check_process_status(status_params.pid_file)
+
 
 
 The scripts are invoked in the following manner:
 
-`python SCRIPT COMMAND JSON_FILE PACKAGE_ROOT STRUCTURED_OUT_FILE`
+    python SCRIPT COMMAND JSON_FILE PACKAGE_ROOT STRUCTURED_OUT_FILE
 
 * SCRIPT is the top level script that implements the commands for the component. 
 
@@ -134,13 +134,11 @@
 
 Sample template file for dfs.exclude file to list excluded/decommissioned hosts. hdfs_exclude_files in the property defined in params.py which is populated from config parameters defined in JSON_FILE.
 
-```
-{% if hdfs_exclude_file %} 
-{% for host in hdfs_exclude_file %}
-{{host}}
-{% endfor %}
-{% endif %}
-```
+    {% if hdfs_exclude_file %} 
+    {% for host in hdfs_exclude_file %}
+    {{host}}
+    {% endfor %}
+    {% endif %}
 
 
 ### files folder
diff --git a/src/site/markdown/slider_specs/apps_on_yarn_cli.md b/src/site/markdown/slider_specs/apps_on_yarn_cli.md
deleted file mode 100644
index d834115..0000000
--- a/src/site/markdown/slider_specs/apps_on_yarn_cli.md
+++ /dev/null
@@ -1,92 +0,0 @@
-<!---
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-
-#Slider CLI
-
-This document describes the CLI to deploy and manage YARN applications using Slider.
-
-## Operations
-
-### `create <application> <--app.package packagelocation> <--resource resourcespec> <--app.instance.configuration appconfiguration> <--options sliderconfiguration> [--provider providername]`
-
-Build an application specification by laying out the application artifacts in HDFS and prepares it for *start*. This involves specifying the application name, application package, YARN resource requirements, application specific configuration overrides, options for Slider, and optionally declaring the provider, etc. The provider is invoked during the build process, and can set default values for components.
-
-The default application configuration and components would be built from the application metadata contained in the package. *create* performs a structural validation of the application package and validates the supplied resources specification and instance configuration against the application package.
-
-**parameters**
-
-* application: name of the application instance, must be unique within the Hadoop cluster
-* packagelocation: the application package on local disk or HDFS
-* resourcespec: YARN resource requirements for application components
-* appconfiguration: configuration override for the application
-* sliderconfiguration: configuration for Slider itself such as location of YARN resource manager, HDFS file system, ZooKeeper quorom nodes, etc.
-* providername: name of the any non-default provider. Agent provider is default.
-
-### `destroy <application>` 
-
-destroy a (stopped) application. The stop check is there to prevent accidentally destroying an application in use
-
-### `start <application>` 
-
-Start an application instance that is already created through *create*
-
-### `stop <application>  [--force]`
-
-Stop the application instance. 
-
-The --force operation tells YARN to kill the application instance without involving the AppMaster.
-
-### `flex <application> [--component componentname count]* <--resource resourcespec>`
-
-Update component instance count
- 
-### `configure <application> <--app.instance.configuration appconfiguration>`
- 
-Modify application instance configuration. Updated configuration is only applied after the application is restarted.
-
-### `status <application>`
-
-Report the status of an application instance. If there is a record of an application instance in a failed/finished state AND there is no live application instance, the finished application is reported. Otherwise, the running application's status is reported.
-
-If there a no instances of an application in the YARN history, the application is looked up in the applications directory, and the status is listed if present.
-
-
-### `listapplications [--accepted] [--started] [--live] [--finished] [--failed] [--stopped]` 
-
-List all applications, optionally the ones in the named specific states. 
-
-
-### `getconfig <application> [--config filename  [--dir destdir|--outfile destfile]]`
-
-list/retrieve any configs published by the application.
-
-if no --config option is provided all available configs are listed
-
-If a --file is specified, it is downloaded to the current directory with the specified filename, unless a destination directory/filename is provided
-
-
-### `history <application>`
-
-Lists all life-cycle events of the application instance since the last *start*
-
-### `kill --containers [containers] --components [components] --nodes [nodes]`
-
-Kill listed containers, everything in specific components, or on specific nodes. This can be used to trigger restart of services and decommission of nodes
-
-### `wait <application> [started|live|stopped] --timeout <time>`
-
-Block waiting for a application to enter the specififed state. Can fail if the application stops while waiting for it to be started/live
diff --git a/src/site/markdown/slider_specs/canonical_scenarios.md b/src/site/markdown/slider_specs/canonical_scenarios.md
index 3576b91..3f204d6 100644
--- a/src/site/markdown/slider_specs/canonical_scenarios.md
+++ b/src/site/markdown/slider_specs/canonical_scenarios.md
@@ -95,12 +95,12 @@
 
 * **Managed** - the application client is deployed via Slider mechanisms.  Clients, in this context, differ from the other application components in that they are not running daemon processes.  However, in a managed environment there is the expectation that the appropriate binaries and application elements will be distributed to the designated client hosts, and the configuration on those hosts will be updated to allow for execution of requests to the application’s master/server components.  Therefore, client components should be defined in the application specification as elements that the management infrastructure supports (Figure 1).
 
-![Image](../images/managed_client.png?raw=true)
+![Image](../images/managed_client.png)
 Figure 1 - Managed Application Client and associated Slider Application
 
 * **Unmanaged** - the application client is run as a process outside of Slider/YARN, although it may leverage Slider-provided libraries that allow for server component discovery, etc. (Figure 2).  These libraries would primarily be client bindings providing access to the registry leveraged by Slider (e.g. Java and Python bindings to ZooKeeper).
 
-![Image](../images/unmanaged_client.png?raw=true)
+![Image](../images/unmanaged_client.png)
 Figure 2 - Unmanaged Application Client and associated Slider Application
 
 ### Managed Application Client
diff --git a/src/site/markdown/slider_specs/creating_app_definitions.md b/src/site/markdown/slider_specs/creating_app_definitions.md
index 40ae707..61dcb0b 100644
--- a/src/site/markdown/slider_specs/creating_app_definitions.md
+++ b/src/site/markdown/slider_specs/creating_app_definitions.md
@@ -27,24 +27,24 @@
 
 For example:
 	
-* yarn      8849  -- python ./infra/agent/slider-agent/agent/main.py --label container_1397675825552_0011_01_000003___HBASE_REGIONSERVER --host AM_HOST --port 47830
-* yarn      9085  -- bash /hadoop/yarn/local/usercache/yarn/appcache/application_1397675825552_0011/ ... internal_start regionserver
-* yarn      9114 -- /usr/jdk64/jdk1.7.0_45/bin/java -Dproc_regionserver -XX:OnOutOfMemoryError=...
+    yarn      8849  -- python ./infra/agent/slider-agent/agent/main.py --label container_1397675825552_0011_01_000003___HBASE_REGIONSERVER --host AM_HOST --port 47830
+    yarn      9085  -- bash /hadoop/yarn/local/usercache/yarn/appcache/application_1397675825552_0011/ ... internal_start regionserver
+    yarn      9114 -- /usr/jdk64/jdk1.7.0_45/bin/java -Dproc_regionserver -XX:OnOutOfMemoryError=...
 
 This shows three processes: the Slider-Agent process, the bash script that starts the HBase RegionServer, and the HBase RegionServer itself. Together, these three processes constitute the container.
 
 ## Using an AppPackage
 The following command creates an HBase application using the AppPackage for HBase.
 
-	"./slider create cl1 --zkhosts zk1,zk2 --image hdfs://NN:8020/slider/agent/slider-agent-0.21.tar --option agent.conf hdfs://NN:8020/slider/agent/conf/agent.ini  --template /work/appConf.json --resources /work/resources.json  --option application.def hdfs://NN:8020/slider/hbase_v096.tar"
+	  ./slider create cl1 --zkhosts zk1,zk2 --image hdfs://NN:8020/slider/agent/slider-agent-0.21.tar --option agent.conf hdfs://NN:8020/slider/agent/conf/agent.ini  --template /work/appConf.json --resources /work/resources.json  --option application.def hdfs://NN:8020/slider/hbase_v096.tar
 	
 Let's analyze the various parameters from the perspective of app creation:
   
-* **--image**: its the slider agent tarball
-* **--option agent.conf**: the configuration file for the agent instance
-* **--option app.def**: app def (AppPackage)
-* **--template**: app configuration
-* **--resources**: yarn resource requests
+* `--image`: the Slider agent tarball
+* `--option agent.conf`: the configuration file for the agent instance
+* `--option application.def`: the application definition (AppPackage)
+* `--template`: the application configuration
+* `--resources`: YARN resource requests
 * … other parameters are described in accompanying docs. 
 
 ### AppPackage
@@ -52,94 +52,92 @@
 
 In the enlistment there are three example AppPackages
 
-* app-packages/hbase-v0_96
-* app-packages/accumulo-v1_5
-* app-packages/storm-v0_91
+* `app-packages/hbase-v0_96`
+* `app-packages/accumulo-v1_5`
+* `app-packages/storm-v0_91`
 
 The application tarball, containing the binaries/artifacts of the application itself, is a component within the AppPackage. In the sample packages these are:
 
-* For hbase-v0_96 - app-packages/hbase-v0_96/package/files/hbase-0.96.1-hadoop2-bin.tar.gz.REPLACE
-* For accumulo-v1_5 - app-packages/accumulo-v1_5/package/files/accumulo-1.5.1-bin.tar.gz.REPLACE
-* For storm-v0_91 - app-packages/storm-v0_91/package/files/apache-storm-0.9.1.2.1.1.0-237.tar.gz.placeholder
+* For hbase - `app-packages/hbase-v0_96/package/files/hbase-0.96.1-hadoop2-bin.tar.gz.REPLACE`
+* For accumulo - `app-packages/accumulo-v1_5/package/files/accumulo-1.5.1-bin.tar.gz.REPLACE`
+* For storm - `app-packages/storm-v0_91/package/files/apache-storm-0.9.1.2.1.1.0-237.tar.gz.placeholder`
 
 These are placeholder files, partly because the actual files are too large and partly because users are free to use their own version of the package. To create a Slider AppPackage, replace the placeholder with an actual application tarball and ensure that metainfo.xml refers to the correct file name. Then create a tarball using standard tar commands, ensuring that the package has the metainfo.xml file at its root folder.
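+
+A minimal sketch of that packaging step, assuming the standard enlistment layout (the directory name and output file name are illustrative; any tar tool works equally well):
+
+    # Illustrative only: bundle an AppPackage directory into a tarball with
+    # metainfo.xml at the root of the archive.
+    import os
+    import tarfile
+
+    app_pkg_dir = "app-packages/hbase-v0_96"   # contains metainfo.xml, package/, etc.
+    with tarfile.open("hbase_v096.tar", "w") as tar:
+        for entry in os.listdir(app_pkg_dir):
+            # arcname=entry keeps metainfo.xml at the top level of the tarball
+            tar.add(os.path.join(app_pkg_dir, entry), arcname=entry)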
 
 ### appConf.json
 An appConf.json contains the application configuration. The sample below shows configuration for HBase.
 
-```
-{
-    "schema" : "http://example.org/specification/v2.0.0",
-    "metadata" : {
-    },
-    "global" : {
-        "config_types": "core-site,hdfs-site,hbase-site",
-        
-        "java_home": "/usr/jdk64/jdk1.7.0_45",
-        "package_list": "files/hbase-0.96.1-hadoop2-bin.tar",
-        
-        "site.global.app_user": "yarn",
-        "site.global.app_log_dir": "${AGENT_LOG_ROOT}/app/log",
-        "site.global.app_pid_dir": "${AGENT_WORK_ROOT}/app/run",
-        "site.global.security_enabled": "false",
 
-        "site.hbase-site.hbase.hstore.flush.retries.number": "120",
-        "site.hbase-site.hbase.client.keyvalue.maxsize": "10485760",
-        "site.hbase-site.hbase.hstore.compactionThreshold": "3",
-        "site.hbase-site.hbase.rootdir": "${NN_URI}/apps/hbase/data",
-        "site.hbase-site.hbase.tmp.dir": "${AGENT_WORK_ROOT}/work/app/tmp",
-        "site.hbase-site.hbase.regionserver.port": "0",
-
-        "site.core-site.fs.defaultFS": "${NN_URI}",
-        "site.hdfs-site.dfs.namenode.https-address": "${NN_HOST}:50470",
-        "site.hdfs-site.dfs.namenode.http-address": "${NN_HOST}:50070"
+    {
+      "schema" : "http://example.org/specification/v2.0.0",
+      "metadata" : {
+      },
+      "global" : {
+          "config_types": "core-site,hdfs-site,hbase-site",
+          
+          "java_home": "/usr/jdk64/jdk1.7.0_45",
+          "package_list": "files/hbase-0.96.1-hadoop2-bin.tar",
+          
+          "site.global.app_user": "yarn",
+          "site.global.app_log_dir": "${AGENT_LOG_ROOT}/app/log",
+          "site.global.app_pid_dir": "${AGENT_WORK_ROOT}/app/run",
+          "site.global.security_enabled": "false",
+  
+          "site.hbase-site.hbase.hstore.flush.retries.number": "120",
+          "site.hbase-site.hbase.client.keyvalue.maxsize": "10485760",
+          "site.hbase-site.hbase.hstore.compactionThreshold": "3",
+          "site.hbase-site.hbase.rootdir": "${NN_URI}/apps/hbase/data",
+          "site.hbase-site.hbase.tmp.dir": "${AGENT_WORK_ROOT}/work/app/tmp",
+          "site.hbase-site.hbase.regionserver.port": "0",
+  
+          "site.core-site.fs.defaultFS": "${NN_URI}",
+          "site.hdfs-site.dfs.namenode.https-address": "${NN_HOST}:50470",
+          "site.hdfs-site.dfs.namenode.http-address": "${NN_HOST}:50070"
+      }
     }
-}
-```
+
+appConf.json allows you to pass in an arbitrary set of configuration properties that Slider will forward to the application component instances.
 
-* Variables of the form "site.xx.yy" translates to variables by the name "yy" within the group "xx" and are typically converted to site config files by the name "xx" containing variable "yy". For example, "site.hbase-site.hbase.regionserver.port":"" will be sent to the Slider-Agent as "hbase-site" : { "hbase.regionserver.port": ""} and app def scripts can access all variables under "hbase-site" as a single property bag.
-* Similarly, "site.core-site.fs.defaultFS" allows you to pass in the default fs. *This specific variable is automatically made available by Slider but its shown here as an example.*
-* Variables of the form "site.global.zz" are sent in the same manner as other site variables except these variables are not expected to get translated to a site xml file. Usually, variables needed for template or other filter conditions (such as security_enabled = true/false) can be sent in as "global variable". 
+* Variables of the form `site.xx.yy` translate to variables named `yy` within the group `xx` and are typically converted to a site config file named `xx` containing the variable `yy`. For example, `"site.hbase-site.hbase.regionserver.port":""` will be sent to the Slider-Agent as `"hbase-site" : { "hbase.regionserver.port": ""}`, and app def scripts can access all variables under `hbase-site` as a single property bag (see the sketch after this list).
+* Similarly, `site.core-site.fs.defaultFS` allows you to pass in the default filesystem. *This specific variable is automatically made available by Slider, but it is shown here as an example.*
+* Variables of the form `site.global.zz` are sent in the same manner as other site variables, except that they are not expected to be translated to a site XML file. Usually, variables needed for templates or other filter conditions (such as `security_enabled` = true/false) are sent in as global variables.
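+
+The sketch below (illustrative only; the file path is one of the sample paths used later in this document) shows how an app def script could read one of these property bags from the command JSON that the Slider-Agent hands to it:
+
+    # Illustrative only: read the "hbase-site" property bag out of the command JSON.
+    import json
+
+    with open("/apps/commands/cmd_332/command.json") as f:   # the JSON_FILE argument
+        command = json.load(f)
+
+    hbase_site = command["configurations"]["hbase-site"]     # one property bag
+    region_server_port = hbase_site["hbase.regionserver.port"]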
 
 ### --resources resources.json
 The resources.json file encodes the YARN resource count requirements for the application instance.
 
 The components section lists the two application components for an HBase application.
 
-* wait.heartbeat: a crude mechanism to control the order of component activation. A heartbeat is ~10 seconds.
-* role.priority: each component must be assigned unique priority
-* component.instances: number of instances for this component type
-* role.script: the script path for the role *a temporary work-around as this will eventually be gleaned from metadata.xml*
+* `wait.heartbeat`: a crude mechanism to control the order of component activation. A heartbeat is ~10 seconds.
+* `role.priority`: each component must be assigned a unique priority
+* `component.instances`: number of instances for this component type
+* `role.script`: the script path for the role *(a temporary work-around, as this will eventually be gleaned from metadata.xml)*
             
 Sample:
 
-```
-{
-    "schema" : "http://example.org/specification/v2.0.0",
-    "metadata" : {
-    },
-    "global" : {
-    },
-    "components" : {
-        "HBASE_MASTER" : {
-            "wait.heartbeat" : "5",
-            "role.priority" : "1",
-            "component.instances" : "1",
-            "role.script" : "scripts/hbase_master.py"
-        },
-        "slider-appmaster" : {
-            "jvm.heapsize" : "256M"
-        },
-        "HBASE_REGIONSERVER" : {
-            "wait.heartbeat" : "3",
-            "role.priority" : "2",
-            "component.instances" : "1",
-            "role.script" : "scripts/hbase_regionserver.py"
-        }
+    {
+      "schema" : "http://example.org/specification/v2.0.0",
+      "metadata" : {
+      },
+      "global" : {
+      },
+      "components" : {
+          "HBASE_MASTER" : {
+              "wait.heartbeat" : "5",
+              "role.priority" : "1",
+              "component.instances" : "1",
+              "role.script" : "scripts/hbase_master.py"
+          },
+          "slider-appmaster" : {
+              "jvm.heapsize" : "256M"
+          },
+          "HBASE_REGIONSERVER" : {
+              "wait.heartbeat" : "3",
+              "role.priority" : "2",
+              "component.instances" : "1",
+              "role.script" : "scripts/hbase_regionserver.py"
+          }
+      }
     }
-}
-```
 
 ## Creating AppPackage
 Refer to [App Command Scripts](writing_app_command_scripts.html) for details on how to write scripts for an AppPackage. These scripts are in the package/scripts folder within the AppPackage. *Use the checked-in samples for HBase/Storm/Accumulo as a reference for script development.*
diff --git a/src/site/markdown/slider_specs/index.md b/src/site/markdown/slider_specs/index.md
index eb63ec3..d960588 100644
--- a/src/site/markdown/slider_specs/index.md
+++ b/src/site/markdown/slider_specs/index.md
@@ -15,11 +15,9 @@
    limitations under the License.
 -->
 
-PROJECT SLIDER
-===
+# PROJECT SLIDER
 
-Introduction
----
+## Introduction
 
 **SLIDER: A collection of tools and technologies to simplify the packaging, deployment and management of long-running applications on YARN.**
 
@@ -27,14 +25,13 @@
 - Flexibility (dynamic scaling) - YARN provides the application with the facilities to allow for scale-up or scale-down
 - Resource Mgmt (optimization) - YARN handles allocation of cluster resources.
 
-Terminology
----
+## Terminology
 
-- **Apps on YARN**
+### Apps on YARN
  - Application written to run directly on YARN
  - Packaging, deployment and lifecycle management are custom built for each application
 
-- **Slider Apps**
+### Slider Apps
  - Applications deployed and managed on YARN using Slider
 - Use of Slider minimizes custom code for deployment and lifecycle management
  - Requires apps to follow Slider guidelines and packaging ("Sliderize")
@@ -44,11 +41,11 @@
 
 The entry points to leverage Slider are:
 
-- [Specifications for AppPackage](application_package.md)
-- [Documentation for the SliderCLI](apps_on_yarn_cli.md)
-- [Specifications for Application Definition](application_definition.md)
-- [Specifications for Configuration](application_configuration.md)
-- [Specification of Resources](resource_specification.md)
-- [Specifications InstanceConfiguration](application_instance_configuration.md)
-- [Guidelines for Clients and Client Applications](canonical_scenarios.md)
-- [Documentation for "General Developer Guidelines"](app_developer_guideline.md)
+- [Specifications for AppPackage](application_package.html)
+- [Application Needs](application_needs.html)
+- [Specifications for Application Definition](application_definition.html)
+- [Specifications for Configuration](application_configuration.html)
+- [Specification of Resources](resource_specification.html)
+- [Specifications for InstanceConfiguration](application_instance_configuration.html)
+- [Guidelines for Clients and Client Applications](canonical_scenarios.html)
+- [Documentation for "General Developer Guidelines"](app_developer_guideline.html)
diff --git a/src/site/markdown/slider_specs/resource_specification.md b/src/site/markdown/slider_specs/resource_specification.md
index a3f62ec..5099705 100644
--- a/src/site/markdown/slider_specs/resource_specification.md
+++ b/src/site/markdown/slider_specs/resource_specification.md
@@ -28,28 +28,27 @@
 
 An example resource requirement for an application that has two components, "master" and "worker", is as follows. Slider will automatically add the requirements for the application's AppMaster. This component is named "slider".
 
-```
-"components" : {
-    "worker" : {
-      "yarn.memory" : "768",
-      "env.MALLOC_ARENA_MAX" : "4",
-      "component.instances" : "1",
-      "component.name" : "worker",
-      "yarn.vcores" : "1"
-    },
-    "slider" : {
-      "yarn.memory" : "256",
-      "env.MALLOC_ARENA_MAX" : "4",
-      "component.instances" : "1",
-      "component.name" : "slider",
-      "yarn.vcores" : "1"
-    },
-    "master" : {
-      "yarn.memory" : "1024",
-      "env.MALLOC_ARENA_MAX" : "4",
-      "component.instances" : "1",
-      "component.name" : "master",
-      "yarn.vcores" : "1"
+    "components" : {
+      "worker" : {
+        "yarn.memory" : "768",
+        "env.MALLOC_ARENA_MAX" : "4",
+        "component.instances" : "1",
+        "component.name" : "worker",
+        "yarn.vcores" : "1"
+      },
+      "slider" : {
+        "yarn.memory" : "256",
+        "env.MALLOC_ARENA_MAX" : "4",
+        "component.instances" : "1",
+        "component.name" : "slider",
+        "yarn.vcores" : "1"
+      },
+      "master" : {
+        "yarn.memory" : "1024",
+        "env.MALLOC_ARENA_MAX" : "4",
+        "component.instances" : "1",
+        "component.name" : "master",
+        "yarn.vcores" : "1"
+      }
     }
-  }
-```
+
diff --git a/src/site/markdown/slider_specs/slider-specifications-draft-20140310.zip b/src/site/markdown/slider_specs/slider-specifications-draft-20140310.zip
deleted file mode 100644
index 3dd4feb..0000000
--- a/src/site/markdown/slider_specs/slider-specifications-draft-20140310.zip
+++ /dev/null
Binary files differ
diff --git a/src/site/markdown/slider_specs/writing_app_command_scripts.md b/src/site/markdown/slider_specs/writing_app_command_scripts.md
index 4b97c0f..fd025d4 100644
--- a/src/site/markdown/slider_specs/writing_app_command_scripts.md
+++ b/src/site/markdown/slider_specs/writing_app_command_scripts.md
@@ -15,11 +15,11 @@
    limitations under the License.
 -->
 
-#Developing App Command Scripts
+# Developing App Command Scripts
 
 App command implementations follow a standard structure so that they can be invoked in a uniform manner. For any command, the Python scripts are invoked as:
 
-`python SCRIPT COMMAND JSON_FILE PACKAGE_ROOT STRUCTURED_OUT_FILE`
+    python SCRIPT COMMAND JSON_FILE PACKAGE_ROOT STRUCTURED_OUT_FILE
 
 * SCRIPT is the top level script that implements the commands for the component. 
 
@@ -35,9 +35,7 @@
 
 Sample:
 
-```
-python /apps/HBASE_ON_YARN/package/scripts/hbase_regionserver.py START /apps/commands/cmd_332/command.json /apps/HBASE_ON_YARN/package /apps/commands/cmd_332/strout.txt
-```
+    python /apps/HBASE_ON_YARN/package/scripts/hbase_regionserver.py START /apps/commands/cmd_332/command.json /apps/HBASE_ON_YARN/package /apps/commands/cmd_332/strout.txt
 
 **Note**: The above is how the Slider-Agent invokes the scripts. It's provided as a reference for developing the scripts themselves, as well as a way to test/debug them.
 
@@ -45,174 +43,169 @@
 
 The parameters are organized as multi-layer name-value pairs.
 
-```
-{
-    "commandId": "Command Id as assigned by Slider",
-    "command": "Command being executed",
-    "commandType": "Type of command",
-    "clusterName": "Name of the cluster",
-    "appName": "Name of the app",
-    "component": "Name of the component",
-    "hostname": "Name of the host",
-    "public_hostname": "FQDN of the host",
-    "hostParams": {
-        "host specific parameters common to all commands"
-    },
-    "componentParams": {
-        "component specific parameters, if any"
-    },
-    "commandParams": {
-        "command specific parameters, usually used in case of custom commands"
-    },
-    "configurations": {
-        "app-global-config": {
+    {
+        "commandId": "Command Id as assigned by Slider",
+        "command": "Command being executed",
+        "commandType": "Type of command",
+        "clusterName": "Name of the cluster",
+        "appName": "Name of the app",
+        "component": "Name of the component",
+        "hostname": "Name of the host",
+        "public_hostname": "FQDN of the host",
+        "hostParams": {
+            "host specific parameters common to all commands"
         },
-        "config-type-2": {
+        "componentParams": {
+            "component specific parameters, if any"
         },
-        "config-type-2": {
+        "commandParams": {
+            "command specific parameters, usually used in case of custom commands"
+        },
+        "configurations": {
+            "app-global-config": {
+            },
+            "config-type-2": {
+            },
+            "config-type-2": {
+            }
         }
     }
-}
-```
 
 
 ## Sample configuration parameters
 
-```
-{
-    "commandId": "2-2",
-    "command": "START",
-    "commandType": "EXECUTION_COMMAND",
-    "clusterName": "c1",
-    "appName": "HBASE",
-    "componentName": "HBASE_MASTER",
-    "hostParams": {
-        "java_home": "/usr/jdk64/jdk1.7.0_45"
-    },
-    "componentParams": {},
-    "commandParams": {},
-    "hostname": "c6403.ambari.apache.org",
-    "public_hostname": "c6403.ambari.apache.org",
-    "configurations": {
-        "hbase-log4j": {
-         "log4j.threshold": "ALL",
-         "log4j.rootLogger": "${hbase.root.logger}",
-         "log4j.logger.org.apache.zookeeper": "INFO",
-         "log4j.logger.org.apache.hadoop.hbase": "DEBUG",
-         "log4j.logger.org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher": "INFO",
-         "log4j.logger.org.apache.hadoop.hbase.zookeeper.ZKUtil": "INFO",
-         "log4j.category.SecurityLogger": "${hbase.security.logger}",
-         "log4j.appender.console": "org.apache.log4j.ConsoleAppender",
-         "log4j.appender.console.target": "System.err",
-         "log4j.appender.console.layout": "org.apache.log4j.PatternLayout",
-         "log4j.appender.console.layout.ConversionPattern": "%d{ISO8601} %-5p [%t] %c{2}: %m%n",
-         "log4j.appender.RFAS": "org.apache.log4j.RollingFileAppender",
-         "log4j.appender.RFAS.layout": "org.apache.log4j.PatternLayout",
-         "log4j.appender.RFAS.layout.ConversionPattern": "%d{ISO8601} %p %c: %m%n",
-         "log4j.appender.RFAS.MaxFileSize": "${hbase.security.log.maxfilesize}",
-         "log4j.appender.RFAS.MaxBackupIndex": "${hbase.security.log.maxbackupindex}",
-         "log4j.appender.RFAS.File": "${hbase.log.dir}/${hbase.security.log.file}",
-         "log4j.appender.RFA": "org.apache.log4j.RollingFileAppender",
-         "log4j.appender.RFA.layout": "org.apache.log4j.PatternLayout",
-         "log4j.appender.RFA.layout.ConversionPattern": "%d{ISO8601} %-5p [%t] %c{2}: %m%n",
-         "log4j.appender.RFA.MaxFileSize": "${hbase.log.maxfilesize}",
-         "log4j.appender.RFA.MaxBackupIndex": "${hbase.log.maxbackupindex}",
-         "log4j.appender.RFA.File": "${hbase.log.dir}/${hbase.log.file}",
-         "log4j.appender.NullAppender": "org.apache.log4j.varia.NullAppender",
-         "log4j.appender.DRFA": "org.apache.log4j.DailyRollingFileAppender",
-         "log4j.appender.DRFA.layout": "org.apache.log4j.PatternLayout",
-         "log4j.appender.DRFA.layout.ConversionPattern": "%d{ISO8601} %-5p [%t] %c{2}: %m%n",
-         "log4j.appender.DRFA.File": "${hbase.log.dir}/${hbase.log.file}",
-         "log4j.appender.DRFA.DatePattern": ".yyyy-MM-dd",
-         "log4j.additivity.SecurityLogger": "false",
-         "hbase.security.logger": "INFO,console",
-         "hbase.security.log.maxfilesize": "256MB",
-         "hbase.security.log.maxbackupindex": "20",
-         "hbase.security.log.file": "SecurityAuth.audit",
-         "hbase.root.logger": "INFO,console",
-         "hbase.log.maxfilesize": "256MB",
-         "hbase.log.maxbackupindex": "20",
-         "hbase.log.file": "hbase.log",
-         "hbase.log.dir": "."
-        },
-        "app-global-config": {
-         "security_enabled": "false",
-         "pid_dir": "/hadoop/yarn/log/application_1394053491953_0003/run",
-         "log_dir": "/hadoop/yarn/log/application_1394053491953_0003/log",
-         "tmp_dir": "/hadoop/yarn/log/application_1394053491953_0003/tmp",
-         "user_group": "hadoop",
-         "user": "hbase",
-         "hbase_regionserver_heapsize": "1024m",
-         "hbase_master_heapsize": "1024m",
-         "fs_default_name": "hdfs://c6403.ambari.apache.org:8020",
-         "hdfs_root": "/apps/hbase/instances/01",
-         "zookeeper_node": "/apps/hbase/instances/01",
-         "zookeeper_quorom_hosts": "c6403.ambari.apache.org",
-         "zookeeper_port": "2181",
-        },
-        "hbase-site": {
-         "hbase.hstore.flush.retries.number": "120",
-         "hbase.client.keyvalue.maxsize": "10485760",
-         "hbase.hstore.compactionThreshold": "3",
-         "hbase.rootdir": "hdfs://c6403.ambari.apache.org:8020/apps/hbase/instances/01/data",
-         "hbase.stagingdir": "hdfs://c6403.ambari.apache.org:8020/apps/hbase/instances/01/staging",
-         "hbase.regionserver.handler.count": "60",
-         "hbase.regionserver.global.memstore.lowerLimit": "0.38",
-         "hbase.hregion.memstore.block.multiplier": "2",
-         "hbase.hregion.memstore.flush.size": "134217728",
-         "hbase.superuser": "yarn",
-         "hbase.zookeeper.property.clientPort": "2181",
-         "hbase.regionserver.global.memstore.upperLimit": "0.4",
-         "zookeeper.session.timeout": "30000",
-         "hbase.tmp.dir": "/hadoop/yarn/log/application_1394053491953_0003/tmp",
-         "hbase.hregion.max.filesize": "10737418240",
-         "hfile.block.cache.size": "0.40",
-         "hbase.security.authentication": "simple",
-         "hbase.defaults.for.version.skip": "true",
-         "hbase.zookeeper.quorum": "c6403.ambari.apache.org",
-         "zookeeper.znode.parent": "/apps/hbase/instances/01",
-         "hbase.hstore.blockingStoreFiles": "10",
-         "hbase.hregion.majorcompaction": "86400000",
-         "hbase.security.authorization": "false",
-         "hbase.cluster.distributed": "true",
-         "hbase.hregion.memstore.mslab.enabled": "true",
-         "hbase.client.scanner.caching": "100",
-         "hbase.zookeeper.useMulti": "true",
-         "hbase.regionserver.info.port": "",
-         "hbase.master.info.port": "60010"
-        }
+    {
+      "commandId": "2-2",
+      "command": "START",
+      "commandType": "EXECUTION_COMMAND",
+      "clusterName": "c1",
+      "appName": "HBASE",
+      "componentName": "HBASE_MASTER",
+      "hostParams": {
+          "java_home": "/usr/jdk64/jdk1.7.0_45"
+      },
+      "componentParams": {},
+      "commandParams": {},
+      "hostname": "c6403.ambari.apache.org",
+      "public_hostname": "c6403.ambari.apache.org",
+      "configurations": {
+          "hbase-log4j": {
+           "log4j.threshold": "ALL",
+           "log4j.rootLogger": "${hbase.root.logger}",
+           "log4j.logger.org.apache.zookeeper": "INFO",
+           "log4j.logger.org.apache.hadoop.hbase": "DEBUG",
+           "log4j.logger.org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher": "INFO",
+           "log4j.logger.org.apache.hadoop.hbase.zookeeper.ZKUtil": "INFO",
+           "log4j.category.SecurityLogger": "${hbase.security.logger}",
+           "log4j.appender.console": "org.apache.log4j.ConsoleAppender",
+           "log4j.appender.console.target": "System.err",
+           "log4j.appender.console.layout": "org.apache.log4j.PatternLayout",
+           "log4j.appender.console.layout.ConversionPattern": "%d{ISO8601} %-5p [%t] %c{2}: %m%n",
+           "log4j.appender.RFAS": "org.apache.log4j.RollingFileAppender",
+           "log4j.appender.RFAS.layout": "org.apache.log4j.PatternLayout",
+           "log4j.appender.RFAS.layout.ConversionPattern": "%d{ISO8601} %p %c: %m%n",
+           "log4j.appender.RFAS.MaxFileSize": "${hbase.security.log.maxfilesize}",
+           "log4j.appender.RFAS.MaxBackupIndex": "${hbase.security.log.maxbackupindex}",
+           "log4j.appender.RFAS.File": "${hbase.log.dir}/${hbase.security.log.file}",
+           "log4j.appender.RFA": "org.apache.log4j.RollingFileAppender",
+           "log4j.appender.RFA.layout": "org.apache.log4j.PatternLayout",
+           "log4j.appender.RFA.layout.ConversionPattern": "%d{ISO8601} %-5p [%t] %c{2}: %m%n",
+           "log4j.appender.RFA.MaxFileSize": "${hbase.log.maxfilesize}",
+           "log4j.appender.RFA.MaxBackupIndex": "${hbase.log.maxbackupindex}",
+           "log4j.appender.RFA.File": "${hbase.log.dir}/${hbase.log.file}",
+           "log4j.appender.NullAppender": "org.apache.log4j.varia.NullAppender",
+           "log4j.appender.DRFA": "org.apache.log4j.DailyRollingFileAppender",
+           "log4j.appender.DRFA.layout": "org.apache.log4j.PatternLayout",
+           "log4j.appender.DRFA.layout.ConversionPattern": "%d{ISO8601} %-5p [%t] %c{2}: %m%n",
+           "log4j.appender.DRFA.File": "${hbase.log.dir}/${hbase.log.file}",
+           "log4j.appender.DRFA.DatePattern": ".yyyy-MM-dd",
+           "log4j.additivity.SecurityLogger": "false",
+           "hbase.security.logger": "INFO,console",
+           "hbase.security.log.maxfilesize": "256MB",
+           "hbase.security.log.maxbackupindex": "20",
+           "hbase.security.log.file": "SecurityAuth.audit",
+           "hbase.root.logger": "INFO,console",
+           "hbase.log.maxfilesize": "256MB",
+           "hbase.log.maxbackupindex": "20",
+           "hbase.log.file": "hbase.log",
+           "hbase.log.dir": "."
+          },
+          "app-global-config": {
+           "security_enabled": "false",
+           "pid_dir": "/hadoop/yarn/log/application_1394053491953_0003/run",
+           "log_dir": "/hadoop/yarn/log/application_1394053491953_0003/log",
+           "tmp_dir": "/hadoop/yarn/log/application_1394053491953_0003/tmp",
+           "user_group": "hadoop",
+           "user": "hbase",
+           "hbase_regionserver_heapsize": "1024m",
+           "hbase_master_heapsize": "1024m",
+           "fs_default_name": "hdfs://c6403.ambari.apache.org:8020",
+           "hdfs_root": "/apps/hbase/instances/01",
+           "zookeeper_node": "/apps/hbase/instances/01",
+           "zookeeper_quorom_hosts": "c6403.ambari.apache.org",
+           "zookeeper_port": "2181",
+          },
+          "hbase-site": {
+           "hbase.hstore.flush.retries.number": "120",
+           "hbase.client.keyvalue.maxsize": "10485760",
+           "hbase.hstore.compactionThreshold": "3",
+           "hbase.rootdir": "hdfs://c6403.ambari.apache.org:8020/apps/hbase/instances/01/data",
+           "hbase.stagingdir": "hdfs://c6403.ambari.apache.org:8020/apps/hbase/instances/01/staging",
+           "hbase.regionserver.handler.count": "60",
+           "hbase.regionserver.global.memstore.lowerLimit": "0.38",
+           "hbase.hregion.memstore.block.multiplier": "2",
+           "hbase.hregion.memstore.flush.size": "134217728",
+           "hbase.superuser": "yarn",
+           "hbase.zookeeper.property.clientPort": "2181",
+           "hbase.regionserver.global.memstore.upperLimit": "0.4",
+           "zookeeper.session.timeout": "30000",
+           "hbase.tmp.dir": "/hadoop/yarn/log/application_1394053491953_0003/tmp",
+           "hbase.hregion.max.filesize": "10737418240",
+           "hfile.block.cache.size": "0.40",
+           "hbase.security.authentication": "simple",
+           "hbase.defaults.for.version.skip": "true",
+           "hbase.zookeeper.quorum": "c6403.ambari.apache.org",
+           "zookeeper.znode.parent": "/apps/hbase/instances/01",
+           "hbase.hstore.blockingStoreFiles": "10",
+           "hbase.hregion.majorcompaction": "86400000",
+           "hbase.security.authorization": "false",
+           "hbase.cluster.distributed": "true",
+           "hbase.hregion.memstore.mslab.enabled": "true",
+           "hbase.client.scanner.caching": "100",
+           "hbase.zookeeper.useMulti": "true",
+           "hbase.regionserver.info.port": "",
+           "hbase.master.info.port": "60010"
+          }
+      }
     }
-}
-```
 
 
 ## Sample command script
 
-```
-class OozieServer(Script):
-  def install(self, env):
-    self.install_packages(env)
+    class OozieServer(Script):
+      def install(self, env):
+        self.install_packages(env)
+        
+      def configure(self, env):
+        import params
+        env.set_params(params)
+        oozie(is_server=True)
+        
+      def start(self, env):
+        import params
+        env.set_params(params)
+        self.configure(env)
+        oozie_service(action='start')
+        
+      def stop(self, env):
+        import params
+        env.set_params(params)
+        oozie_service(action='stop')
     
-  def configure(self, env):
-    import params
-    env.set_params(params)
-    oozie(is_server=True)
-    
-  def start(self, env):
-    import params
-    env.set_params(params)
-    self.configure(env)
-    oozie_service(action='start')
-    
-  def stop(self, env):
-    import params
-    env.set_params(params)
-    oozie_service(action='stop')
+      def status(self, env):
+        import status_params
+        env.set_params(status_params)
+        check_process_status(status_params.pid_file)
 
-  def status(self, env):
-    import status_params
-    env.set_params(status_params)
-    check_process_status(status_params.pid_file)
-```
 
 
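+For orientation, here is a deliberately simplified sketch (not the actual Slider-Agent `Script` base class) of the dispatch contract described above: the COMMAND argument on the command line selects the method of the same name on the script class.
+
+    # Illustrative only: map the COMMAND argument to a method of the same name.
+    import sys
+
+    class SimpleScript(object):
+        def execute(self):
+            # argv layout: SCRIPT COMMAND JSON_FILE PACKAGE_ROOT STRUCTURED_OUT_FILE
+            if len(sys.argv) < 2:
+                raise SystemExit("usage: SCRIPT COMMAND JSON_FILE PACKAGE_ROOT STRUCTURED_OUT_FILE")
+            command = sys.argv[1].lower()          # e.g. START -> start
+            handler = getattr(self, command, None)
+            if handler is None:
+                raise SystemExit("unsupported command: " + sys.argv[1])
+            handler(env=None)                      # env/params handling omitted in this sketch
+
+        def start(self, env):
+            print("starting the component")
+
+    if __name__ == "__main__":
+        SimpleScript().execute()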
diff --git a/src/site/markdown/images/app_config_folders_01.png b/src/site/resources/images/app_config_folders_01.png
similarity index 100%
rename from src/site/markdown/images/app_config_folders_01.png
rename to src/site/resources/images/app_config_folders_01.png
Binary files differ
diff --git a/src/site/markdown/images/app_package_sample_04.png b/src/site/resources/images/app_package_sample_04.png
similarity index 100%
rename from src/site/markdown/images/app_package_sample_04.png
rename to src/site/resources/images/app_package_sample_04.png
Binary files differ
diff --git a/src/site/markdown/images/managed_client.png b/src/site/resources/images/managed_client.png
similarity index 100%
rename from src/site/markdown/images/managed_client.png
rename to src/site/resources/images/managed_client.png
Binary files differ
diff --git a/src/site/markdown/images/slider-container.png b/src/site/resources/images/slider-container.png
similarity index 100%
rename from src/site/markdown/images/slider-container.png
rename to src/site/resources/images/slider-container.png
Binary files differ
diff --git a/src/site/markdown/images/unmanaged_client.png b/src/site/resources/images/unmanaged_client.png
similarity index 100%
rename from src/site/markdown/images/unmanaged_client.png
rename to src/site/resources/images/unmanaged_client.png
Binary files differ
diff --git a/src/site/site.xml b/src/site/site.xml
index b5d0677..9d5a598 100644
--- a/src/site/site.xml
+++ b/src/site/site.xml
@@ -49,17 +49,12 @@
     <menu ref="reports"/>
 
     <menu name="Documents">
-      <!--item name="announcement" href="/announcement.html"/-->
       <item name="Getting Started" href="/getting_started.html"/>
-      <item name="installing" href="/installing.html"/>
-      <item name="architecture" href="/architecture.html"/>
       <item name="manpage" href="/manpage.html"/>
-      <item name="application Needs" href="/app_needs.html"/>
-      <item name="building" href="/building.html"/>
-      <!--item name="examples" href="/examples.html"/-->
-      <item name="exitcodes" href="/exitcodes.html"/>
-      <!--item name="slider_cluster_descriptions" href="/slider_cluster_descriptions.html"/-->
-      <item name="rolehistory" href="/rolehistory.html"/>
+      <item name="Troubleshooting" href="/troubleshooting.html"/>
+      <item name="Architecture" href="/architecture/index.html"/>
+      <item name="Developing" href="/developing/index.html"/>
+      <item name="Exitcodes" href="/exitcodes.html"/>
     </menu>
   </body>
 </project>