Merge tag 'release-0.60.0'

Slider 0.60.0 incubating
diff --git a/README.md b/README.md
index 480502d..a25b83a 100644
--- a/README.md
+++ b/README.md
@@ -22,9 +22,9 @@
 monitor them and make them larger or smaller as desired, even while 
 the cluster is running.
 
-Clusters can be stopped, "frozen" and restarted, "thawed" later; the distribution
+Clusters can be stopped and restarted later; the distribution
 of the deployed application across the YARN cluster is persisted, enabling
-a best-effort placement close to the previous locations on a cluster thaw.
+a best-effort placement close to the previous locations on a cluster start.
 Applications which remember the previous placement of data (such as HBase)
 can exhibit fast start-up times from this feature.
 
@@ -91,3 +91,31 @@
     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     See the License for the specific language governing permissions and
     limitations under the License. See accompanying LICENSE file.
+
+# Export Control
+
+This distribution includes cryptographic software. The country in which you
+currently reside may have restrictions on the import, possession, use, and/or
+re-export to another country, of encryption software. BEFORE using any
+encryption software, please check your country's laws, regulations and
+policies concerning the import, possession, or use, and re-export of encryption
+software, to see if this is permitted. See <http://www.wassenaar.org/> for more
+information.
+
+The U.S. Government Department of Commerce, Bureau of Industry and Security
+(BIS), has classified this software as Export Commodity Control Number (ECCN)
+5D002.C.1, which includes information security software using or performing
+cryptographic functions with asymmetric algorithms. The form and manner of this
+Apache Software Foundation distribution makes it eligible for export under the
+License Exception ENC Technology Software Unrestricted (TSU) exception (see the
+BIS Export Administration Regulations, Section 740.13) for both object code and
+source code.
+
+The following provides more details on the included cryptographic software:
+
+Apache Slider uses the built-in java cryptography libraries. See Oracle's
+information regarding Java cryptographic export regulations for more details:
+http://www.oracle.com/us/products/export/export-regulations-345813.html
+
+Apache Slider uses the SSL libraries from the Jetty project distributed by the
+Eclipse Foundation (http://eclipse.org/jetty).
diff --git a/app-packages/accumulo/README.md b/app-packages/accumulo/README.md
new file mode 100644
index 0000000..537d769
--- /dev/null
+++ b/app-packages/accumulo/README.md
@@ -0,0 +1,113 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+# How to create a Slider package for Accumulo
+
+    mvn clean package -DskipTests -Paccumulo-app-package-maven
+
+OR
+
+    mvn clean package -DskipTests -Paccumulo-app-package -Dpkg.version=1.6.1 \
+      -Dpkg.name=accumulo-1.6.1-bin.tar.gz -Dpkg.src=/local/path/to/tarball
+
+The app package can be found in
+
+    app-packages/accumulo/target/slider-accumulo-app-package-*.zip
+
+In the first case, the version number of the app package will match the
+Slider version, and in the second case it will match the `pkg.version` value
+(intended to be the Accumulo version).
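+
+For example, with Slider 0.60.0-incubating the first command produces
+`slider-accumulo-app-package-0.60.0-incubating.zip`, while the second command
+shown above produces `slider-accumulo-app-package-1.6.1.zip`.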
+
+Verify the content using
+
+    zip -Tv slider-accumulo-app-package*.zip
+
+`appConfig-default.json` and `resources-default.json` are not required to be packaged.
+These files are included as reference configuration for Slider apps and are suitable
+for a one-node cluster.
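+
+Once the application package has been installed at the location given by its
+`application.def` property, a test cluster can be created from these files
+with something like the following (the cluster name `accumulo1` is
+illustrative):
+
+    slider create accumulo1 --template appConfig-default.json \
+      --resources resources-default.json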
+
+In the maven packaging case, the version of Accumulo used for the app package
+can be adjusted by adding a flag such as
+
+    -Daccumulo.version=1.5.1
+
+**Note:** the LICENSE.txt and NOTICE.txt that are bundled with the app
+package are designed for Accumulo 1.6.0 only and may need to be modified for
+other versions of the app package.
+
+Note also that the sample `appConfig-default.json` provided only works with Accumulo 1.6.
+For Accumulo 1.5 the `instance.volumes` property must be replaced with
+`instance.dfs.dir` (and it cannot use the provided variable `${DEFAULT_DATA_DIR}`,
+which is an HDFS URI).
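+
+For example, the replacement entry might look like this (the path shown is
+illustrative):
+
+    "site.accumulo-site.instance.dfs.dir": "/apps/accumulo/data",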
+
+A shorter, less descriptive file name can be specified with
+`-Dapp.package.name=accumulo_160`, which would create the file `accumulo_160.zip`.
+
+# Building Native Libraries
+
+Accumulo performs better with its native libraries, and these must be built
+manually for Accumulo releases 1.6.0 and greater.  They should be built on
+the machine where Accumulo will be deployed, or on an equivalent one.  The procedure below
+illustrates the steps for extracting and rebuilding the Accumulo app package
+with native libraries, in the case of Accumulo version 1.6.0.  You will need a
+C++ compiler/toolchain installed to build this library, and `JAVA_HOME` must be
+set.
+
+    unzip ${app.package.name}.zip package/files/accumulo*gz
+    cd package/files/
+    gunzip accumulo-1.6.0-bin.tar.gz
+    tar xvf accumulo-1.6.0-bin.tar
+    accumulo-1.6.0/bin/build_native_library.sh
+    tar uvf accumulo-1.6.0-bin.tar accumulo-1.6.0
+    rm -rf accumulo-1.6.0
+    gzip accumulo-1.6.0-bin.tar
+    cd ../../
+    zip ${app.package.name}.zip -r package
+    rm -rf package
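+
+As with the original package, the rebuilt zip can be verified with
+
+    zip -Tv ${app.package.name}.zip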
+
+# Export Control
+
+This distribution includes cryptographic software. The country in which you
+currently reside may have restrictions on the import, possession, use, and/or
+re-export to another country, of encryption software. BEFORE using any
+encryption software, please check your country's laws, regulations and
+policies concerning the import, possession, or use, and re-export of encryption
+software, to see if this is permitted. See [http://www.wassenaar.org/](http://www.wassenaar.org/) for more
+information.
+
+The U.S. Government Department of Commerce, Bureau of Industry and Security
+(BIS), has classified this software as Export Commodity Control Number (ECCN)
+5D002.C.1, which includes information security software using or performing
+cryptographic functions with asymmetric algorithms. The form and manner of this
+Apache Software Foundation distribution makes it eligible for export under the
+License Exception ENC Technology Software Unrestricted (TSU) exception (see the
+BIS Export Administration Regulations, Section 740.13) for both object code and
+source code.
+
+The following provides more details on the included cryptographic software:
+
+Apache Slider uses the built-in java cryptography libraries. See Oracle's
+information regarding Java cryptographic export regulations for more details:
+[http://www.oracle.com/us/products/export/export-regulations-345813.html](http://www.oracle.com/us/products/export/export-regulations-345813.html)
+
+Apache Slider uses the SSL libraries from the Jetty project distributed by the
+Eclipse Foundation [http://eclipse.org/jetty](http://eclipse.org/jetty).
+
+See also the Apache Accumulo export control notice in the README:
+[http://accumulo.apache.org/downloads](http://accumulo.apache.org/downloads)
diff --git a/app-packages/accumulo/README.txt b/app-packages/accumulo/README.txt
deleted file mode 100644
index 8e8fac2..0000000
--- a/app-packages/accumulo/README.txt
+++ /dev/null
@@ -1,47 +0,0 @@
-<!---
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-
-How to create a Slider package for Accumulo?
-
-  mvn clean package -DskipTests -Paccumulo-app-package
-
-App package can be found in
-  app-packages/accumulo/target/apache-slider-accumulo-${accumulo.version}-app-package-${slider.version}.zip
-
-Verify the content using
-  zip -Tv apache-slider-accumulo-*.zip
-
-While appConfig.json and resources.json are not required for the package they
-work well as the default configuration for Slider apps. So it is advisable that
-when you create an application package for Slider, include sample/default
-resources.json and appConfig.json for a minimal Yarn cluster.
-
-The version of Accumulo used for the app package can be adjusted by adding a
-flag such as
-  -Daccumulo.version=1.5.1
-
-**Note that the LICENSE.txt and NOTICE.txt that are bundled with the app
-package are designed for Accumulo 1.6.0 only and may need to be modified to be
-applicable for other versions of the app package.
-
-Note also that the sample appConfig.json provided only works with Accumulo 1.6,
-while for Accumulo 1.5 the instance.volumes property must be replaced with
-instance.dfs.dir (and it cannot use the provided variable ${DEFAULT_DATA_DIR}
-which is an HDFS URI).
-
-A less descriptive file name can be specified with
--Dapp.package.name=accumulo_160 which would create a file accumulo_160.zip.
diff --git a/app-packages/accumulo/appConfig-default.json b/app-packages/accumulo/appConfig-default.json
new file mode 100644
index 0000000..9d11bae
--- /dev/null
+++ b/app-packages/accumulo/appConfig-default.json
@@ -0,0 +1,69 @@
+{
+  "schema": "http://example.org/specification/v2.0.0",
+  "metadata": {
+  },
+  "global": {
+    "application.def": ".slider/package/ACCUMULO/${app.package.name}.zip",
+    "java_home": "${app.java.home}",
+
+    "site.global.app_root": "${AGENT_WORK_ROOT}/app/install/accumulo-${accumulo.version}",
+    "site.global.app_user": "${app.user}",
+    "site.global.user_group": "${app.user.group}",
+
+    "site.accumulo-env.java_home": "${JAVA_HOME}",
+    "site.accumulo-env.tserver_heapsize": "256m",
+    "site.accumulo-env.master_heapsize": "128m",
+    "site.accumulo-env.monitor_heapsize": "64m",
+    "site.accumulo-env.gc_heapsize": "64m",
+    "site.accumulo-env.other_heapsize": "128m",
+    "site.accumulo-env.hadoop_prefix": "${hadoop.dir}",
+    "site.accumulo-env.hadoop_conf_dir": "/etc/hadoop/conf",
+    "site.accumulo-env.zookeeper_home": "${zk.dir}",
+
+    "site.client.instance.name": "${USER}-${CLUSTER_NAME}",
+
+    "site.global.accumulo_root_password": "NOT_USED",
+    "site.global.ssl_cert_dir": "ssl",
+    "site.global.monitor_protocol": "http",
+
+    "site.accumulo-site.instance.volumes": "${DEFAULT_DATA_DIR}/data",
+    "site.accumulo-site.instance.zookeeper.host": "${ZK_HOST}",
+    "site.accumulo-site.instance.security.authenticator": "org.apache.slider.accumulo.CustomAuthenticator",
+
+    "site.accumulo-site.general.security.credential.provider.paths": "jceks://hdfs/user/${USER}/accumulo-${CLUSTER_NAME}.jceks",
+    "site.accumulo-site.instance.rpc.ssl.enabled": "false",
+    "site.accumulo-site.instance.rpc.ssl.clientAuth": "false",
+    "site.accumulo-site.general.kerberos.keytab": "${accumulo.keytab}",
+    "site.accumulo-site.general.kerberos.principal": "${accumulo.principal}",
+
+    "site.accumulo-site.tserver.memory.maps.native.enabled": "false",
+    "site.accumulo-site.tserver.memory.maps.max": "80M",
+    "site.accumulo-site.tserver.cache.data.size": "7M",
+    "site.accumulo-site.tserver.cache.index.size": "20M",
+    "site.accumulo-site.tserver.sort.buffer.size": "50M",
+    "site.accumulo-site.tserver.walog.max.size": "40M",
+
+    "site.accumulo-site.trace.user": "root",
+
+    "site.accumulo-site.master.port.client": "0",
+    "site.accumulo-site.trace.port.client": "0",
+    "site.accumulo-site.tserver.port.client": "0",
+    "site.accumulo-site.gc.port.client": "0",
+    "site.accumulo-site.monitor.port.client": "${ACCUMULO_MONITOR.ALLOCATED_PORT}",
+    "site.accumulo-site.monitor.port.log4j": "0",
+    "site.accumulo-site.master.replication.coordinator.port": "0",
+    "site.accumulo-site.replication.receipt.service.port": "0",
+
+    "site.accumulo-site.general.classpaths": "$ACCUMULO_HOME/lib/accumulo-server.jar,\n$ACCUMULO_HOME/lib/accumulo-core.jar,\n$ACCUMULO_HOME/lib/accumulo-start.jar,\n$ACCUMULO_HOME/lib/accumulo-fate.jar,\n$ACCUMULO_HOME/lib/accumulo-proxy.jar,\n$ACCUMULO_HOME/lib/[^.].*.jar,\n$ZOOKEEPER_HOME/zookeeper[^.].*.jar,\n$HADOOP_CONF_DIR,\n${@//site/accumulo-env/hadoop_conf_dir},\n$HADOOP_PREFIX/[^.].*.jar,\n$HADOOP_PREFIX/lib/[^.].*.jar,\n$HADOOP_PREFIX/share/hadoop/common/.*.jar,\n$HADOOP_PREFIX/share/hadoop/common/lib/.*.jar,\n$HADOOP_PREFIX/share/hadoop/hdfs/.*.jar,\n$HADOOP_PREFIX/share/hadoop/mapreduce/.*.jar,\n$HADOOP_PREFIX/share/hadoop/yarn/.*.jar,\n${hadoop.dir}/.*.jar,\n${hadoop.dir}/lib/.*.jar,\n${hdfs.dir}/.*.jar,\n${mapred.dir}/.*.jar,\n${yarn.dir}/.*.jar,"
+  },
+  "credentials": {
+    "jceks://hdfs/user/${USER}/accumulo-${CLUSTER_NAME}.jceks": ["root.initial.password", "instance.secret", "trace.token.property.password"]
+  },
+  "components": {
+    "slider-appmaster": {
+      "jvm.heapsize": "256M",
+      "slider.am.keytab.local.path": "${accumulo.headless.keytab}",
+      "slider.keytab.principal.name": "${accumulo.headless.principal}"
+    }
+  }
+}
diff --git a/app-packages/accumulo/appConfig-secured-default.json b/app-packages/accumulo/appConfig-secured-default.json
new file mode 100644
index 0000000..b493ccc
--- /dev/null
+++ b/app-packages/accumulo/appConfig-secured-default.json
@@ -0,0 +1,70 @@
+{
+  "schema": "http://example.org/specification/v2.0.0",
+  "metadata": {
+  },
+  "global": {
+    "application.def": ".slider/package/ACCUMULO/${app.package.name}.zip",
+    "java_home": "${app.java.home}",
+
+    "site.global.app_root": "${AGENT_WORK_ROOT}/app/install/accumulo-${accumulo.version}",
+    "site.global.app_user": "${USER}",
+    "site.global.user_group": "${USER}",
+
+    "site.accumulo-env.java_home": "${JAVA_HOME}",
+    "site.accumulo-env.tserver_heapsize": "256m",
+    "site.accumulo-env.master_heapsize": "128m",
+    "site.accumulo-env.monitor_heapsize": "64m",
+    "site.accumulo-env.gc_heapsize": "64m",
+    "site.accumulo-env.other_heapsize": "128m",
+    "site.accumulo-env.hadoop_prefix": "${hadoop.dir}",
+    "site.accumulo-env.hadoop_conf_dir": "/etc/hadoop/conf",
+    "site.accumulo-env.zookeeper_home": "${zk.dir}",
+
+    "site.client.instance.name": "${USER}-${CLUSTER_NAME}",
+
+    "site.global.accumulo_root_password": "NOT_USED",
+    "site.global.ssl_cert_dir": "ssl",
+    "site.global.monitor_protocol": "http",
+
+    "site.accumulo-site.instance.volumes": "${DEFAULT_DATA_DIR}/data",
+    "site.accumulo-site.instance.zookeeper.host": "${ZK_HOST}",
+    "site.accumulo-site.instance.security.authenticator": "org.apache.slider.accumulo.CustomAuthenticator",
+
+    "site.accumulo-site.general.security.credential.provider.paths": "jceks://hdfs/user/${USER}/accumulo-${CLUSTER_NAME}.jceks",
+    "site.accumulo-site.instance.rpc.ssl.enabled": "false",
+    "site.accumulo-site.instance.rpc.ssl.clientAuth": "false",
+    "site.accumulo-site.general.kerberos.keytab": "${AGENT_WORK_ROOT}/keytabs/${USER_NAME}.ACCUMULO.service.keytab",
+    "site.accumulo-site.general.kerberos.principal": "${USER_NAME}/_HOST@EXAMPLE.COM",
+
+    "site.accumulo-site.tserver.memory.maps.native.enabled": "false",
+    "site.accumulo-site.tserver.memory.maps.max": "80M",
+    "site.accumulo-site.tserver.cache.data.size": "7M",
+    "site.accumulo-site.tserver.cache.index.size": "20M",
+    "site.accumulo-site.tserver.sort.buffer.size": "50M",
+    "site.accumulo-site.tserver.walog.max.size": "40M",
+
+    "site.accumulo-site.trace.user": "root",
+
+    "site.accumulo-site.master.port.client": "0",
+    "site.accumulo-site.trace.port.client": "0",
+    "site.accumulo-site.tserver.port.client": "0",
+    "site.accumulo-site.gc.port.client": "0",
+    "site.accumulo-site.monitor.port.client": "${ACCUMULO_MONITOR.ALLOCATED_PORT}",
+    "site.accumulo-site.monitor.port.log4j": "0",
+    "site.accumulo-site.master.replication.coordinator.port": "0",
+    "site.accumulo-site.replication.receipt.service.port": "0",
+
+    "site.accumulo-site.general.classpaths": "$ACCUMULO_HOME/lib/accumulo-server.jar,\n$ACCUMULO_HOME/lib/accumulo-core.jar,\n$ACCUMULO_HOME/lib/accumulo-start.jar,\n$ACCUMULO_HOME/lib/accumulo-fate.jar,\n$ACCUMULO_HOME/lib/accumulo-proxy.jar,\n$ACCUMULO_HOME/lib/[^.].*.jar,\n$ZOOKEEPER_HOME/zookeeper[^.].*.jar,\n$HADOOP_CONF_DIR,\n${@//site/accumulo-env/hadoop_conf_dir},\n$HADOOP_PREFIX/[^.].*.jar,\n$HADOOP_PREFIX/lib/[^.].*.jar,\n$HADOOP_PREFIX/share/hadoop/common/.*.jar,\n$HADOOP_PREFIX/share/hadoop/common/lib/.*.jar,\n$HADOOP_PREFIX/share/hadoop/hdfs/.*.jar,\n$HADOOP_PREFIX/share/hadoop/mapreduce/.*.jar,\n$HADOOP_PREFIX/share/hadoop/yarn/.*.jar,\n${hadoop.dir}/.*.jar,\n${hadoop.dir}/lib/.*.jar,\n${hdfs.dir}/.*.jar,\n${mapred.dir}/.*.jar,\n${yarn.dir}/.*.jar,"
+  },
+  "credentials": {
+    "jceks://hdfs/user/${USER}/accumulo-${CLUSTER_NAME}.jceks": ["root.initial.password", "instance.secret", "trace.token.property.password"]
+  },
+  "components": {
+    "slider-appmaster": {
+      "jvm.heapsize": "256M",
+      "slider.hdfs.keytab.dir": ".slider/keytabs/accumulo",
+      "slider.am.login.keytab.name": "${USER_NAME}.headless.keytab",
+      "slider.keytab.principal.name": "${USER_NAME}"
+    }
+  }
+}
diff --git a/app-packages/accumulo/appConfig.json b/app-packages/accumulo/appConfig.json
deleted file mode 100644
index 6b7033e..0000000
--- a/app-packages/accumulo/appConfig.json
+++ /dev/null
@@ -1,61 +0,0 @@
-{
-  "schema": "http://example.org/specification/v2.0.0",
-  "metadata": {
-  },
-  "global": {
-    "application.def": "${app.package.name}.zip",
-    "config_types": "accumulo-site",
-    "java_home": "/usr/jdk64/jdk1.7.0_45",
-    "package_list": "files/accumulo-${accumulo.version}-bin.tar.gz",
-    "site.global.app_user": "yarn",
-    "site.global.app_log_dir": "${AGENT_LOG_ROOT}",
-    "site.global.app_pid_dir": "${AGENT_WORK_ROOT}/app/run",
-    "site.global.app_root": "${AGENT_WORK_ROOT}/app/install/accumulo-${accumulo.version}",
-    "site.global.app_install_dir": "${AGENT_WORK_ROOT}/app/install",
-    "site.global.tserver_heapsize": "128m",
-    "site.global.master_heapsize": "128m",
-    "site.global.monitor_heapsize": "64m",
-    "site.global.gc_heapsize": "64m",
-    "site.global.other_heapsize": "128m",
-    "site.global.hadoop_prefix": "/usr/lib/hadoop",
-    "site.global.hadoop_conf_dir": "/etc/hadoop/conf",
-    "site.global.zookeeper_home": "/usr/lib/zookeeper",
-    "site.global.accumulo_instance_name": "instancename",
-    "site.global.accumulo_root_password": "secret",
-    "site.global.user_group": "hadoop",
-    "site.global.security_enabled": "false",
-    "site.global.monitor_protocol": "http",
-    "site.accumulo-site.instance.volumes": "${DEFAULT_DATA_DIR}/data",
-    "site.accumulo-site.instance.zookeeper.host": "${ZK_HOST}",
-    "site.accumulo-site.instance.secret": "DEFAULT",
-    "site.accumulo-site.tserver.memory.maps.max": "80M",
-    "site.accumulo-site.tserver.cache.data.size": "7M",
-    "site.accumulo-site.tserver.cache.index.size": "20M",
-    "site.accumulo-site.trace.token.property.password": "secret",
-    "site.accumulo-site.trace.user": "root",
-    "site.accumulo-site.tserver.sort.buffer.size": "50M",
-    "site.accumulo-site.tserver.walog.max.size": "100M",
-    "site.accumulo-site.master.port.client": "0",
-    "site.accumulo-site.trace.port.client": "0",
-    "site.accumulo-site.tserver.port.client": "0",
-    "site.accumulo-site.gc.port.client": "0",
-    "site.accumulo-site.monitor.port.client": "${ACCUMULO_MONITOR.ALLOCATED_PORT}",
-    "site.accumulo-site.monitor.port.log4j": "0",
-    "site.accumulo-site.general.classpaths": "$ACCUMULO_HOME/lib/accumulo-server.jar,\n$ACCUMULO_HOME/lib/accumulo-core.jar,\n$ACCUMULO_HOME/lib/accumulo-start.jar,\n$ACCUMULO_HOME/lib/accumulo-fate.jar,\n$ACCUMULO_HOME/lib/accumulo-proxy.jar,\n$ACCUMULO_HOME/lib/[^.].*.jar,\n$ZOOKEEPER_HOME/zookeeper[^.].*.jar,\n$HADOOP_CONF_DIR,\n$HADOOP_PREFIX/[^.].*.jar,\n$HADOOP_PREFIX/lib/[^.].*.jar,\n$HADOOP_PREFIX/share/hadoop/common/.*.jar,\n$HADOOP_PREFIX/share/hadoop/common/lib/.*.jar,\n$HADOOP_PREFIX/share/hadoop/hdfs/.*.jar,\n$HADOOP_PREFIX/share/hadoop/mapreduce/.*.jar,\n$HADOOP_PREFIX/share/hadoop/yarn/.*.jar,\n/usr/lib/hadoop/.*.jar,\n/usr/lib/hadoop/lib/.*.jar,\n/usr/lib/hadoop-hdfs/.*.jar,\n/usr/lib/hadoop-mapreduce/.*.jar,\n/usr/lib/hadoop-yarn/.*.jar,"
-  },
-  "components": {
-    "ACCUMULO_MASTER": {
-    },
-    "slider-appmaster": {
-      "jvm.heapsize": "256M"
-    },
-    "ACCUMULO_TSERVER": {
-    },
-    "ACCUMULO_MONITOR": {
-    },
-    "ACCUMULO_GC": {
-    },
-    "ACCUMULO_TRACER": {
-    }
-  }
-}
diff --git a/app-packages/accumulo/configuration/global.xml b/app-packages/accumulo/configuration/accumulo-env.xml
similarity index 63%
rename from app-packages/accumulo/configuration/global.xml
rename to app-packages/accumulo/configuration/accumulo-env.xml
index 5d39dca..65b6804 100644
--- a/app-packages/accumulo/configuration/global.xml
+++ b/app-packages/accumulo/configuration/accumulo-env.xml
@@ -22,18 +22,8 @@
 
 <configuration>
   <property>
-    <name>app_log_dir</name>
-    <value>/var/log/accumulo</value>
-    <description>Log Directories for Accumulo.</description>
-  </property>
-  <property>
-    <name>app_pid_dir</name>
-    <value>/var/run/accumulo</value>
-    <description>Pid Directories for Accumulo.</description>
-  </property>
-  <property>
     <name>tserver_heapsize</name>
-    <value>128m</value>
+    <value>256m</value>
     <description>TServer heap size.</description>
   </property>
   <property>
@@ -57,21 +47,6 @@
     <description>Other Heap Size</description>
   </property>
   <property>
-    <name>accumulo_hdfs_root_dir</name>
-    <value>/apps/accumulo/data</value>
-    <description>Accumulo Relative Path to HDFS.</description>
-  </property>
-  <property>
-    <name>accumulo_conf_dir</name>
-    <value>/etc/accumulo</value>
-    <description>Config Directory for Accumulo.</description>
-  </property>
-  <property>
-    <name>app_user</name>
-    <value>yarn</value>
-    <description>Accumulo User Name.</description>
-  </property>
-  <property>
     <name>hadoop_prefix</name>
     <value>/usr/lib/hadoop</value>
     <description>Hadoop directory.</description>
@@ -91,4 +66,24 @@
     <value>accumulo-instance</value>
     <description>Accumulo Instance Name.</description>
   </property>
+  <property>
+    <name>content</name>
+    <description>This is the template for a client accumulo-env.sh file</description>
+    <value>
+#! /usr/bin/env bash
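+# values written as ${@//site/<dict>/<name>} are references that Slider
+# resolves from the named configuration dictionary when this file is generated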
+export HADOOP_PREFIX=${@//site/accumulo-env/hadoop_prefix}
+export HADOOP_CONF_DIR=${@//site/accumulo-env/hadoop_conf_dir}
+export JAVA_HOME=${@//site/accumulo-env/java_home}
+export ZOOKEEPER_HOME=${@//site/accumulo-env/zookeeper_home}
+export ACCUMULO_LOG_DIR=$ACCUMULO_HOME/logs
+export ACCUMULO_TSERVER_OPTS="-Xmx${@//site/accumulo-env/tserver_heapsize} -Xms${@//site/accumulo-env/tserver_heapsize}"
+export ACCUMULO_MASTER_OPTS="-Xmx${@//site/accumulo-env/master_heapsize} -Xms${@//site/accumulo-env/master_heapsize}"
+export ACCUMULO_MONITOR_OPTS="-Xmx${@//site/accumulo-env/monitor_heapsize} -Xms${@//site/accumulo-env/monitor_heapsize}"
+export ACCUMULO_GC_OPTS="-Xmx${@//site/accumulo-env/gc_heapsize} -Xms${@//site/accumulo-env/gc_heapsize}"
+export ACCUMULO_GENERAL_OPTS="-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -Djava.net.preferIPv4Stack=true"
+export ACCUMULO_OTHER_OPTS="-Xmx${@//site/accumulo-env/other_heapsize} -Xms${@//site/accumulo-env/other_heapsize}"
+# what to do when the JVM runs out of heap memory
+export ACCUMULO_KILL_CMD='kill -9 %p'
+    </value>
+  </property>
 </configuration>
diff --git a/app-packages/accumulo/configuration/accumulo-site.xml b/app-packages/accumulo/configuration/accumulo-site.xml
index 269cc2b..3001c45 100644
--- a/app-packages/accumulo/configuration/accumulo-site.xml
+++ b/app-packages/accumulo/configuration/accumulo-site.xml
@@ -28,17 +28,6 @@
   </property>
 
   <property>
-    <name>instance.secret</name>
-    <value>DEFAULT</value>
-    <description>A secret unique to a given instance that all servers
-      must know in order to communicate with one another.
-      Change it before initialization. To
-      change it later use ./bin/accumulo org.apache.accumulo.server.util.ChangeSecret --old [oldpasswd] --new [newpasswd],
-      and then update this file.
-    </description>
-  </property>
-
-  <property>
     <name>tserver.memory.maps.max</name>
     <value>80M</value>
   </property>
@@ -54,12 +43,6 @@
   </property>
 
   <property>
-    <name>trace.token.property.password</name>
-    <!-- change this to the root user's password, and/or change the user below -->
-    <value>secret</value>
-  </property>
-
-  <property>
     <name>trace.user</name>
     <value>root</value>
   </property>
@@ -71,7 +54,7 @@
 
   <property>
     <name>tserver.walog.max.size</name>
-    <value>100M</value>
+    <value>40M</value>
   </property>
 
   <property>
diff --git a/app-packages/accumulo/configuration/client.xml b/app-packages/accumulo/configuration/client.xml
new file mode 100644
index 0000000..481b7d1
--- /dev/null
+++ b/app-packages/accumulo/configuration/client.xml
@@ -0,0 +1,49 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+
+<configuration>
+  <property>
+    <name>instance.name</name>
+    <value>accumulo-instance</value>
+    <description>Accumulo Instance Name.</description>
+  </property>
+  <property>
+    <name>instance.zookeeper.host</name>
+    <value>${@//site/accumulo-site/instance.zookeeper.host}</value>
+    <description>Zookeeper hosts.</description>
+  </property>
+  <property>
+    <name>instance.rpc.ssl.enabled</name>
+    <value>${@//site/accumulo-site/instance.rpc.ssl.enabled}</value>
+    <description>SSL enabled.</description>
+  </property>
+  <property>
+    <name>instance.rpc.ssl.clientAuth</name>
+    <value>${@//site/accumulo-site/instance.rpc.ssl.clientAuth}</value>
+    <description>SSL client auth enabled.</description>
+  </property>
+  <property>
+    <name>general.security.credential.provider.paths</name>
+    <value>${@//site/accumulo-site/general.security.credential.provider.paths}</value>
+    <description>Client credential provider containing cert passwords.</description>
+  </property>
+</configuration>
diff --git a/app-packages/storm/package/files/apache-storm-0.9.1.2.1.1.0-237.tar.gz.REPLACE b/app-packages/accumulo/getconf.sh
old mode 100644
new mode 100755
similarity index 63%
copy from app-packages/storm/package/files/apache-storm-0.9.1.2.1.1.0-237.tar.gz.REPLACE
copy to app-packages/accumulo/getconf.sh
index dd934d5..7d6a1ac
--- a/app-packages/storm/package/files/apache-storm-0.9.1.2.1.1.0-237.tar.gz.REPLACE
+++ b/app-packages/accumulo/getconf.sh
@@ -13,4 +13,9 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-Replace with the actual storm package.
+# usage: getconf.sh <cluster-name>
+CLUSTER=$1
+
+# fetch the configurations published by the running cluster from the Slider registry
+slider registry --getconf accumulo-site --name "$CLUSTER" --format xml --dest accumulo-site.xml
+slider registry --getconf client --name "$CLUSTER" --format properties --dest client.conf
+slider registry --getconf accumulo-env --name "$CLUSTER" --format json --dest accumulo-env.json
+# the accumulo-env dictionary holds the env shell script under its 'content'
+# key; extract it into accumulo-env.sh
+python -c "import json; f = open('accumulo-env.json'); content = json.load(f); f.close(); print content['content']" > accumulo-env.sh
diff --git a/app-packages/accumulo/metainfo.xml b/app-packages/accumulo/metainfo.xml
index b1aa9de..d3fb263 100644
--- a/app-packages/accumulo/metainfo.xml
+++ b/app-packages/accumulo/metainfo.xml
@@ -40,7 +40,7 @@
             </value>
           </export>
           <export>
-            <name>app.jmx</name>
+            <name>org.apache.slider.jmx</name>
             <value>
               ${site.global.monitor_protocol}://${ACCUMULO_MONITOR_HOST}:${site.accumulo-site.monitor.port.client}/xml
             </value>
@@ -69,6 +69,14 @@
         <command>ACCUMULO_TRACER-START</command>
         <requires>ACCUMULO_MASTER-STARTED</requires>
       </commandOrder>
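+      <!-- the GC and tracer also require live tservers before they start -->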
+      <commandOrder>
+        <command>ACCUMULO_GC-START</command>
+        <requires>ACCUMULO_TSERVER-STARTED</requires>
+      </commandOrder>
+      <commandOrder>
+        <command>ACCUMULO_TRACER-START</command>
+        <requires>ACCUMULO_TSERVER-STARTED</requires>
+      </commandOrder>
     </commandOrders>
     <components>
       <component>
@@ -85,7 +93,7 @@
         <name>ACCUMULO_MONITOR</name>
         <category>MASTER</category>
         <publishConfig>true</publishConfig>
-        <appExports>QuickLinks-app.jmx,QuickLinks-org.apache.slider.monitor</appExports>
+        <appExports>QuickLinks-org.apache.slider.jmx,QuickLinks-org.apache.slider.monitor</appExports>
         <commandScript>
           <script>scripts/accumulo_monitor.py</script>
           <scriptType>PYTHON</scriptType>
@@ -144,5 +152,23 @@
       </osSpecific>
     </osSpecifics>
 
+    <configFiles>
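+      <!-- configuration dictionaries published by the application; each can be
+           retrieved from the Slider registry in the listed format (see
+           getconf.sh) -->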
+      <configFile>
+        <type>xml</type>
+        <fileName>accumulo-site.xml</fileName>
+        <dictionaryName>accumulo-site</dictionaryName>
+      </configFile>
+      <configFile>
+        <type>env</type>
+        <fileName>accumulo-env.sh</fileName>
+        <dictionaryName>accumulo-env</dictionaryName>
+      </configFile>
+      <configFile>
+        <type>properties</type>
+        <fileName>client.conf</fileName>
+        <dictionaryName>client</dictionaryName>
+      </configFile>
+    </configFiles>
+
   </application>
 </metainfo>
diff --git a/app-packages/accumulo/package/files/accumulo-metrics.xml b/app-packages/accumulo/package/files/accumulo-metrics.xml
index 60f9f8d..3b97809 100644
--- a/app-packages/accumulo/package/files/accumulo-metrics.xml
+++ b/app-packages/accumulo/package/files/accumulo-metrics.xml
@@ -33,10 +33,6 @@
     <enabled type="boolean">false</enabled>
     <logging type="boolean">false</logging>
   </master>
-  <logger>
-    <enabled type="boolean">false</enabled>
-    <logging type="boolean">false</logging>
-  </logger>
   <tserver>
     <enabled type="boolean">false</enabled>
     <logging type="boolean">false</logging>
@@ -57,4 +53,8 @@
     <enabled type="boolean">false</enabled>
     <logging type="boolean">false</logging>
   </thrift>
+  <replication>
+    <enabled type="boolean">false</enabled>
+    <logging type="boolean">false</logging>
+  </replication>
 </config>
diff --git a/app-packages/accumulo/package/files/log4j.properties b/app-packages/accumulo/package/files/log4j.properties
index a4bcb2e..f3eaddc 100644
--- a/app-packages/accumulo/package/files/log4j.properties
+++ b/app-packages/accumulo/package/files/log4j.properties
@@ -20,8 +20,9 @@
 # hide Jetty junk
 log4j.logger.org.mortbay.log=WARN,A1
 
-# hide "Got brand-new compresssor" messages
+# hide "Got brand-new compressor" messages
 log4j.logger.org.apache.hadoop.io.compress=WARN,A1
+log4j.logger.org.apache.accumulo.core.file.rfile.bcfile.Compression=WARN,A1
 
 # hide junk from TestRandomDeletes
 log4j.logger.org.apache.accumulo.test.TestRandomDeletes=WARN,A1
diff --git a/app-packages/accumulo/package/scripts/accumulo_client.py b/app-packages/accumulo/package/scripts/accumulo_client.py
index 45d07dd..f50addf 100644
--- a/app-packages/accumulo/package/scripts/accumulo_client.py
+++ b/app-packages/accumulo/package/scripts/accumulo_client.py
@@ -36,7 +36,7 @@
     setup_conf_dir(name='client')
 
   def status(self, env):
-    raise ClientComponentHasNoStatus()
+    pass
 
 
 if __name__ == "__main__":
diff --git a/app-packages/accumulo/package/scripts/accumulo_configuration.py b/app-packages/accumulo/package/scripts/accumulo_configuration.py
index 8299c36..fb4410e 100644
--- a/app-packages/accumulo/package/scripts/accumulo_configuration.py
+++ b/app-packages/accumulo/package/scripts/accumulo_configuration.py
@@ -20,8 +20,7 @@
 
 from resource_management import *
 
-def setup_conf_dir(name=None, # 'master' or 'tserver' or 'monitor' or 'gc' or 'tracer' or 'client'
-              extra_params=None):
+def setup_conf_dir(name=None): # 'master' or 'tserver' or 'monitor' or 'gc' or 'tracer' or 'client'
   import params
 
   # create the conf directory
@@ -31,6 +30,51 @@
       recursive = True
   )
 
+  ssl_params = False
+  if params.ssl_enabled or (params.monitor_security_enabled and
+                                name == 'monitor'):
+    import os
+
+    ssl_params = True
+    if os.path.exists(params.keystore_path) or os.path.exists(params.truststore_path):
+      if os.path.exists(params.keystore_path) and os.path.exists(params.truststore_path):
+        # assume keystores were already set up properly
+        pass
+      else:
+        # setup_conf_dir is a module-level function (no self); raise directly
+        # when only one of the keystore/truststore pair exists
+        raise Exception("something went wrong when certs were created")
+
+    Directory( format("{params.conf_dir}/ssl"),
+               owner = params.accumulo_user,
+               group = params.user_group,
+               recursive = True
+    )
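+    # fetch the truststore and the per-host keystore from the HDFS cert
+    # directory if they are not already present locally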
+    if not os.path.exists(params.truststore_path):
+      Execute( format("{hadoop_prefix}/bin/hadoop fs -get {params.ssl_cert_dir}/truststore.jks "
+                      "{params.truststore_path}"),
+               user=params.accumulo_user)
+      File( params.truststore_path,
+            mode=0600,
+            group=params.user_group,
+            owner=params.accumulo_user,
+            replace=False)
+    if not os.path.exists(params.keystore_path):
+      Execute( format("{hadoop_prefix}/bin/hadoop fs -get {params.ssl_cert_dir}/{params.hostname}.jks "
+                      "{params.keystore_path}"),
+               user=params.accumulo_user)
+      File( params.keystore_path,
+            mode=0600,
+            group=params.user_group,
+            owner=params.accumulo_user,
+            replace=False)
+
+  jarname = "SliderAccumuloUtils.jar"
+  File(format("{params.accumulo_root}/lib/{jarname}"),
+       mode=0644,
+       group=params.user_group,
+       owner=params.accumulo_user,
+       content=StaticFile(jarname)
+  )
+
   if name != "client":
     # create pid dir
     Directory( params.pid_dir,
@@ -47,12 +91,16 @@
     )
 
     configs = {}
-    if extra_params == None:
-      configs = params.config['configurations']['accumulo-site']
-    else:
+    if ssl_params:
       configs.update(params.config['configurations']['accumulo-site'])
-      for k in extra_params:
-        configs[k] = extra_params[k]
+      if (params.monitor_security_enabled and name == 'monitor'):
+        configs[params.monitor_keystore_property] = params.keystore_path
+        configs[params.monitor_truststore_property] = params.truststore_path
+      if params.ssl_enabled:
+        configs[params.ssl_keystore_file_property] = params.keystore_path
+        configs[params.ssl_truststore_file_property] = params.truststore_path
+    else:
+      configs = params.config['configurations']['accumulo-site']
 
     # create a site file for server processes
     XmlConfig( "accumulo-site.xml",
@@ -66,7 +114,6 @@
     # create a minimal site file for client processes
     client_configurations = {}
     client_configurations['instance.zookeeper.host'] = params.config['configurations']['accumulo-site']['instance.zookeeper.host']
-    client_configurations['instance.dfs.dir'] = params.config['configurations']['accumulo-site']['instance.dfs.dir']
     client_configurations['instance.volumes'] = params.config['configurations']['accumulo-site']['instance.volumes']
     client_configurations['general.classpaths'] = params.config['configurations']['accumulo-site']['general.classpaths']
     XmlConfig( "accumulo-site.xml",
@@ -79,6 +126,13 @@
   # create env file
   accumulo_TemplateConfig( 'accumulo-env.sh')
 
+  # create client.conf file
+  PropertiesFile(format("{params.conf_dir}/client.conf"),
+       properties = params.config['configurations']['client'],
+       owner = params.accumulo_user,
+       group = params.user_group
+  )
+
   # create host files
   accumulo_StaticFile( 'masters')
   accumulo_StaticFile( 'slaves')
diff --git a/app-packages/accumulo/package/scripts/accumulo_script.py b/app-packages/accumulo/package/scripts/accumulo_script.py
index 5e2ceba..6227261 100644
--- a/app-packages/accumulo/package/scripts/accumulo_script.py
+++ b/app-packages/accumulo/package/scripts/accumulo_script.py
@@ -22,10 +22,8 @@
 from resource_management.core.environment import Environment
 
 from accumulo_configuration import setup_conf_dir
-from accumulo_configuration import accumulo_StaticFile
 from accumulo_service import accumulo_service
 
-
 class AccumuloScript(Script):
   def __init__(self, component):
     self.component = component
@@ -37,44 +35,7 @@
     import params
     env.set_params(params)
 
-    if params.monitor_security_enabled and self.component == 'monitor':
-      import os
-      import random
-      import string
-
-      basedir = Environment.get_instance().config.basedir
-      keystore_file = os.path.join(basedir, "files", "keystore.jks")
-      truststore_file = os.path.join(basedir, "files", "cacerts.jks")
-      cert_file = os.path.join(basedir, "files", "server.cer")
-
-      if os.path.exists(keystore_file) or os.path.exists(truststore_file) or os.path.exists(cert_file):
-        self.fail_with_error("trying to create monitor certs but they already existed")
-
-      goodchars = string.lowercase + string.uppercase + string.digits + '#%+,-./:=?@^_'
-      keypass = ''.join(random.choice(goodchars) for x in range(20))
-      storepass = ''.join(random.choice(goodchars) for x in range(20))
-
-      https_params = {}
-      https_params[params.keystore_property] = params.keystore_path
-      https_params[params.truststore_property] = params.truststore_path
-      https_params[params.keystore_password_property] = keypass
-      https_params[params.truststore_password_property] = storepass
-
-      setup_conf_dir(name=self.component, extra_params=https_params)
-
-      Execute( format("{java64_home}/bin/keytool -genkey -alias \"default\" -keyalg RSA -keypass {keypass} -storepass {storepass} -keystore {keystore_file} -dname \"CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown\""),
-               user=params.accumulo_user)
-      Execute( format("{java64_home}/bin/keytool -export -alias \"default\" -storepass {storepass} -file {cert_file} -keystore {keystore_file}"),
-               user=params.accumulo_user)
-      Execute( format("echo \"yes\" | {java64_home}/bin/keytool -import -v -trustcacerts -alias \"default\" -file {cert_file} -keystore {truststore_file} -keypass {keypass} -storepass {storepass}"),
-               user=params.accumulo_user)
-
-      accumulo_StaticFile("keystore.jks")
-      accumulo_StaticFile("cacerts.jks")
-
-    else:
-      setup_conf_dir(name=self.component)
-
+    setup_conf_dir(name=self.component)
 
   def start(self, env):
     import params
@@ -82,9 +43,17 @@
     self.configure(env) # for security
 
     if self.component == 'master':
-      Execute( format("{daemon_script} init --instance-name {accumulo_instance_name} --password {accumulo_root_password} --clear-instance-name"),
-               not_if=format("hadoop fs -stat {accumulo_hdfs_root_dir}"),
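+      # initialize the instance only if the HDFS root dir does not already
+      # exist; if init fails partway, remove the partial directory so a retry
+      # can start clean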
+      try:
+        Execute( format("{daemon_script} init --instance-name {accumulo_instance_name} --password {accumulo_root_password} --clear-instance-name >{log_dir}/accumulo-{accumulo_user}-init.out 2>{log_dir}/accumulo-{accumulo_user}-init.err"),
+               not_if=format("{hadoop_prefix}/bin/hadoop fs -stat {accumulo_hdfs_root_dir}"),
                user=params.accumulo_user)
+      except Exception, e:
+        try:
+          Execute( format("{hadoop_prefix}/bin/hadoop fs -rm -R {accumulo_hdfs_root_dir}"),
+               user=params.accumulo_user)
+        except:
+          pass
+        raise e
 
     accumulo_service( self.component,
       action = 'start'
diff --git a/app-packages/accumulo/package/scripts/accumulo_service.py b/app-packages/accumulo/package/scripts/accumulo_service.py
index 562ef5d..ca21cc8 100644
--- a/app-packages/accumulo/package/scripts/accumulo_service.py
+++ b/app-packages/accumulo/package/scripts/accumulo_service.py
@@ -30,7 +30,7 @@
     pid_exists = format("ls {pid_file} >/dev/null 2>&1 && ps `cat {pid_file}` >/dev/null 2>&1")
 
     if action == 'start':
-      daemon_cmd = format("{daemon_script} {role} > {log_dir}/accumulo-{accumulo_user}-{role}.out 2>{log_dir}/accumulo-{accumulo_user}-{role}.err & echo $! > {pid_file}")
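+      # pass the public hostname so the daemon binds to and advertises the
+      # externally resolvable address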
+      daemon_cmd = format("{daemon_script} {role} --address {params.hostname} > {log_dir}/accumulo-{accumulo_user}-{role}.out 2>{log_dir}/accumulo-{accumulo_user}-{role}.err & echo $! > {pid_file}")
       Execute ( daemon_cmd,
         not_if=pid_exists,
         user=params.accumulo_user
diff --git a/app-packages/accumulo/package/scripts/params.py b/app-packages/accumulo/package/scripts/params.py
index 3eaa1ab..9e6e8dd 100644
--- a/app-packages/accumulo/package/scripts/params.py
+++ b/app-packages/accumulo/package/scripts/params.py
@@ -23,6 +23,7 @@
 
 # server configurations
 config = Script.get_config()
+hostname = config["public_hostname"]
 
 # user and status
 accumulo_user = status_params.accumulo_user
@@ -31,43 +32,51 @@
 
 # accumulo env
 java64_home = config['hostLevelParams']['java_home']
-hadoop_prefix = config['configurations']['global']['hadoop_prefix']
-hadoop_conf_dir = config['configurations']['global']['hadoop_conf_dir']
-zookeeper_home = config['configurations']['global']['zookeeper_home']
-master_heapsize = config['configurations']['global']['master_heapsize']
-tserver_heapsize = config['configurations']['global']['tserver_heapsize']
-monitor_heapsize = config['configurations']['global']['monitor_heapsize']
-gc_heapsize = config['configurations']['global']['gc_heapsize']
-other_heapsize = config['configurations']['global']['other_heapsize']
+hadoop_prefix = config['configurations']['accumulo-env']['hadoop_prefix']
+hadoop_conf_dir = config['configurations']['accumulo-env']['hadoop_conf_dir']
+zookeeper_home = config['configurations']['accumulo-env']['zookeeper_home']
+zookeeper_host = config['configurations']['accumulo-site']['instance.zookeeper.host']
+master_heapsize = config['configurations']['accumulo-env']['master_heapsize']
+tserver_heapsize = config['configurations']['accumulo-env']['tserver_heapsize']
+monitor_heapsize = config['configurations']['accumulo-env']['monitor_heapsize']
+gc_heapsize = config['configurations']['accumulo-env']['gc_heapsize']
+other_heapsize = config['configurations']['accumulo-env']['other_heapsize']
+env_sh_template = config['configurations']['accumulo-env']['content']
 
 # accumulo local directory structure
 accumulo_root = config['configurations']['global']['app_root']
-conf_dir = None
-if ('accumulo_conf_dir' in config['configurations']['global']):
-  conf_dir = config['configurations']['global']['accumulo_conf_dir']
-else:
-  conf_dir = format("{accumulo_root}/conf")
+conf_dir = format("{accumulo_root}/conf")
 log_dir = config['configurations']['global']['app_log_dir']
 daemon_script = format("{accumulo_root}/bin/accumulo")
 
 # accumulo monitor certificate properties
 monitor_security_enabled = config['configurations']['global']['monitor_protocol'] == "https"
-keystore_path = format("{accumulo_root}/conf/keystore.jks")
-truststore_path = format("{accumulo_root}/conf/cacerts.jks")
-cert_path = format("{accumulo_root}/conf/server.cer")
-keystore_property = "monitor.ssl.keyStore"
-keystore_password_property = "monitor.ssl.keyStorePassword"
-truststore_property = "monitor.ssl.trustStore"
-truststore_password_property = "monitor.ssl.trustStorePassword"
+monitor_keystore_property = "monitor.ssl.keyStore"
+monitor_truststore_property = "monitor.ssl.trustStore"
+
+# accumulo ssl properties
+ssl_enabled = False
+if 'instance.rpc.ssl.enabled' in config['configurations']['accumulo-site']:
+  # config values arrive as strings, so compare explicitly rather than relying
+  # on truthiness (the string "false" is truthy)
+  ssl_enabled = str(config['configurations']['accumulo-site']['instance.rpc.ssl.enabled']).lower() == 'true'
+clientauth_enabled = False
+if 'instance.rpc.ssl.clientAuth' in config['configurations']['accumulo-site']:
+  clientauth_enabled = str(config['configurations']['accumulo-site']['instance.rpc.ssl.clientAuth']).lower() == 'true'
+ssl_cert_dir = config['configurations']['global']['ssl_cert_dir']
+keystore_path = format("{conf_dir}/ssl/keystore.jks")
+truststore_path = format("{conf_dir}/ssl/truststore.jks")
+ssl_keystore_file_property = "rpc.javax.net.ssl.keyStore"
+ssl_truststore_file_property = "rpc.javax.net.ssl.trustStore"
+credential_provider = config['configurations']['accumulo-site']["general.security.credential.provider.paths"]
+#credential_provider = credential_provider.replace("${HOST}", hostname) # if enabled, must propagate to configuration
+if ssl_keystore_file_property in config['configurations']['accumulo-site']:
+  keystore_path = config['configurations']['accumulo-site'][ssl_keystore_file_property]
+if ssl_truststore_file_property in config['configurations']['accumulo-site']:
+  truststore_path = config['configurations']['accumulo-site'][ssl_truststore_file_property]
 
 # accumulo initialization parameters
-accumulo_instance_name = config['configurations']['global']['accumulo_instance_name']
+accumulo_instance_name = config['configurations']['client']['instance.name']
 accumulo_root_password = config['configurations']['global']['accumulo_root_password']
-accumulo_hdfs_root_dir = None
-if ('instance.dfs.dir' in config['configurations']['accumulo-site']):
-  accumulo_hdfs_root_dir = config['configurations']['accumulo-site']['instance.dfs.dir']
-else:
-  accumulo_hdfs_root_dir = config['configurations']['accumulo-site']['instance.volumes'].split(",")[0]
+accumulo_hdfs_root_dir = config['configurations']['accumulo-site']['instance.volumes'].split(",")[0]
 
 #log4j.properties
 if (('accumulo-log4j' in config['configurations']) and ('content' in config['configurations']['accumulo-log4j'])):
diff --git a/app-packages/accumulo/package/templates/accumulo-env.sh.j2 b/app-packages/accumulo/package/templates/accumulo-env.sh.j2
index 7ffec53..9e365af 100755
--- a/app-packages/accumulo/package/templates/accumulo-env.sh.j2
+++ b/app-packages/accumulo/package/templates/accumulo-env.sh.j2
@@ -36,7 +36,7 @@
 export ACCUMULO_MASTER_OPTS="-Xmx{{master_heapsize}} -Xms{{master_heapsize}}"
 export ACCUMULO_MONITOR_OPTS="-Xmx{{monitor_heapsize}} -Xms{{monitor_heapsize}}"
 export ACCUMULO_GC_OPTS="-Xmx{{gc_heapsize}} -Xms{{gc_heapsize}}"
-export ACCUMULO_GENERAL_OPTS="-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75"
+export ACCUMULO_GENERAL_OPTS="-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -Djava.net.preferIPv4Stack=true"
 export ACCUMULO_OTHER_OPTS="-Xmx{{other_heapsize}} -Xms{{other_heapsize}}"
 # what to do when the JVM runs out of heap memory
 export ACCUMULO_KILL_CMD='kill -9 %p'
diff --git a/app-packages/accumulo/pom.xml b/app-packages/accumulo/pom.xml
index fe71c70..469ca85 100644
--- a/app-packages/accumulo/pom.xml
+++ b/app-packages/accumulo/pom.xml
@@ -19,7 +19,7 @@
   <parent>
     <groupId>org.apache.slider</groupId>
     <artifactId>slider</artifactId>
-    <version>0.50.2-incubating</version>
+    <version>0.60.0-incubating</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
   <modelVersion>4.0.0</modelVersion>
@@ -30,11 +30,43 @@
 
   <properties>
     <work.dir>package-tmp</work.dir>
-    <app.package.name>apache-slider-accumulo-${accumulo.version}-app-package-${project.version}</app.package.name>
+    <app.package.name>${project.artifactId}-${pkg.version}</app.package.name>
+    <pkg.src>${project.build.directory}/${work.dir}</pkg.src>
+    <pkg.version>${project.version}</pkg.version>
+    <pkg.name>accumulo-${accumulo.version}-bin.tar.gz</pkg.name>
+    <!-- the following properties are used for testing -->
+    <slider.bin.dir>../../slider-assembly/target/slider-${project.version}-all/slider-${project.version}</slider.bin.dir>
+    <test.app.pkg.dir>${project.build.directory}</test.app.pkg.dir>
+    <test.app.resources.dir>${project.build.directory}/test-config</test.app.resources.dir>
+    <!-- these properties are used in the default and the test appConfigs -->
+    <hadoop.dir>/usr/lib/hadoop</hadoop.dir>
+    <hdfs.dir>/usr/lib/hadoop-hdfs</hdfs.dir>
+    <yarn.dir>/usr/lib/hadoop-yarn</yarn.dir>
+    <mapred.dir>/usr/lib/hadoop-mapred</mapred.dir>
+    <zk.dir>/usr/lib/zookeeper</zk.dir>
+    <app.java.home>${java.home}</app.java.home>
+    <app.user>yarn</app.user>
+    <app.user.group>hadoop</app.user.group>
+    <!-- these are for accumulo processes -->
+    <accumulo.keytab></accumulo.keytab>
+    <accumulo.principal></accumulo.principal>
+    <!-- these are for the AM -->
+    <accumulo.headless.keytab>${accumulo.keytab}</accumulo.headless.keytab>
+    <accumulo.headless.principal>${accumulo.principal}</accumulo.headless.principal>
   </properties>
 
   <profiles>
     <profile>
+      <id>hdp</id>
+      <properties>
+        <hadoop.dir>/usr/hdp/current/hadoop-client</hadoop.dir>
+        <hdfs.dir>/usr/hdp/current/hadoop-hdfs-client</hdfs.dir>
+        <yarn.dir>/usr/hdp/current/hadoop-yarn-client</yarn.dir>
+        <mapred.dir>/usr/hdp/current/hadoop-mapreduce-client</mapred.dir>
+        <zk.dir>/usr/hdp/current/zookeeper-client</zk.dir>
+      </properties>
+    </profile>
+    <profile>
       <id>accumulo-app-package</id>
       <build>
         <plugins>
@@ -59,6 +91,78 @@
 
           <plugin>
             <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-antrun-plugin</artifactId>
+            <version>${maven-antrun-plugin.version}</version>
+            <executions>
+              <execution>
+                <id>copy</id>
+                <phase>validate</phase>
+                <configuration>
+                  <target name="copy and rename file">
+                    <copy file="${pkg.src}/${pkg.name}" tofile="${project.build.directory}/${work.dir}/${pkg.name}" />
+                  </target>
+                </configuration>
+                <goals>
+                  <goal>run</goal>
+                </goals>
+              </execution>
+            </executions>
+          </plugin>
+        </plugins>
+      </build>
+    </profile>
+    <profile>
+      <id>accumulo-app-package-maven</id>
+      <dependencies>
+        <dependency>
+          <groupId>org.apache.accumulo</groupId>
+          <artifactId>accumulo</artifactId>
+          <version>${accumulo.version}</version>
+          <classifier>bin</classifier>
+          <type>tar.gz</type>
+          <exclusions>
+            <exclusion>
+              <groupId>org.apache.accumulo</groupId>
+              <artifactId>accumulo-fate</artifactId>
+            </exclusion>
+            <exclusion>
+              <groupId>org.apache.accumulo</groupId>
+              <artifactId>accumulo-gc</artifactId>
+            </exclusion>
+            <exclusion>
+              <groupId>org.apache.accumulo</groupId>
+              <artifactId>accumulo-master</artifactId>
+            </exclusion>
+            <exclusion>
+              <groupId>org.apache.accumulo</groupId>
+              <artifactId>accumulo-minicluster</artifactId>
+            </exclusion>
+          </exclusions>
+        </dependency>
+      </dependencies>
+      <build>
+        <plugins>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-assembly-plugin</artifactId>
+            <configuration>
+              <descriptor>src/assembly/accumulo.xml</descriptor>
+              <appendAssemblyId>false</appendAssemblyId>
+              <finalName>${project.artifactId}-${pkg.version}</finalName>
+            </configuration>
+            <executions>
+              <execution>
+                <id>build-app-package</id>
+                <phase>package</phase>
+                <goals>
+                  <goal>single</goal>
+                </goals>
+              </execution>
+            </executions>
+          </plugin>
+
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
             <artifactId>maven-dependency-plugin</artifactId>
             <version>${maven-dependency-plugin.version}</version>
             <executions>
@@ -77,7 +181,13 @@
               </execution>
             </executions>
           </plugin>
-
+        </plugins>
+      </build>
+    </profile>
+    <profile>
+      <id>accumulo-funtest</id>
+      <build>
+        <plugins>
           <plugin>
             <groupId>org.apache.maven.plugins</groupId>
             <artifactId>maven-failsafe-plugin</artifactId>
@@ -97,11 +207,11 @@
                 <java.awt.headless>true</java.awt.headless>
                 <!-- this property must be supplied-->
                 <slider.conf.dir>${slider.conf.dir}</slider.conf.dir>
-                <slider.bin.dir>../../slider-assembly/target/slider-${project.version}-all/slider-${project.version}</slider.bin.dir>
-                <test.app.pkg.dir>target</test.app.pkg.dir>
+                <slider.bin.dir>${slider.bin.dir}</slider.bin.dir>
+                <test.app.pkg.dir>${test.app.pkg.dir}</test.app.pkg.dir>
                 <test.app.pkg.file>${app.package.name}.zip</test.app.pkg.file>
-                <test.app.resource>target/test-config/resources.json</test.app.resource>
-                <test.app.template>target/${app.package.name}/appConfig.json</test.app.template>
+                <test.app.pkg.name>ACCUMULO</test.app.pkg.name>
+                <test.app.resources.dir>${test.app.resources.dir}</test.app.resources.dir>
               </systemPropertyVariables>
             </configuration>
           </plugin>
@@ -116,7 +226,15 @@
       <resource>
         <directory>src/test/resources</directory>
         <filtering>true</filtering>
-        <targetPath>${project.build.directory}/test-config</targetPath>
+        <targetPath>${test.app.resources.dir}</targetPath>
+      </resource>
+      <resource>
+        <directory>.</directory>
+        <filtering>true</filtering>
+        <targetPath>${test.app.resources.dir}</targetPath>
+        <includes>
+          <include>appConfig-default.json</include>
+        </includes>
       </resource>
     </resources>
 
@@ -142,19 +260,22 @@
   <dependencies>
     <dependency>
       <groupId>org.apache.accumulo</groupId>
-      <artifactId>accumulo</artifactId>
+      <artifactId>accumulo-server-base</artifactId>
       <version>${accumulo.version}</version>
-      <classifier>bin</classifier>
-      <type>tar.gz</type>
-    </dependency>
-    <dependency>
-      <groupId>junit</groupId>
-      <artifactId>junit</artifactId>
-      <scope>test</scope>
     </dependency>
     <dependency>
       <groupId>org.apache.accumulo</groupId>
       <artifactId>accumulo-core</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-client</artifactId>
+      <version>${hadoop.version}</version>
+    </dependency>
+
+    <dependency>
+      <groupId>junit</groupId>
+      <artifactId>junit</artifactId>
       <scope>test</scope>
     </dependency>
     <dependency>
diff --git a/app-packages/accumulo/resources.json b/app-packages/accumulo/resources-default.json
similarity index 88%
rename from app-packages/accumulo/resources.json
rename to app-packages/accumulo/resources-default.json
index f876901..f0923f2 100644
--- a/app-packages/accumulo/resources.json
+++ b/app-packages/accumulo/resources-default.json
@@ -3,6 +3,8 @@
   "metadata": {
   },
   "global": {
+    "yarn.log.include.patterns": "",
+    "yarn.log.exclude.patterns": ""
   },
   "components": {
     "ACCUMULO_MASTER": {
@@ -15,7 +17,7 @@
     "ACCUMULO_TSERVER": {
       "yarn.role.priority": "2",
       "yarn.component.instances": "1",
-      "yarn.memory": "256"
+      "yarn.memory": "512"
     },
     "ACCUMULO_MONITOR": {
       "yarn.role.priority": "3",
diff --git a/app-packages/accumulo/src/assembly/accumulo.xml b/app-packages/accumulo/src/assembly/accumulo.xml
index a8f9578..7be1942 100644
--- a/app-packages/accumulo/src/assembly/accumulo.xml
+++ b/app-packages/accumulo/src/assembly/accumulo.xml
@@ -24,13 +24,18 @@
   <id>accumulo_v${accumulo.version}</id>
   <formats>
     <format>zip</format>
-    <format>dir</format>
   </formats>
   <includeBaseDirectory>false</includeBaseDirectory>
 
   <files>
     <file>
-      <source>appConfig.json</source>
+      <source>appConfig-default.json</source>
+      <outputDirectory>/</outputDirectory>
+      <filtered>true</filtered>
+      <fileMode>0755</fileMode>
+    </file>
+    <file>
+      <source>appConfig-secured-default.json</source>
       <outputDirectory>/</outputDirectory>
       <filtered>true</filtered>
       <fileMode>0755</fileMode>
@@ -41,6 +46,19 @@
       <filtered>true</filtered>
       <fileMode>0755</fileMode>
     </file>
+    <file>
+      <source>${project.build.directory}/${work.dir}/${pkg.name}</source>
+      <outputDirectory>package/files</outputDirectory>
+      <filtered>false</filtered>
+      <fileMode>0755</fileMode>
+    </file>
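+    <!-- Ship the classes built by this module (CustomAuthenticator,
+         ProviderUtil) in the package as SliderAccumuloUtils.jar. -->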
+    <file>
+      <source>${project.build.directory}/slider-accumulo-app-package-${project.version}.jar</source>
+      <outputDirectory>package/files</outputDirectory>
+      <destName>SliderAccumuloUtils.jar</destName>
+      <filtered>false</filtered>
+      <fileMode>0755</fileMode>
+    </file>
   </files>
 
   <fileSets>
@@ -51,22 +69,12 @@
         <exclude>pom.xml</exclude>
         <exclude>src/**</exclude>
         <exclude>target/**</exclude>
-        <exclude>appConfig.json</exclude>
+        <exclude>appConfig-default.json</exclude>
+        <exclude>appConfig-secured-default.json</exclude>
         <exclude>metainfo.xml</exclude>
       </excludes>
       <fileMode>0755</fileMode>
       <directoryMode>0755</directoryMode>
     </fileSet>
-
-    <fileSet>
-      <directory>${project.build.directory}/${work.dir}</directory>
-      <outputDirectory>package/files</outputDirectory>
-      <includes>
-        <include>accumulo-${accumulo.version}-bin.tar.gz</include>
-      </includes>
-      <fileMode>0755</fileMode>
-      <directoryMode>0755</directoryMode>
-    </fileSet>
-
   </fileSets>
 </assembly>
diff --git a/app-packages/accumulo/src/main/java/org/apache/slider/accumulo/CustomAuthenticator.java b/app-packages/accumulo/src/main/java/org/apache/slider/accumulo/CustomAuthenticator.java
new file mode 100644
index 0000000..0f50838
--- /dev/null
+++ b/app-packages/accumulo/src/main/java/org/apache/slider/accumulo/CustomAuthenticator.java
@@ -0,0 +1,118 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.slider.accumulo;
+
+import org.apache.accumulo.core.client.AccumuloSecurityException;
+import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
+import org.apache.accumulo.core.conf.DefaultConfiguration;
+import org.apache.accumulo.core.conf.Property;
+import org.apache.accumulo.core.conf.SiteConfiguration;
+import org.apache.accumulo.core.security.thrift.TCredentials;
+import org.apache.accumulo.server.security.handler.Authenticator;
+import org.apache.accumulo.server.security.handler.Authorizor;
+import org.apache.accumulo.server.security.handler.PermissionHandler;
+import org.apache.accumulo.server.security.handler.ZKAuthenticator;
+
+import java.io.IOException;
+import java.nio.charset.Charset;
+import java.util.Set;
+
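+/**
+ * An Accumulo Authenticator that delegates to the standard ZKAuthenticator,
+ * but reads the initial root password from a Hadoop credential provider
+ * referenced in accumulo-site.xml instead of the supplied token bytes.
+ */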
+public final class CustomAuthenticator implements Authenticator {
+  public static final String ROOT_INITIAL_PASSWORD_PROPERTY =
+      "root.initial.password";
+  private final ZKAuthenticator zkAuthenticator;
+
+  public CustomAuthenticator() {
+    zkAuthenticator = new ZKAuthenticator();
+  }
+
+  @Override
+  public void initialize(String instanceId, boolean initialize) {
+    zkAuthenticator.initialize(instanceId, initialize);
+  }
+
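+  /**
+   * Ignores the supplied token; the root password is resolved from the
+   * credential provider named by
+   * Property.GENERAL_SECURITY_CREDENTIAL_PROVIDER_PATHS.
+   */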
+  @Override
+  public void initializeSecurity(TCredentials credentials, String principal,
+      byte[] token) throws AccumuloSecurityException {
+    String pass = null;
+    SiteConfiguration siteconf = SiteConfiguration.getInstance(
+        DefaultConfiguration.getInstance());
+    String jksFile = siteconf.get(
+        Property.GENERAL_SECURITY_CREDENTIAL_PROVIDER_PATHS);
+
+    if (jksFile == null) {
+      throw new RuntimeException(
+          Property.GENERAL_SECURITY_CREDENTIAL_PROVIDER_PATHS +
+              " not specified in accumulo-site.xml");
+    }
+    try {
+      pass = new String(ProviderUtil.getPassword(jksFile,
+          ROOT_INITIAL_PASSWORD_PROPERTY));
+    } catch (IOException ioe) {
+      throw new RuntimeException("Can't get key " +
+          ROOT_INITIAL_PASSWORD_PROPERTY + " from " + jksFile, ioe);
+    }
+    zkAuthenticator.initializeSecurity(credentials, principal,
+        pass.getBytes(Charset.forName("UTF-8")));
+  }
+
+  @Override
+  public Set<String> listUsers() {
+    return zkAuthenticator.listUsers();
+  }
+
+  @Override
+  public void createUser(String principal, AuthenticationToken token) throws AccumuloSecurityException {
+    zkAuthenticator.createUser(principal, token);
+  }
+
+  @Override
+  public void dropUser(String user) throws AccumuloSecurityException {
+    zkAuthenticator.dropUser(user);
+  }
+
+  @Override
+  public void changePassword(String principal, AuthenticationToken token) throws AccumuloSecurityException {
+    zkAuthenticator.changePassword(principal, token);
+  }
+
+  @Override
+  public boolean userExists(String user) {
+    return zkAuthenticator.userExists(user);
+  }
+
+  @Override
+  public boolean validSecurityHandlers(Authorizor auth, PermissionHandler pm) {
+    return true;
+  }
+
+  @Override
+  public boolean authenticateUser(String principal, AuthenticationToken token) throws AccumuloSecurityException {
+    return zkAuthenticator.authenticateUser(principal, token);
+  }
+
+  @Override
+  public Set<Class<? extends AuthenticationToken>> getSupportedTokenTypes() {
+    return zkAuthenticator.getSupportedTokenTypes();
+  }
+
+  @Override
+  public boolean validTokenClass(String tokenClass) {
+    return zkAuthenticator.validTokenClass(tokenClass);
+  }
+}
diff --git a/app-packages/accumulo/src/main/java/org/apache/slider/accumulo/ProviderUtil.java b/app-packages/accumulo/src/main/java/org/apache/slider/accumulo/ProviderUtil.java
new file mode 100644
index 0000000..ee5a781
--- /dev/null
+++ b/app-packages/accumulo/src/main/java/org/apache/slider/accumulo/ProviderUtil.java
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.slider.accumulo;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.security.alias.CredentialProviderFactory;
+
+import java.io.IOException;
+
+public class ProviderUtil {
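+  /**
+   * Resolves an alias against the given Hadoop credential provider path
+   * (e.g. a jceks file on HDFS) via Configuration.getPassword().
+   */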
+  public static char[] getPassword(String credentialProvider, String alias)
+      throws IOException {
+    Configuration conf = new Configuration();
+    conf.set(CredentialProviderFactory.CREDENTIAL_PROVIDER_PATH,
+        credentialProvider);
+    return conf.getPassword(alias);
+  }
+}
diff --git a/app-packages/accumulo/src/test/groovy/org/apache/slider/funtest/accumulo/AccumuloAgentCommandTestBase.groovy b/app-packages/accumulo/src/test/groovy/org/apache/slider/funtest/accumulo/AccumuloAgentCommandTestBase.groovy
index 50ecfcd..b619f7e 100644
--- a/app-packages/accumulo/src/test/groovy/org/apache/slider/funtest/accumulo/AccumuloAgentCommandTestBase.groovy
+++ b/app-packages/accumulo/src/test/groovy/org/apache/slider/funtest/accumulo/AccumuloAgentCommandTestBase.groovy
@@ -25,17 +25,19 @@
 @Slf4j
 abstract class AccumuloAgentCommandTestBase extends AgentCommandTestBase {
   protected static final int ACCUMULO_LAUNCH_WAIT_TIME
-  protected static final int ACCUMULO_GO_LIVE_TIME = 60000
+  protected static final int ACCUMULO_GO_LIVE_TIME
 
-  // parameters must match those found in the default appConfig.json
-  protected static final String INSTANCE_NAME = "instancename"
   protected static final String USER = "root"
-  protected static final String PASSWORD = "secret"
+  protected static final String PASSWORD = "secret_password"
+  protected static final String INSTANCE_SECRET = "other_secret_password"
 
   static {
     ACCUMULO_LAUNCH_WAIT_TIME = getTimeOptionMillis(SLIDER_CONFIG,
       KEY_ACCUMULO_LAUNCH_TIME,
       1000 * DEFAULT_ACCUMULO_LAUNCH_TIME_SECONDS)
+    ACCUMULO_GO_LIVE_TIME = getTimeOptionMillis(SLIDER_CONFIG,
+      KEY_ACCUMULO_GO_LIVE_TIME,
+      1000 * DEFAULT_ACCUMULO_LIVE_TIME_SECONDS)
   }
 
   abstract public String getClusterName();
diff --git a/app-packages/accumulo/src/test/groovy/org/apache/slider/funtest/accumulo/AccumuloBasicIT.groovy b/app-packages/accumulo/src/test/groovy/org/apache/slider/funtest/accumulo/AccumuloBasicIT.groovy
index bcb952b..bb9abba 100644
--- a/app-packages/accumulo/src/test/groovy/org/apache/slider/funtest/accumulo/AccumuloBasicIT.groovy
+++ b/app-packages/accumulo/src/test/groovy/org/apache/slider/funtest/accumulo/AccumuloBasicIT.groovy
@@ -17,18 +17,102 @@
 package org.apache.slider.funtest.accumulo
 
 import groovy.util.logging.Slf4j
+import org.apache.accumulo.core.conf.Property
+import org.apache.hadoop.conf.Configuration
+import org.apache.hadoop.fs.FileSystem
+import org.apache.hadoop.fs.Path
+import org.apache.hadoop.security.ProviderUtils
+import org.apache.hadoop.security.UserGroupInformation
+import org.apache.hadoop.security.alias.CredentialProvider
+import org.apache.hadoop.security.alias.CredentialProviderFactory
+import org.apache.hadoop.registry.client.types.ServiceRecord
+import org.apache.slider.accumulo.CustomAuthenticator
 import org.apache.slider.api.ClusterDescription
 import org.apache.slider.client.SliderClient
 import org.apache.slider.common.SliderKeys
+import org.apache.slider.core.conf.ConfTree
+import org.apache.slider.core.persist.ConfTreeSerDeser
 import org.apache.slider.core.registry.docstore.PublishedConfiguration
-import org.apache.slider.core.registry.info.ServiceInstanceData
 import org.apache.slider.core.registry.retrieve.RegistryRetriever
 import org.apache.slider.funtest.framework.SliderShell
-import org.apache.slider.server.services.curator.CuratorServiceInstance
+
+import org.junit.Before
 import org.junit.Test
 
+import static org.apache.hadoop.registry.client.binding.RegistryUtils.currentUser
+import static org.apache.hadoop.registry.client.binding.RegistryUtils.servicePath
+
 @Slf4j
 class AccumuloBasicIT extends AccumuloAgentCommandTestBase {
+  protected static final String PROVIDER_PROPERTY = "site.accumulo-site." +
+    Property.GENERAL_SECURITY_CREDENTIAL_PROVIDER_PATHS
+  protected static final String KEY_PASS = "keypass"
+  protected static final String TRUST_PASS = "trustpass"
+  protected ConfTree tree
+
+  protected String getAppResource() {
+    return sysprop("test.app.resources.dir") + "/resources.json"
+  }
+
+  protected String getAppTemplate() {
+    String appTemplateFile = templateName()
+    Configuration conf = new Configuration()
+    FileSystem fs = FileSystem.getLocal(conf)
+    InputStream stream = new FileInputStream(sysprop("test.app.resources.dir") + "/appConfig-default.json")
+    assert stream != null, "Couldn't read appConfig-default.json from test resources"
+    ConfTreeSerDeser c = new ConfTreeSerDeser()
+    ConfTree t = c.fromStream(stream)
+    t = modifyTemplate(t)
+    c.save(fs, new Path(appTemplateFile), t, true)
+    return appTemplateFile
+  }
+
+  protected String templateName() {
+    return sysprop("test.app.resources.dir") + "/appConfig.json"
+  }
+
+  protected ConfTree modifyTemplate(ConfTree original) {
+    return original
+  }
+
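+  /**
+   * Populates the jceks credential store referenced by the app template with
+   * the root password, instance secret, trace token password, and SSL
+   * keystore/truststore passwords that the tests expect.
+   */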
+  @Before
+  public void createKeyStore() {
+    ConfTreeSerDeser c = new ConfTreeSerDeser()
+    tree = c.fromFile(new File(APP_TEMPLATE))
+    assume tree.credentials.size() > 0, "No credentials requested, " +
+      "skipping creation of credentials"
+    SliderClient.replaceTokens(tree, UserGroupInformation.getCurrentUser()
+      .getShortUserName(), getClusterName())
+    String jks = tree.global.get(PROVIDER_PROPERTY)
+    def keys = tree.credentials.get(jks)
+    assert keys != null, "jks specified in $PROVIDER_PROPERTY wasn't requested " +
+      "in credentials"
+    Path jksPath = ProviderUtils.unnestUri(new URI(jks))
+    if (clusterFS.exists(jksPath)) {
+      clusterFS.delete(jksPath, false)
+    }
+    Configuration conf = loadSliderConf()
+    conf.set(CredentialProviderFactory.CREDENTIAL_PROVIDER_PATH, jks)
+    CredentialProvider provider =
+      CredentialProviderFactory.getProviders(conf).get(0)
+    provider.createCredentialEntry(
+      CustomAuthenticator.ROOT_INITIAL_PASSWORD_PROPERTY, PASSWORD.toCharArray())
+    provider.createCredentialEntry(Property.INSTANCE_SECRET.toString(),
+      INSTANCE_SECRET.toCharArray())
+    provider.createCredentialEntry(Property.TRACE_TOKEN_PROPERTY_PREFIX
+      .toString() + "password", PASSWORD.toCharArray())
+    provider.createCredentialEntry(Property.RPC_SSL_KEYSTORE_PASSWORD
+      .toString(), KEY_PASS.toCharArray())
+    provider.createCredentialEntry(Property.RPC_SSL_TRUSTSTORE_PASSWORD
+      .toString(), TRUST_PASS.toCharArray())
+    provider.createCredentialEntry(Property.MONITOR_SSL_KEYSTOREPASS
+      .toString(), KEY_PASS.toCharArray())
+    provider.createCredentialEntry(Property.MONITOR_SSL_TRUSTSTOREPASS
+      .toString(), TRUST_PASS.toCharArray())
+    provider.flush()
+    assert clusterFS.exists(jksPath), "jks $jks not created"
+    log.info("Created credential provider $jks for test")
+  }
 
   @Override
   public String getClusterName() {
@@ -46,7 +130,6 @@
     SliderShell shell = slider(EXIT_SUCCESS,
       [
         ACTION_CREATE, getClusterName(),
-        ARG_IMAGE, agentTarballPath.toString(),
         ARG_TEMPLATE, APP_TEMPLATE,
         ARG_RESOURCES, APP_RESOURCE
       ])
@@ -85,25 +168,39 @@
   }
 
   public static String getMonitorUrl(SliderClient sliderClient, String clusterName) {
-    CuratorServiceInstance<ServiceInstanceData> instance =
-      sliderClient.getRegistry().queryForInstance(SliderKeys.APP_TYPE, clusterName)
-    ServiceInstanceData serviceInstanceData = instance.payload
-    RegistryRetriever retriever = new RegistryRetriever(serviceInstanceData)
-    PublishedConfiguration configuration = retriever.retrieveConfiguration(
-      retriever.getConfigurations(true), "quicklinks", true)
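+    // The quicklinks configuration may not be published to the registry
+    // immediately after launch, so retry a few times before giving up.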
+    int tries = 5
+    Exception caught;
+    while (true) {
+      try {
+        String path = servicePath(currentUser(),
+            SliderKeys.APP_TYPE,
+            clusterName);
+        ServiceRecord instance = sliderClient.resolve(path)
+        RegistryRetriever retriever = new RegistryRetriever(instance)
+        PublishedConfiguration configuration = retriever.retrieveConfiguration(
+          retriever.getConfigurations(true), "quicklinks", true)
 
-    // must match name set in metainfo.xml
-    String monitorUrl = configuration.entries.get("org.apache.slider.monitor")
-
-    assertNotNull monitorUrl
-    return monitorUrl
+        // must match name set in metainfo.xml
+        String monitorUrl = configuration.entries.get("org.apache.slider.monitor")
+        assertNotNull monitorUrl
+        return monitorUrl
+      } catch (Exception e) {
+        caught = e;
+        log.info("Got exception trying to read quicklinks", e)
+        if (tries-- == 0) {
+          break
+        }
+        sleep(20000)
+      }
+    }
+    throw caught;
   }
 
   public static void checkMonitorPage(String monitorUrl) {
     String monitor = fetchWebPageWithoutError(monitorUrl);
-    assume monitor != null, "Monitor page null"
-    assume monitor.length() > 100, "Monitor page too short"
-    assume monitor.contains("Accumulo Overview"), "Monitor page didn't contain expected text"
+    assert monitor != null, "Monitor page null"
+    assert monitor.length() > 100, "Monitor page too short"
+    assert monitor.contains("Accumulo Overview"), "Monitor page didn't contain expected text"
   }
 
   /**
diff --git a/app-packages/accumulo/src/test/groovy/org/apache/slider/funtest/accumulo/AccumuloMonitorSSLIT.groovy b/app-packages/accumulo/src/test/groovy/org/apache/slider/funtest/accumulo/AccumuloMonitorSSLIT.groovy
index 6f68e13..12f89e0 100644
--- a/app-packages/accumulo/src/test/groovy/org/apache/slider/funtest/accumulo/AccumuloMonitorSSLIT.groovy
+++ b/app-packages/accumulo/src/test/groovy/org/apache/slider/funtest/accumulo/AccumuloMonitorSSLIT.groovy
@@ -20,19 +20,21 @@
 import groovy.util.logging.Slf4j
 import org.apache.slider.api.ClusterDescription
 import org.apache.slider.client.SliderClient
-
-import javax.net.ssl.KeyManager
-import javax.net.ssl.SSLContext
-import javax.net.ssl.TrustManager
-import javax.net.ssl.X509TrustManager
-import java.security.SecureRandom
-import java.security.cert.CertificateException
-import java.security.cert.X509Certificate
+import org.apache.slider.core.conf.ConfTree
 
 @Slf4j
-class AccumuloMonitorSSLIT extends AccumuloBasicIT {
-  AccumuloMonitorSSLIT() {
-    APP_TEMPLATE = "target/test-config/appConfig_monitor_ssl.json"
+class AccumuloMonitorSSLIT extends AccumuloSSLTestBase {
+  protected String templateName() {
+    return sysprop("test.app.resources.dir") + "/appConfig_monitor_ssl.json"
+  }
+
+  protected ConfTree modifyTemplate(ConfTree confTree) {
+    confTree.global.put("site.global.monitor_protocol", "https")
+    String jks = confTree.global.get(PROVIDER_PROPERTY)
+    def keys = confTree.credentials.get(jks)
+    keys.add("monitor.ssl.keyStorePassword")
+    keys.add("monitor.ssl.trustStorePassword")
+    return confTree
   }
 
   @Override
@@ -49,25 +51,6 @@
   public void clusterLoadOperations(ClusterDescription cd, SliderClient sliderClient) {
     String monitorUrl = getMonitorUrl(sliderClient, getClusterName())
     assert monitorUrl.startsWith("https://"), "Monitor URL didn't have expected protocol"
-
-    SSLContext ctx = SSLContext.getInstance("SSL");
-    TrustManager[] t = new TrustManager[1];
-    t[0] = new DefaultTrustManager();
-    ctx.init(new KeyManager[0], t, new SecureRandom());
-    SSLContext.setDefault(ctx);
     checkMonitorPage(monitorUrl)
   }
-
-  private static class DefaultTrustManager implements X509TrustManager {
-    @Override
-    public void checkClientTrusted(X509Certificate[] arg0, String arg1) throws CertificateException {}
-
-    @Override
-    public void checkServerTrusted(X509Certificate[] arg0, String arg1) throws CertificateException {}
-
-    @Override
-    public X509Certificate[] getAcceptedIssuers() {
-      return null;
-    }
-  }
-}
\ No newline at end of file
+}
diff --git a/app-packages/accumulo/src/test/groovy/org/apache/slider/funtest/accumulo/AccumuloReadWriteIT.groovy b/app-packages/accumulo/src/test/groovy/org/apache/slider/funtest/accumulo/AccumuloReadWriteIT.groovy
index cdbbcce..b4118d2 100644
--- a/app-packages/accumulo/src/test/groovy/org/apache/slider/funtest/accumulo/AccumuloReadWriteIT.groovy
+++ b/app-packages/accumulo/src/test/groovy/org/apache/slider/funtest/accumulo/AccumuloReadWriteIT.groovy
@@ -25,6 +25,7 @@
 import org.apache.accumulo.core.client.security.tokens.PasswordToken
 import org.apache.accumulo.test.TestIngest
 import org.apache.accumulo.test.VerifyIngest
+import org.apache.hadoop.registry.client.api.RegistryConstants
 import org.apache.slider.api.ClusterDescription
 import org.apache.slider.client.SliderClient
 import org.apache.slider.common.SliderXmlConfKeys
@@ -48,10 +49,12 @@
   @Override
   public void clusterLoadOperations(ClusterDescription cd, SliderClient sliderClient) {
     try {
-      String zookeepers = SLIDER_CONFIG.get(SliderXmlConfKeys.REGISTRY_ZK_QUORUM,
+      String zookeepers = SLIDER_CONFIG.get(
+          RegistryConstants.KEY_REGISTRY_ZK_QUORUM,
         FuntestProperties.DEFAULT_SLIDER_ZK_HOSTS)
 
-      ZooKeeperInstance instance = new ZooKeeperInstance(INSTANCE_NAME, zookeepers)
+      ZooKeeperInstance instance = new ZooKeeperInstance(
+        tree.global.get("site.client.instance.name"), zookeepers)
       Connector connector = instance.getConnector(USER, new PasswordToken(PASSWORD))
 
       ingest(connector, 200000, 1, 50, 0);
@@ -77,7 +80,7 @@
     TestIngest.ingest(connector, opts, new BatchWriterOpts());
   }
 
-  private static void verify(Connector connector, int rows, int cols, int width, int offset) throws Exception {
+  public static void verify(Connector connector, int rows, int cols, int width, int offset) throws Exception {
     ScannerOpts scannerOpts = new ScannerOpts();
     VerifyIngest.Opts opts = new VerifyIngest.Opts();
     opts.rows = rows;
@@ -88,7 +91,7 @@
     VerifyIngest.verifyIngest(connector, opts, scannerOpts);
   }
 
-  static void interleaveTest(final Connector connector) throws Exception {
+  public static void interleaveTest(final Connector connector) throws Exception {
     final int ROWS = 200000;
     final AtomicBoolean fail = new AtomicBoolean(false);
     final int CHUNKSIZE = ROWS / 10;
diff --git a/app-packages/accumulo/src/test/groovy/org/apache/slider/funtest/accumulo/AccumuloReadWriteSSLIT.groovy b/app-packages/accumulo/src/test/groovy/org/apache/slider/funtest/accumulo/AccumuloReadWriteSSLIT.groovy
new file mode 100644
index 0000000..0464cec
--- /dev/null
+++ b/app-packages/accumulo/src/test/groovy/org/apache/slider/funtest/accumulo/AccumuloReadWriteSSLIT.groovy
@@ -0,0 +1,75 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.slider.funtest.accumulo
+
+import org.apache.accumulo.core.client.ClientConfiguration
+import org.apache.accumulo.core.client.Connector
+import org.apache.accumulo.core.client.ZooKeeperInstance
+import org.apache.accumulo.core.client.security.tokens.PasswordToken
+import org.apache.hadoop.registry.client.api.RegistryConstants
+import org.apache.slider.api.ClusterDescription
+import org.apache.slider.client.SliderClient
+import org.apache.slider.common.SliderXmlConfKeys
+import org.apache.slider.funtest.framework.FuntestProperties
+
+import static org.apache.slider.funtest.accumulo.AccumuloReadWriteIT.ingest
+import static org.apache.slider.funtest.accumulo.AccumuloReadWriteIT.interleaveTest
+import static org.apache.slider.funtest.accumulo.AccumuloReadWriteIT.verify
+
+class AccumuloReadWriteSSLIT extends AccumuloSSLTestBase {
+  @Override
+  public String getClusterName() {
+    return "test_read_write_ssl";
+  }
+
+  @Override
+  public String getDescription() {
+    return "Test reading and writing to Accumulo cluster SSL $clusterName"
+  }
+
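+  // Build a client configuration that connects over SSL using the
+  // generated client keystore and truststore.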
+  public ZooKeeperInstance getInstance() {
+    String zookeepers = SLIDER_CONFIG.get(
+        RegistryConstants.KEY_REGISTRY_ZK_QUORUM,
+      FuntestProperties.DEFAULT_SLIDER_ZK_HOSTS)
+    ClientConfiguration conf = new ClientConfiguration()
+      .withInstance(tree.global.get("site.client.instance.name"))
+      .withZkHosts(zookeepers)
+      .withSsl(true)
+      .withKeystore(clientKeyStoreFile.toString(), KEY_PASS, null)
+      .withTruststore(trustStoreFile.toString(), TRUST_PASS, null)
+    return new ZooKeeperInstance(conf)
+  }
+
+  @Override
+  public void clusterLoadOperations(ClusterDescription cd, SliderClient sliderClient) {
+    try {
+      ZooKeeperInstance instance = getInstance()
+      Connector connector = instance.getConnector(USER, new PasswordToken(PASSWORD))
+
+      ingest(connector, 200000, 1, 50, 0);
+      verify(connector, 200000, 1, 50, 0);
+
+      ingest(connector, 2, 1, 500000, 0);
+      verify(connector, 2, 1, 500000, 0);
+
+      interleaveTest(connector);
+    } catch (Exception e) {
+      fail("Got exception connecting/reading/writing "+e)
+    }
+  }
+}
diff --git a/app-packages/accumulo/src/test/groovy/org/apache/slider/funtest/accumulo/AccumuloSSLTestBase.groovy b/app-packages/accumulo/src/test/groovy/org/apache/slider/funtest/accumulo/AccumuloSSLTestBase.groovy
new file mode 100644
index 0000000..d3165dc
--- /dev/null
+++ b/app-packages/accumulo/src/test/groovy/org/apache/slider/funtest/accumulo/AccumuloSSLTestBase.groovy
@@ -0,0 +1,154 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.slider.funtest.accumulo
+
+import groovy.json.JsonSlurper
+import org.apache.accumulo.core.conf.Property
+import org.apache.hadoop.conf.Configuration
+import org.apache.hadoop.fs.Path
+import org.apache.hadoop.yarn.conf.YarnConfiguration
+import org.apache.slider.core.conf.ConfTree
+import org.apache.slider.funtest.framework.AgentUploads
+import org.junit.Before
+import org.junit.BeforeClass
+
+import javax.net.ssl.KeyManager
+import javax.net.ssl.SSLContext
+import javax.net.ssl.TrustManager
+import javax.net.ssl.X509TrustManager
+import java.security.SecureRandom
+import java.security.cert.CertificateException
+import java.security.cert.X509Certificate
+
+class AccumuloSSLTestBase extends AccumuloBasicIT {
+  protected static final File trustStoreFile = new File(TEST_APP_PKG_DIR, "truststore.jks")
+  protected static final File clientKeyStoreFile = new File(TEST_APP_PKG_DIR, "keystore.jks")
+
+  protected String templateName() {
+    return sysprop("test.app.resources.dir") + "/appConfig_ssl.json"
+  }
+
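+  // Enable RPC SSL with client authentication, and request the RPC keystore
+  // and truststore passwords as credentials in addition to the base set.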
+  protected ConfTree modifyTemplate(ConfTree confTree) {
+    confTree.global.put("site.accumulo-site.instance.rpc.ssl.enabled", "true")
+    confTree.global.put("site.accumulo-site.instance.rpc.ssl.clientAuth", "true")
+    String jks = confTree.global.get(PROVIDER_PROPERTY)
+    def keys = confTree.credentials.get(jks)
+    keys.add("rpc.javax.net.ssl.keyStorePassword")
+    keys.add("rpc.javax.net.ssl.trustStorePassword")
+    return confTree
+  }
+
+  @Override
+  public String getClusterName() {
+    return "test_ssl";
+  }
+
+  @Override
+  public String getDescription() {
+    return "Test enable SSL $clusterName"
+  }
+
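+  // Install a trust-all SSLContext so the tests can fetch pages served
+  // with the self-signed test certificates over HTTPS.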
+  @BeforeClass
+  public static void initHttps() {
+    SSLContext ctx = SSLContext.getInstance("SSL");
+    TrustManager[] t = new TrustManager[1];
+    t[0] = new DefaultTrustManager();
+    ctx.init(new KeyManager[0], t, new SecureRandom());
+    SSLContext.setDefault(ctx);
+  }
+
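+  /**
+   * Generates a root key pair and truststore, a server keystore for each
+   * cluster node (uploaded to the cert directory on HDFS), and a local
+   * client keystore.
+   */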
+  @Before
+  public void createCerts() {
+    Path certDir = new Path(clusterFS.homeDirectory,
+      tree.global.get("site.global.ssl_cert_dir"))
+    if (clusterFS.exists(certDir)) {
+      clusterFS.delete(certDir, true)
+    }
+    clusterFS.mkdirs(certDir)
+
+    Configuration conf = loadSliderConf()
+    String provider = tree.global.get(PROVIDER_PROPERTY)
+    provider = provider.replace("hdfs/user",
+      conf.get("fs.defaultFS").replace("://", "@") + "/user")
+    System.out.println("provider after " + provider)
+    File rootKeyStoreFile = new File(TEST_APP_PKG_DIR, "root.jks")
+
+    if (!rootKeyStoreFile.exists() && !trustStoreFile.exists()) {
+      CertUtil.createRootKeyPair(rootKeyStoreFile.toString(),
+        Property.INSTANCE_SECRET.toString(), trustStoreFile.toString(),
+        Property.RPC_SSL_TRUSTSTORE_PASSWORD.toString(), provider);
+    }
+
+    AgentUploads agentUploads = new AgentUploads(SLIDER_CONFIG)
+    agentUploads.uploader.copyIfOutOfDate(trustStoreFile, new Path(certDir,
+      "truststore.jks"), false)
+
+    for (node in getNodeList(conf)) {
+      File keyStoreFile = new File(TEST_APP_PKG_DIR, node + ".jks")
+      if (!keyStoreFile.exists()) {
+        CertUtil.createServerKeyPair(keyStoreFile.toString(),
+          Property.RPC_SSL_KEYSTORE_PASSWORD.toString(),
+          rootKeyStoreFile.toString(), Property.INSTANCE_SECRET.toString(),
+          provider, node);
+      }
+      agentUploads.uploader.copyIfOutOfDate(keyStoreFile, new Path(certDir,
+        node + ".jks"), false)
+    }
+
+    if (!clientKeyStoreFile.exists()) {
+      CertUtil.createServerKeyPair(clientKeyStoreFile.toString(),
+        Property.RPC_SSL_KEYSTORE_PASSWORD.toString(),
+        rootKeyStoreFile.toString(), Property.INSTANCE_SECRET.toString(),
+        provider, InetAddress.getLocalHost().getHostName());
+    }
+  }
+
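+  // Query the RM REST API (/ws/v1/cluster/nodes) for the hostnames of all
+  // nodes, so a server keystore can be generated for every possible host.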
+  def getNodeList(Configuration conf) {
+    String address
+    if (YarnConfiguration.useHttps(conf)) {
+      address = "https://" + conf.get(YarnConfiguration.RM_WEBAPP_HTTPS_ADDRESS,
+        YarnConfiguration.DEFAULT_RM_WEBAPP_HTTPS_ADDRESS);
+    } else {
+      address = "http://" + conf.get(YarnConfiguration.RM_WEBAPP_ADDRESS,
+        YarnConfiguration.DEFAULT_RM_WEBAPP_ADDRESS);
+    }
+    address = address.replace("0.0.0.0", conf.get(YarnConfiguration.RM_ADDRESS)
+      .split(":")[0])
+    address = address + "/ws/v1/cluster/nodes"
+    def slurper = new JsonSlurper()
+    def result = slurper.parse(new URL(address))
+    def hosts = []
+    for (host in result.nodes.node) {
+      hosts.add(host.nodeHostName)
+    }
+    return hosts.unique()
+  }
+
+  private static class DefaultTrustManager implements X509TrustManager {
+    @Override
+    public void checkClientTrusted(X509Certificate[] arg0, String arg1) throws CertificateException {}
+
+    @Override
+    public void checkServerTrusted(X509Certificate[] arg0, String arg1) throws CertificateException {}
+
+    @Override
+    public X509Certificate[] getAcceptedIssuers() {
+      return null;
+    }
+  }
+}
diff --git a/app-packages/accumulo/src/test/java/org/apache/slider/funtest/accumulo/CertUtil.java b/app-packages/accumulo/src/test/java/org/apache/slider/funtest/accumulo/CertUtil.java
new file mode 100644
index 0000000..8bac58f
--- /dev/null
+++ b/app-packages/accumulo/src/test/java/org/apache/slider/funtest/accumulo/CertUtil.java
@@ -0,0 +1,275 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.slider.funtest.accumulo;
+
+
+import org.apache.slider.accumulo.ProviderUtil;
+import sun.security.x509.AlgorithmId;
+import sun.security.x509.CertificateAlgorithmId;
+import sun.security.x509.CertificateIssuerName;
+import sun.security.x509.CertificateSerialNumber;
+import sun.security.x509.CertificateSubjectName;
+import sun.security.x509.CertificateValidity;
+import sun.security.x509.CertificateVersion;
+import sun.security.x509.CertificateX509Key;
+import sun.security.x509.X500Name;
+import sun.security.x509.X509CertImpl;
+import sun.security.x509.X509CertInfo;
+
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.math.BigInteger;
+import java.security.InvalidKeyException;
+import java.security.KeyPair;
+import java.security.KeyPairGenerator;
+import java.security.KeyStore;
+import java.security.KeyStoreException;
+import java.security.NoSuchAlgorithmException;
+import java.security.NoSuchProviderException;
+import java.security.PrivateKey;
+import java.security.PublicKey;
+import java.security.SecureRandom;
+import java.security.SignatureException;
+import java.security.UnrecoverableKeyException;
+import java.security.cert.Certificate;
+import java.security.cert.CertificateException;
+import java.security.cert.X509Certificate;
+import java.util.Date;
+import java.util.Enumeration;
+
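+/**
+ * Test-only helper that creates self-signed and CA-signed JKS key pairs via
+ * the JDK-internal sun.security.x509 APIs; keystore passwords are resolved
+ * from a Hadoop credential provider.
+ */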
+public class CertUtil {
+
+  public static void createRootKeyPair(String keyStoreFile,
+      String keyStorePasswordProperty, String trustStoreFile,
+      String trustStorePasswordProperty, String credentialProvider)
+      throws Exception {
+    char[] keyStorePassword = ProviderUtil.getPassword(credentialProvider,
+        keyStorePasswordProperty);
+    char[] trustStorePassword = ProviderUtil.getPassword(credentialProvider,
+        trustStorePasswordProperty);
+
+    createSelfSignedCert(keyStoreFile, "root", keyStorePassword);
+    createPublicCert(trustStoreFile, "root", keyStoreFile, keyStorePassword,
+        trustStorePassword);
+  }
+
+  public static void createServerKeyPair(String keyStoreFile,
+      String keyStorePasswordProperty, String rootKeyStoreFile,
+      String rootKeyStorePasswordProperty, String credentialProvider,
+      String hostname)
+      throws Exception {
+    char[] keyStorePassword = ProviderUtil.getPassword(credentialProvider,
+        keyStorePasswordProperty);
+    char[] rootKeyStorePassword = ProviderUtil.getPassword(credentialProvider,
+        rootKeyStorePasswordProperty);
+
+    createSignedCert(keyStoreFile, "server", hostname, keyStorePassword,
+        rootKeyStoreFile, rootKeyStorePassword);
+  }
+
+
+  private static final String keystoreType = "JKS";
+  private static final int keysize = 2048;
+  private static final String encryptionAlgorithm = "RSA";
+  private static final String signingAlgorithm = "SHA256WITHRSA";
+  private static final String issuerDirString = ",O=Apache Slider";
+
+  public static void createPublicCert(String targetKeystoreFile, String keyName,
+      String rootKeystorePath, char[] rootKeystorePassword,
+      char[] truststorePassword) throws KeyStoreException,
+      IOException, CertificateException, NoSuchAlgorithmException {
+    KeyStore signerKeystore = KeyStore.getInstance(keystoreType);
+    char[] signerPasswordArray = rootKeystorePassword;
+    FileInputStream rootKeystoreInputStream = null;
+    try {
+      rootKeystoreInputStream = new FileInputStream(rootKeystorePath);
+      signerKeystore.load(rootKeystoreInputStream, signerPasswordArray);
+    } finally {
+      if (rootKeystoreInputStream != null) {
+        rootKeystoreInputStream.close();
+      }
+    }
+    Certificate rootCert = findCert(signerKeystore);
+
+    KeyStore keystore = KeyStore.getInstance(keystoreType);
+    keystore.load(null, null);
+    keystore.setCertificateEntry(keyName + "Cert", rootCert);
+    FileOutputStream targetKeystoreOutputStream = null;
+    try {
+      targetKeystoreOutputStream = new FileOutputStream(targetKeystoreFile);
+      keystore.store(targetKeystoreOutputStream, truststorePassword);
+    } finally {
+      if (targetKeystoreOutputStream != null) {
+        targetKeystoreOutputStream.close();
+      }
+    }
+  }
+
+  public static void createSignedCert(String targetKeystoreFile,
+      String keyName, String hostname, char[] keystorePassword,
+      String signerKeystorePath, char[] signerKeystorePassword)
+      throws Exception {
+    KeyStore signerKeystore = KeyStore.getInstance(keystoreType);
+    char[] signerPasswordArray = signerKeystorePassword;
+    FileInputStream signerKeystoreInputStream = null;
+    try {
+      signerKeystoreInputStream = new FileInputStream(signerKeystorePath);
+      signerKeystore.load(signerKeystoreInputStream, signerPasswordArray);
+    } finally {
+      if (signerKeystoreInputStream != null) {
+        signerKeystoreInputStream.close();
+      }
+    }
+    Certificate signerCert = findCert(signerKeystore);
+    PrivateKey signerKey = findPrivateKey(signerKeystore, signerPasswordArray);
+
+    KeyPair kp = generateKeyPair();
+    Certificate cert = generateCert(hostname, kp, false,
+        signerCert.getPublicKey(), signerKey);
+
+    char[] password = keystorePassword;
+    KeyStore keystore = KeyStore.getInstance(keystoreType);
+    keystore.load(null, null);
+    keystore.setCertificateEntry(keyName + "Cert", cert);
+    keystore.setKeyEntry(keyName + "Key", kp.getPrivate(), password, new Certificate[] {cert, signerCert});
+    FileOutputStream targetKeystoreOutputStream = null;
+    try {
+      targetKeystoreOutputStream = new FileOutputStream(targetKeystoreFile);
+      keystore.store(targetKeystoreOutputStream, password);
+    } finally {
+      if (targetKeystoreOutputStream != null) {
+        targetKeystoreOutputStream.close();
+      }
+    }
+  }
+
+  public static void createSelfSignedCert(String targetKeystoreFileName,
+      String keyName, char[] keystorePassword)
+      throws IOException, NoSuchAlgorithmException, CertificateException,
+      NoSuchProviderException, InvalidKeyException, SignatureException,
+      KeyStoreException {
+    File targetKeystoreFile = new File(targetKeystoreFileName);
+    if (targetKeystoreFile.exists()) {
+      throw new IOException("File exists: "+targetKeystoreFile);
+    }
+
+    KeyPair kp = generateKeyPair();
+
+    Certificate cert = generateCert(null, kp, true,
+        kp.getPublic(), kp.getPrivate());
+
+    char[] password = keystorePassword;
+    KeyStore keystore = KeyStore.getInstance(keystoreType);
+    keystore.load(null, null);
+    keystore.setCertificateEntry(keyName + "Cert", cert);
+    keystore.setKeyEntry(keyName + "Key", kp.getPrivate(), password, new Certificate[] {cert});
+    FileOutputStream targetKeystoreOutputStream = null;
+    try {
+      targetKeystoreOutputStream = new FileOutputStream(targetKeystoreFile);
+      keystore.store(targetKeystoreOutputStream, password);
+    } finally {
+      if (targetKeystoreOutputStream != null) {
+        targetKeystoreOutputStream.close();
+      }
+    }
+  }
+
+  private static KeyPair generateKeyPair() throws NoSuchAlgorithmException {
+    KeyPairGenerator gen = KeyPairGenerator.getInstance(encryptionAlgorithm);
+    gen.initialize(keysize);
+    return gen.generateKeyPair();
+  }
+
+  private static X509Certificate generateCert(
+      String hostname, KeyPair kp, boolean isCertAuthority,
+      PublicKey signerPublicKey, PrivateKey signerPrivateKey)
+      throws IOException, CertificateException, NoSuchProviderException,
+      NoSuchAlgorithmException, InvalidKeyException, SignatureException {
+    X500Name issuer = new X500Name("CN=root" + issuerDirString);
+    X500Name subject;
+    if (hostname == null) {
+      subject = issuer;
+    } else {
+      subject = new X500Name("CN=" + hostname + issuerDirString);
+    }
+
+    X509CertInfo info = new X509CertInfo();
+    Date from = new Date();
+    Date to = new Date(from.getTime() + 365 * 86400000L);
+    CertificateValidity interval = new CertificateValidity(from, to);
+    BigInteger sn = new BigInteger(64, new SecureRandom());
+
+    info.set(X509CertInfo.VALIDITY, interval);
+    info.set(X509CertInfo.SERIAL_NUMBER, new CertificateSerialNumber(sn));
+    info.set(X509CertInfo.SUBJECT, new CertificateSubjectName(subject));
+    info.set(X509CertInfo.ISSUER, new CertificateIssuerName(issuer));
+    info.set(X509CertInfo.KEY, new CertificateX509Key(kp.getPublic()));
+    info.set(X509CertInfo.VERSION, new CertificateVersion(CertificateVersion.V3));
+    AlgorithmId algo = new AlgorithmId(AlgorithmId.md5WithRSAEncryption_oid);
+    info.set(X509CertInfo.ALGORITHM_ID, new CertificateAlgorithmId(algo));
+
+    // Sign the cert to identify the algorithm that's used.
+    X509CertImpl cert = new X509CertImpl(info);
+    cert.sign(signerPrivateKey, signingAlgorithm);
+
+    // Update the algorithm, and resign.
+    algo = (AlgorithmId)cert.get(X509CertImpl.SIG_ALG);
+    info.set(CertificateAlgorithmId.NAME + "." + CertificateAlgorithmId.ALGORITHM, algo);
+    cert = new X509CertImpl(info);
+    cert.sign(signerPrivateKey, signingAlgorithm);
+    return cert;
+  }
+
+  private static Certificate findCert(KeyStore keyStore) throws KeyStoreException {
+    Enumeration<String> aliases = keyStore.aliases();
+    Certificate cert = null;
+    while (aliases.hasMoreElements()) {
+      String alias = aliases.nextElement();
+      if (keyStore.isCertificateEntry(alias)) {
+        // assume only one cert
+        cert = keyStore.getCertificate(alias);
+        break;
+      }
+    }
+    if (cert == null) {
+      throw new KeyStoreException("Could not find cert in keystore");
+    }
+    return cert;
+  }
+
+  private static PrivateKey findPrivateKey(KeyStore keyStore, char[] keystorePassword)
+      throws KeyStoreException, UnrecoverableKeyException, NoSuchAlgorithmException {
+    Enumeration<String> aliases = keyStore.aliases();
+    PrivateKey key = null;
+    while (aliases.hasMoreElements()) {
+      String alias = aliases.nextElement();
+      if (keyStore.isKeyEntry(alias)) {
+        // assume only one key
+        key = (PrivateKey) keyStore.getKey(alias, keystorePassword);
+        break;
+      }
+    }
+    if (key == null) {
+      throw new KeyStoreException("Could not find private key in keystore");
+    }
+    return key;
+  }
+
+}
diff --git a/app-packages/accumulo/src/test/resources/appConfig_monitor_ssl.json b/app-packages/accumulo/src/test/resources/appConfig_monitor_ssl.json
deleted file mode 100644
index 8b63d06..0000000
--- a/app-packages/accumulo/src/test/resources/appConfig_monitor_ssl.json
+++ /dev/null
@@ -1,62 +0,0 @@
-{
-  "schema": "http://example.org/specification/v2.0.0",
-  "metadata": {
-  },
-  "global": {
-    "agent.conf": "agent.ini",
-    "application.def": "${app.package.name}.zip",
-    "config_types": "accumulo-site",
-    "java_home": "/usr/jdk64/jdk1.7.0_45",
-    "package_list": "files/accumulo-${accumulo.version}-bin.tar.gz",
-    "site.global.app_user": "yarn",
-    "site.global.app_log_dir": "${AGENT_LOG_ROOT}/app/log",
-    "site.global.app_pid_dir": "${AGENT_WORK_ROOT}/app/run",
-    "site.global.app_root": "${AGENT_WORK_ROOT}/app/install/accumulo-${accumulo.version}",
-    "site.global.app_install_dir": "${AGENT_WORK_ROOT}/app/install",
-    "site.global.tserver_heapsize": "128m",
-    "site.global.master_heapsize": "128m",
-    "site.global.monitor_heapsize": "64m",
-    "site.global.gc_heapsize": "64m",
-    "site.global.other_heapsize": "128m",
-    "site.global.hadoop_prefix": "/usr/lib/hadoop",
-    "site.global.hadoop_conf_dir": "/etc/hadoop/conf",
-    "site.global.zookeeper_home": "/usr/lib/zookeeper",
-    "site.global.accumulo_instance_name": "instancename",
-    "site.global.accumulo_root_password": "secret",
-    "site.global.user_group": "hadoop",
-    "site.global.security_enabled": "false",
-    "site.global.monitor_protocol": "https",
-    "site.accumulo-site.instance.volumes": "${DEFAULT_DATA_DIR}/data",
-    "site.accumulo-site.instance.zookeeper.host": "${ZK_HOST}",
-    "site.accumulo-site.instance.secret": "DEFAULT",
-    "site.accumulo-site.tserver.memory.maps.max": "80M",
-    "site.accumulo-site.tserver.cache.data.size": "7M",
-    "site.accumulo-site.tserver.cache.index.size": "20M",
-    "site.accumulo-site.trace.token.property.password": "secret",
-    "site.accumulo-site.trace.user": "root",
-    "site.accumulo-site.tserver.sort.buffer.size": "50M",
-    "site.accumulo-site.tserver.walog.max.size": "100M",
-    "site.accumulo-site.master.port.client": "0",
-    "site.accumulo-site.trace.port.client": "0",
-    "site.accumulo-site.tserver.port.client": "0",
-    "site.accumulo-site.gc.port.client": "0",
-    "site.accumulo-site.monitor.port.client": "${ACCUMULO_MONITOR.ALLOCATED_PORT}",
-    "site.accumulo-site.monitor.port.log4j": "0",
-    "site.accumulo-site.general.classpaths": "$ACCUMULO_HOME/lib/accumulo-server.jar,\n$ACCUMULO_HOME/lib/accumulo-core.jar,\n$ACCUMULO_HOME/lib/accumulo-start.jar,\n$ACCUMULO_HOME/lib/accumulo-fate.jar,\n$ACCUMULO_HOME/lib/accumulo-proxy.jar,\n$ACCUMULO_HOME/lib/[^.].*.jar,\n$ZOOKEEPER_HOME/zookeeper[^.].*.jar,\n$HADOOP_CONF_DIR,\n$HADOOP_PREFIX/[^.].*.jar,\n$HADOOP_PREFIX/lib/[^.].*.jar,\n$HADOOP_PREFIX/share/hadoop/common/.*.jar,\n$HADOOP_PREFIX/share/hadoop/common/lib/.*.jar,\n$HADOOP_PREFIX/share/hadoop/hdfs/.*.jar,\n$HADOOP_PREFIX/share/hadoop/mapreduce/.*.jar,\n$HADOOP_PREFIX/share/hadoop/yarn/.*.jar,\n/usr/lib/hadoop/.*.jar,\n/usr/lib/hadoop/lib/.*.jar,\n/usr/lib/hadoop-hdfs/.*.jar,\n/usr/lib/hadoop-mapreduce/.*.jar,\n/usr/lib/hadoop-yarn/.*.jar,"
-  },
-  "components": {
-    "ACCUMULO_MASTER": {
-    },
-    "slider-appmaster": {
-      "jvm.heapsize": "256M"
-    },
-    "ACCUMULO_TSERVER": {
-    },
-    "ACCUMULO_MONITOR": {
-    },
-    "ACCUMULO_GC": {
-    },
-    "ACCUMULO_TRACER": {
-    }
-  }
-}
diff --git a/app-packages/accumulo/src/test/resources/resources.json b/app-packages/accumulo/src/test/resources/resources.json
index 0d536aa..1c5dd97 100644
--- a/app-packages/accumulo/src/test/resources/resources.json
+++ b/app-packages/accumulo/src/test/resources/resources.json
@@ -3,12 +3,14 @@
   "metadata": {
   },
   "global": {
+    "yarn.log.include.patterns": "",
+    "yarn.log.exclude.patterns": ""
   },
   "components": {
     "ACCUMULO_MASTER": {
       "yarn.role.priority": "1",
       "yarn.component.instances": "1",
-      "yarn.memory": "256"
+      "yarn.memory": "64"
     },
     "slider-appmaster": {
     },
@@ -20,7 +22,7 @@
     "ACCUMULO_MONITOR": {
       "yarn.role.priority": "3",
       "yarn.component.instances": "1",
-      "yarn.memory": "128"
+      "yarn.memory": "64"
     },
     "ACCUMULO_GC": {
       "yarn.role.priority": "4",
diff --git a/app-packages/accumulo/resources.json b/app-packages/accumulo/src/test/resources/resources_with_client.json
similarity index 78%
copy from app-packages/accumulo/resources.json
copy to app-packages/accumulo/src/test/resources/resources_with_client.json
index f876901..297a232 100644
--- a/app-packages/accumulo/resources.json
+++ b/app-packages/accumulo/src/test/resources/resources_with_client.json
@@ -14,7 +14,7 @@
     },
     "ACCUMULO_TSERVER": {
       "yarn.role.priority": "2",
-      "yarn.component.instances": "1",
+      "yarn.component.instances": "2",
       "yarn.memory": "256"
     },
     "ACCUMULO_MONITOR": {
@@ -30,7 +30,12 @@
     "ACCUMULO_TRACER": {
       "yarn.role.priority": "5",
       "yarn.component.instances": "1",
-      "yarn.memory": "256"
+      "yarn.memory": "128"
+    },
+    "ACCUMULO_CLIENT": {
+      "yarn.role.priority": "6",
+      "yarn.component.instances": "0",
+      "yarn.memory": "128"
     }
   }
 }
diff --git a/app-packages/app-pkg-template/README.txt b/app-packages/app-pkg-template/README.txt
index 00dfdbc..266f34f 100644
--- a/app-packages/app-pkg-template/README.txt
+++ b/app-packages/app-pkg-template/README.txt
@@ -28,7 +28,6 @@
 Verify the content using  
   zip -Tv myapp-1.0.0.zip
 
-While appConfig.json and resources.json are not required for the package they work
-well as the default configuration for Slider apps. So its advisable that when you
-create an application package for Slider, include sample/default resources.json and
-appConfig.json for a one-node Yarn cluster.
+appConfig-default.json and resources-default.json are not required to be packaged.
+These files are included as reference configuration for Slider apps and are suitable
+for a one-node cluster.
diff --git a/app-packages/app-pkg-template/appConfig.json b/app-packages/app-pkg-template/appConfig-default.json
similarity index 73%
rename from app-packages/app-pkg-template/appConfig.json
rename to app-packages/app-pkg-template/appConfig-default.json
index a6f61f9..cc65503 100644
--- a/app-packages/app-pkg-template/appConfig.json
+++ b/app-packages/app-pkg-template/appConfig-default.json
@@ -3,10 +3,9 @@
   "metadata": {
   },
   "global": {
-    "application.def": "package/myapp-1.0.0.zip",
-    "java_home": "/usr/jdk64/jdk1.7.0_45",
+    "application.def": "myapp-1.0.0.zip",
+    "java_home": "/usr/jdk64/jdk1.7.0_67",
 
-    "site.global.app_user": "yarn",
     "site.global.app_root": "${AGENT_WORK_ROOT}/app/install/myapp-1.0.0",
 
     "site.global.listen_port": "${MYAPP_COMPONENT.ALLOCATED_PORT}"
diff --git a/app-packages/app-pkg-template/metainfo.xml b/app-packages/app-pkg-template/metainfo.xml
index c6e1485..50c0fbd 100644
--- a/app-packages/app-pkg-template/metainfo.xml
+++ b/app-packages/app-pkg-template/metainfo.xml
@@ -28,12 +28,12 @@
       <component>
         <name>MYAPP_COMPONENT</name>
         <category>MASTER</category>
-        <exports>
-          <export>
+        <componentExports>
+          <componentExport>
             <name>host_port</name>
             <value>${THIS_HOST}:${site.global.listen_port}</value>
-          </export>
-        </exports>
+          </componentExport>
+        </componentExports>
         <commandScript>
           <script>scripts/myapp_component.py</script>
           <scriptType>PYTHON</scriptType>
diff --git a/app-packages/app-pkg-template/resources.json b/app-packages/app-pkg-template/resources-default.json
similarity index 100%
rename from app-packages/app-pkg-template/resources.json
rename to app-packages/app-pkg-template/resources-default.json
diff --git a/app-packages/command-logger/application-pkg/pom.xml b/app-packages/command-logger/application-pkg/pom.xml
index 71e4d82..2c1fd46 100644
--- a/app-packages/command-logger/application-pkg/pom.xml
+++ b/app-packages/command-logger/application-pkg/pom.xml
@@ -19,7 +19,7 @@
   <parent>
     <groupId>org.apache.slider</groupId>
     <artifactId>slider</artifactId>
-    <version>0.50.2-incubating</version>
+    <version>0.60.0-incubating</version>
     <relativePath>../../../pom.xml</relativePath>
   </parent>
   <modelVersion>4.0.0</modelVersion>
@@ -54,20 +54,8 @@
         </executions>
       </plugin>
 
-      <plugin>
-        <groupId>org.apache.rat</groupId>
-        <artifactId>apache-rat-plugin</artifactId>
-        <version>${apache-rat-plugin.version}</version>
-        <executions>
-          <execution>
-            <id>check-licenses</id>
-            <goals>
-              <goal>check</goal>
-            </goals>
-          </execution>
-        </executions>
-      </plugin>
     </plugins>
+    
     <extensions>
       <extension>
         <groupId>org.apache.maven.wagon</groupId>
@@ -75,4 +63,28 @@
       </extension>
     </extensions>
   </build>
+
+  <profiles>
+    <profile>
+      <id>rat</id>
+      <build>
+        <plugins>
+
+          <plugin>
+            <groupId>org.apache.rat</groupId>
+            <artifactId>apache-rat-plugin</artifactId>
+            <version>${apache-rat-plugin.version}</version>
+            <executions>
+              <execution>
+                <id>check-licenses</id>
+                <goals>
+                  <goal>check</goal>
+                </goals>
+              </execution>
+            </executions>
+          </plugin>
+        </plugins>
+      </build>
+    </profile>
+  </profiles>
 </project>
diff --git a/app-packages/command-logger/slider-pkg/appConfig.json b/app-packages/command-logger/slider-pkg/appConfig.json
index 1d92c59..d4082a8 100644
--- a/app-packages/command-logger/slider-pkg/appConfig.json
+++ b/app-packages/command-logger/slider-pkg/appConfig.json
@@ -3,18 +3,14 @@
     "metadata": {
     },
     "global": {
-        "application.def": "apache-slider-command-logger.zip",
-        "config_types": "cl-site",
-        "java_home": "/usr/jdk64/jdk1.7.0_45",
-        "package_list": "files/command-logger.tar",
-        "site.global.app_user": "yarn",
+        "application.def": ".slider/package/CMD_LOGGER/apache-slider-command-logger.zip",
+        "java_home": "/usr/jdk64/jdk1.7.0_67",
         "site.global.application_id": "CommandLogger",
-        "site.global.app_log_dir": "${AGENT_LOG_ROOT}/app/log",
-        "site.global.app_pid_dir": "${AGENT_WORK_ROOT}/app/run",
         "site.global.app_root": "${AGENT_WORK_ROOT}/app/install/command-logger",
-        "site.global.app_install_dir": "${AGENT_WORK_ROOT}/app/install",
-        "site.cl-site.logfile.location": "${AGENT_LOG_ROOT}/app/log/operations.log",
-        "site.cl-site.datetime.format": "%A, %d. %B %Y %I:%M%p"
+
+        "site.cl-site.logfile.location": "${AGENT_WORK_ROOT}/app/install/command-logger-app/operations.log",
+        "site.cl-site.datetime.format": "%A, %d. %B %Y %I:%M%p",
+        "site.cl-site.pattern.for.test.to.verify": "verify this pattern"
     },
     "components": {
         "COMMAND_LOGGER": {
diff --git a/app-packages/command-logger/slider-pkg/metainfo.xml b/app-packages/command-logger/slider-pkg/metainfo.xml
index e17413d..5de2c37 100644
--- a/app-packages/command-logger/slider-pkg/metainfo.xml
+++ b/app-packages/command-logger/slider-pkg/metainfo.xml
@@ -24,10 +24,12 @@
       log file. When stopped it renames the file.
     </comment>
     <version>0.1.0</version>
+    <exportedConfigs>cl-site</exportedConfigs>
     <components>
       <component>
         <name>COMMAND_LOGGER</name>
         <category>MASTER</category>
+        <publishConfig>true</publishConfig>
         <commandScript>
           <script>scripts/cl.py</script>
           <scriptType>PYTHON</scriptType>
@@ -42,11 +44,19 @@
         <packages>
           <package>
             <type>tarball</type>
-            <name>files/command_log.tar</name>
+            <name>files/command-logger.tar</name>
           </package>
         </packages>
       </osSpecific>
     </osSpecifics>
 
+    <configFiles>
+      <configFile>
+        <type>xml</type>
+        <fileName>cl-site.xml</fileName>
+        <dictionaryName>cl-site</dictionaryName>
+      </configFile>
+    </configFiles>
+
   </application>
 </metainfo>
diff --git a/app-packages/command-logger/slider-pkg/package/scripts/cl.py b/app-packages/command-logger/slider-pkg/package/scripts/cl.py
index 6b18faa..b15bbfd 100644
--- a/app-packages/command-logger/slider-pkg/package/scripts/cl.py
+++ b/app-packages/command-logger/slider-pkg/package/scripts/cl.py
@@ -81,7 +81,6 @@
 
     file_location = params.file_location
     TemplateConfig( file_location,
-                    owner = params.app_user,
                     template_tag = None
     )
 
diff --git a/app-packages/command-logger/slider-pkg/package/scripts/params.py b/app-packages/command-logger/slider-pkg/package/scripts/params.py
index 3d388ae..b135539 100644
--- a/app-packages/command-logger/slider-pkg/package/scripts/params.py
+++ b/app-packages/command-logger/slider-pkg/package/scripts/params.py
@@ -25,7 +25,6 @@
 
 container_id = config['hostLevelParams']['container_id']
 application_id = config['configurations']['global']['application_id']
-app_user = config['configurations']['global']['app_user']
 
 datetime_format = config['configurations']['cl-site']['datetime.format']
 file_location = config['configurations']['cl-site']['logfile.location']
diff --git a/app-packages/command-logger/slider-pkg/pom.xml b/app-packages/command-logger/slider-pkg/pom.xml
index bd46cbb..f7514dc 100644
--- a/app-packages/command-logger/slider-pkg/pom.xml
+++ b/app-packages/command-logger/slider-pkg/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.slider</groupId>
     <artifactId>slider</artifactId>
-    <version>0.50.2-incubating</version>
+    <version>0.60.0-incubating</version>
     <relativePath>../../../pom.xml</relativePath>
   </parent>
   <modelVersion>4.0.0</modelVersion>
@@ -93,20 +93,6 @@
         </executions>
       </plugin>
 
-      <plugin>
-        <groupId>org.apache.rat</groupId>
-        <artifactId>apache-rat-plugin</artifactId>
-        <version>${apache-rat-plugin.version}</version>
-        <executions>
-          <execution>
-            <id>check-licenses</id>
-            <goals>
-              <goal>check</goal>
-            </goals>
-          </execution>
-        </executions>
-      </plugin>
-
     </plugins>
   </build>
 
@@ -118,5 +104,34 @@
       <type>tar</type>
     </dependency>
   </dependencies>
+  
+  <profiles>
+    <profile>
+      <id>apache-release</id>
+      <build>
+        <plugins>
 
+          <plugin>
+            <groupId>org.apache.rat</groupId>
+            <artifactId>apache-rat-plugin</artifactId>
+            <version>${apache-rat-plugin.version}</version>
+            <executions>
+              <execution>
+                <id>check-licenses</id>
+                <goals>
+                  <goal>check</goal>
+                </goals>
+              </execution>
+            </executions>
+            <configuration>
+              <excludes>
+                <exclude>**/*.json</exclude>
+              </excludes>
+            </configuration>
+          </plugin>
+
+        </plugins>
+      </build>
+    </profile>
+  </profiles>
 </project>
diff --git a/app-packages/hbase-win/README.txt b/app-packages/hbase-win/README.txt
new file mode 100644
index 0000000..6389fb2
--- /dev/null
+++ b/app-packages/hbase-win/README.txt
@@ -0,0 +1,38 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+Create Slider App Package for HBase on Windows
+
+appConfig-default.json and resources-default.json do not need to be packaged.
+They are included as reference configurations for Slider apps and are suitable
+for a one-node cluster.
+
+
+To create the app package you will need the HBase distribution zip and must
+invoke mvn with the appropriate parameters.
+
+Command:
+mvn clean package -Phbase-app-package-win -Dpkg.version=<version>
+   -Dpkg.name=<file name of app zip file> -Dpkg.src=<folder location where the pkg is available>
+
+Example:
+mvn clean package -Phbase-app-package-win -Dpkg.version=0.98.5-hadoop2
+  -Dpkg.name=hbase-0.98.5-hadoop2-bin.zip
+  -Dpkg.src=/Users/user1/Downloads
+
+App package can be found in
+  app-packages/hbase/target/slider-hbase-app-win-package-${pkg.version}.zip
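Once built, the zip is installed into HDFS with the Slider client and then
referenced by application.def (note the .slider/package/HBASE path in
appConfig-default.json below). A minimal sketch, assuming the 0.60.0 client's
install-package and create subcommands and placeholder file names:

  slider install-package --name HBASE --package slider-hbase-app-win-package-0.98.5-hadoop2.zip
  slider create hbase1 --template appConfig.json --resources resources.json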
diff --git a/app-packages/hbase-win/appConfig-default.json b/app-packages/hbase-win/appConfig-default.json
new file mode 100644
index 0000000..04cb9a9
--- /dev/null
+++ b/app-packages/hbase-win/appConfig-default.json
@@ -0,0 +1,38 @@
+{
+    "schema": "http://example.org/specification/v2.0.0",
+    "metadata": {
+    },
+    "global": {
+        "application.def": ".slider/package/HBASE/slider-hbase-app-win-package-${pkg.version}.zip",
+        "create.default.zookeeper.node": "true",
+        "java_home": "C:\\java",
+
+        "site.global.app_user": "hadoop",
+        "site.global.app_root": "${AGENT_WORK_ROOT}/app/install/hbase-${pkg.version}",
+        "site.global.hbase_instance_name": "instancename",
+        "site.global.user_group": "hadoop",
+        "site.global.hbase_additional_cp": "c:\\java\\lib\\tools.jar;",
+        "site.global.java_library_path": "c:\\hdp\\hadoop\\bin",
+        "site.global.hbase_rest_port": "17000",
+        "site.global.hbase_thrift_port": "9090",
+        "site.global.hbase_thrift2_port": "9091",
+
+        "site.hbase-env.hbase_master_heapsize": "1024m",
+        "site.hbase-env.hbase_regionserver_heapsize": "1024m",
+        "site.hbase-site.hbase.rootdir": "${DEFAULT_DATA_DIR}",
+        "site.hbase-site.hbase.superuser": "hadoop",
+        "site.hbase-site.hbase.tmp.dir": "${AGENT_WORK_ROOT}/work/app/tmp",
+        "site.hbase-site.hbase.local.dir": "${hbase.tmp.dir}/local",
+        "site.hbase-site.hbase.zookeeper.quorum": "${ZK_HOST}",
+        "site.hbase-site.zookeeper.znode.parent": "${DEFAULT_ZK_PATH}",
+        "site.hbase-site.hbase.regionserver.info.port": "0",
+        "site.hbase-site.hbase.master.info.port": "${HBASE_MASTER.ALLOCATED_PORT}",
+        "site.hbase-site.hbase.regionserver.port": "0",
+        "site.hbase-site.hbase.master.port": "0"
+    },
+    "components": {
+        "slider-appmaster": {
+            "jvm.heapsize": "256M"
+        }
+    }
+}
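The ${...} tokens above are not literal values: Slider substitutes them when
the configuration is materialized (AGENT_WORK_ROOT, ZK_HOST, allocated ports,
and so on). A toy illustration of that expansion, not Slider's actual
implementation:

  # Illustrative only: expand ${NAME} tokens from a context dict, leaving
  # unknown (late-bound) tokens such as ALLOCATED_PORT values untouched.
  import re

  def expand(value, context):
      return re.sub(r'\$\{([^}]+)\}',
                    lambda m: str(context.get(m.group(1), m.group(0))),
                    value)

  print(expand("${AGENT_WORK_ROOT}/work/app/tmp",
               {"AGENT_WORK_ROOT": "/grid/yarn/local"}))
  # -> /grid/yarn/local/work/app/tmp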
diff --git a/app-packages/hbase-win/configuration/hbase-env.xml b/app-packages/hbase-win/configuration/hbase-env.xml
new file mode 100644
index 0000000..fa5686f
--- /dev/null
+++ b/app-packages/hbase-win/configuration/hbase-env.xml
@@ -0,0 +1,54 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+
+<configuration>
+  <property>
+    <name>hbase_regionserver_heapsize</name>
+    <value>1024</value>
+    <description>HBase RegionServer Heap Size.</description>
+  </property>
+  <property>
+    <name>hbase_regionserver_xmn_max</name>
+    <value>512</value>
+    <description>HBase RegionServer maximum value for minimum heap size.</description>
+  </property>
+  <property>
+    <name>hbase_regionserver_xmn_ratio</name>
+    <value>0.2</value>
+    <description>HBase RegionServer minimum heap size is calculated as a percentage of max heap size.</description>
+  </property>
+  <property>
+    <name>hbase_master_heapsize</name>
+    <value>1024</value>
+    <description>HBase Master Heap Size</description>
+  </property>
+
+  <!-- hbase-env.sh -->
+  <property>
+    <name>content</name>
+    <description>This is the Jinja template for the start command</description>
+    <value>
+     -Xmx{{heap_size}} "-XX:+UseConcMarkSweepGC" "-XX:CMSInitiatingOccupancyFraction=70" "-Djava.net.preferIPv4Stack=true" "-XX:+ForceTimeHighResolution" "-verbose:gc" "-XX:+PrintGCDetails" "-XX:+PrintGCDateStamps"  -Xloggc:"{{log_dir}}\hbase-{{role_user}}.gc" -Dhbase.log.dir="{{log_dir}}" -Dhbase.log.file="hbase-{{role_user}}.log" -Dhbase.home.dir="{{hbase_root}}" -Dhbase.id.str="{{hbase_instance_name}}" -XX:OnOutOfMemoryError="taskkill /F /PID p" -Dhbase.root.logger="INFO,DRFA" -Djava.library.path="{{java_library_path}}" -Dhbase.security.logger="INFO,DRFAS" -classpath "{{conf_dir}};{{hbase_root}}\lib\*;{{hbase_additional_cp}}"
+    </value>
+  </property>
+
+</configuration>
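The content property above is a Jinja template: the {{...}} placeholders
(heap_size, log_dir, role_user, ...) are filled in per role when the start
command is assembled (see hbase_service.py later in this patch). A minimal
rendering sketch, assuming the jinja2 package, which is the engine wrapped by
resource_management's InlineTemplate:

  from jinja2 import Template

  # Render a fragment of the start-command template with sample values.
  tmpl = Template('-Xmx{{heap_size}} -Dhbase.log.dir="{{log_dir}}" '
                  '-Dhbase.log.file="hbase-{{role_user}}.log"')
  print(tmpl.render(heap_size='1024m', log_dir='c:\\hdp\\logs',
                    role_user='hadoop-master'))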
diff --git a/app-packages/hbase-win/configuration/hbase-log4j.xml b/app-packages/hbase-win/configuration/hbase-log4j.xml
new file mode 100644
index 0000000..d488c4e
--- /dev/null
+++ b/app-packages/hbase-win/configuration/hbase-log4j.xml
@@ -0,0 +1,143 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+
+<configuration>
+
+  <property>
+    <name>content</name>
+    <description>Custom log4j.properties</description>
+    <value>
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+# Define some default values that can be overridden by system properties
+hbase.root.logger=INFO,console
+hbase.security.logger=INFO,console
+hbase.log.dir=.
+hbase.log.file=hbase.log
+
+# Define the root logger to the system property "hbase.root.logger".
+log4j.rootLogger=${hbase.root.logger}
+
+# Logging Threshold
+log4j.threshold=ALL
+
+#
+# Daily Rolling File Appender
+#
+log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
+log4j.appender.DRFA.File=${hbase.log.dir}/${hbase.log.file}
+
+# Rollover at midnight
+log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
+
+# 30-day backup
+#log4j.appender.DRFA.MaxBackupIndex=30
+log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
+
+# Pattern format: Date LogLevel LoggerName LogMessage
+log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n
+
+# Rolling File Appender properties
+hbase.log.maxfilesize=256MB
+hbase.log.maxbackupindex=20
+
+# Rolling File Appender
+log4j.appender.RFA=org.apache.log4j.RollingFileAppender
+log4j.appender.RFA.File=${hbase.log.dir}/${hbase.log.file}
+
+log4j.appender.RFA.MaxFileSize=${hbase.log.maxfilesize}
+log4j.appender.RFA.MaxBackupIndex=${hbase.log.maxbackupindex}
+
+log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
+log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n
+
+#
+# Security audit appender
+#
+hbase.security.log.file=SecurityAuth.audit
+hbase.security.log.maxfilesize=256MB
+hbase.security.log.maxbackupindex=20
+log4j.appender.RFAS=org.apache.log4j.RollingFileAppender
+log4j.appender.RFAS.File=${hbase.log.dir}/${hbase.security.log.file}
+log4j.appender.RFAS.MaxFileSize=${hbase.security.log.maxfilesize}
+log4j.appender.RFAS.MaxBackupIndex=${hbase.security.log.maxbackupindex}
+log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout
+log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
+log4j.category.SecurityLogger=${hbase.security.logger}
+log4j.additivity.SecurityLogger=false
+#log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController=TRACE
+
+#
+# Null Appender
+#
+log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender
+
+#
+# console
+# Add "console" to rootlogger above if you want to use this
+#
+log4j.appender.console=org.apache.log4j.ConsoleAppender
+log4j.appender.console.target=System.err
+log4j.appender.console.layout=org.apache.log4j.PatternLayout
+log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n
+
+# Custom Logging levels
+
+log4j.logger.org.apache.zookeeper=INFO
+#log4j.logger.org.apache.hadoop.fs.FSNamesystem=DEBUG
+log4j.logger.org.apache.hadoop.hbase=DEBUG
+# Make these two classes INFO-level. Make them DEBUG to see more zk debug.
+log4j.logger.org.apache.hadoop.hbase.zookeeper.ZKUtil=INFO
+log4j.logger.org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher=INFO
+#log4j.logger.org.apache.hadoop.dfs=DEBUG
+# Set this class to log INFO only, otherwise it's OTT
+# Enable this to get detailed connection error/retry logging.
+# log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation=TRACE
+
+
+# Uncomment this line to enable tracing on _every_ RPC call (this can be a lot of output)
+#log4j.logger.org.apache.hadoop.ipc.HBaseServer.trace=DEBUG
+
+# Uncomment the below if you want to remove logging of client region caching'
+# and scan of .META. messages
+# log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation=INFO
+# log4j.logger.org.apache.hadoop.hbase.client.MetaScanner=INFO
+
+    </value>
+  </property>
+
+</configuration>
diff --git a/app-packages/hbase-win/configuration/hbase-policy.xml b/app-packages/hbase-win/configuration/hbase-policy.xml
new file mode 100644
index 0000000..e45f23c
--- /dev/null
+++ b/app-packages/hbase-win/configuration/hbase-policy.xml
@@ -0,0 +1,53 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+
+<configuration>
+  <property>
+    <name>security.client.protocol.acl</name>
+    <value>*</value>
+    <description>ACL for HRegionInterface protocol implementations (i.e.,
+    clients talking to HRegionServers).
+    The ACL is a comma-separated list of user and group names. The user and
+    group list is separated by a blank. For example, "alice,bob users,wheel".
+    A special value of "*" means all users are allowed.</description>
+  </property>
+
+  <property>
+    <name>security.admin.protocol.acl</name>
+    <value>*</value>
+    <description>ACL for HMasterInterface protocol implementation (i.e.,
+    clients talking to HMaster for admin operations).
+    The ACL is a comma-separated list of user and group names. The user and
+    group list is separated by a blank. For example, "alice,bob users,wheel".
+    A special value of "*" means all users are allowed.</description>
+  </property>
+
+  <property>
+    <name>security.masterregion.protocol.acl</name>
+    <value>*</value>
+    <description>ACL for HMasterRegionInterface protocol implementations
+    (for HRegionServers communicating with HMaster).
+    The ACL is a comma-separated list of user and group names. The user and
+    group list is separated by a blank. For example, "alice,bob users,wheel".
+    A special value of "*" means all users are allowed.</description>
+  </property>
+</configuration>
diff --git a/app-packages/hbase-win/configuration/hbase-site.xml b/app-packages/hbase-win/configuration/hbase-site.xml
new file mode 100644
index 0000000..a9711d3
--- /dev/null
+++ b/app-packages/hbase-win/configuration/hbase-site.xml
@@ -0,0 +1,370 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+<configuration>
+  <property>
+    <name>hbase.rootdir</name>
+    <value>hdfs://localhost:8020/apps/hbase/data</value>
+    <description>The directory shared by region servers and into
+    which HBase persists.  The URL should be 'fully-qualified'
+    to include the filesystem scheme.  For example, to specify the
+    HDFS directory '/hbase' where the HDFS instance's namenode is
+    running at namenode.example.org on port 9000, set this value to:
+    hdfs://namenode.example.org:9000/hbase.  By default HBase writes
+    into /tmp.  Change this configuration, or all data will be lost
+    on machine restart.
+    </description>
+  </property>
+  <property>
+    <name>hbase.cluster.distributed</name>
+    <value>true</value>
+    <description>The mode the cluster will be in. Possible values are
+      false for standalone mode and true for distributed mode.  If
+      false, startup will run all HBase and ZooKeeper daemons together
+      in the one JVM.
+    </description>
+  </property>
+  <property>
+    <name>hbase.master.port</name>
+    <value>60000</value>
+    <description>The port the HBase Master should bind to.</description>
+  </property>
+  <property>
+    <name>hbase.tmp.dir</name>
+    <value>/hadoop/hbase</value>
+    <description>Temporary directory on the local filesystem.
+    Change this setting to point to a location more permanent
+    than '/tmp' (The '/tmp' directory is often cleared on
+    machine restart).
+    </description>
+  </property>
+  <property>
+    <name>hbase.local.dir</name>
+    <value>${hbase.tmp.dir}/local</value>
+    <description>Directory on the local filesystem to be used as a local storage
+    </description>
+  </property>
+  <property>
+    <name>hbase.master.info.bindAddress</name>
+    <value>0.0.0.0</value>
+    <description>The bind address for the HBase Master web UI
+    </description>
+  </property>
+  <property>
+    <name>hbase.master.info.port</name>
+    <value>60010</value>
+    <description>The port for the HBase Master web UI.</description>
+  </property>
+  <property>
+    <name>hbase.regionserver.info.port</name>
+    <value>60030</value>
+    <description>The port for the HBase RegionServer web UI.</description>
+  </property>
+  <property>
+    <name>hbase.regionserver.global.memstore.upperLimit</name>
+    <value>0.4</value>
+    <description>Maximum size of all memstores in a region server before new
+      updates are blocked and flushes are forced. Defaults to 40% of heap
+    </description>
+  </property>
+  <property>
+    <name>hbase.regionserver.handler.count</name>
+    <value>60</value>
+    <description>Count of RPC Listener instances spun up on RegionServers.
+    Same property is used by the Master for count of master handlers.
+    Default is 10.
+    </description>
+  </property>
+  <property>
+    <name>hbase.hregion.majorcompaction</name>
+    <value>86400000</value>
+    <description>The time (in milliseconds) between 'major' compactions of all
+    HStoreFiles in a region.  Default: 1 day.
+    Set to 0 to disable automated major compactions.
+    </description>
+  </property>
+  
+  <property>
+    <name>hbase.regionserver.global.memstore.lowerLimit</name>
+    <value>0.38</value>
+    <description>When memstores are being forced to flush to make room in
+      memory, keep flushing until we hit this mark. Defaults to 35% of heap.
+      This value equal to hbase.regionserver.global.memstore.upperLimit causes
+      the minimum possible flushing to occur when updates are blocked due to
+      memstore limiting.
+    </description>
+  </property>
+  <property>
+    <name>hbase.hregion.memstore.block.multiplier</name>
+    <value>2</value>
+    <description>Block updates if the memstore has hbase.hregion.memstore.block.multiplier
+    times hbase.hregion.flush.size bytes.  Useful for preventing a
+    runaway memstore during spikes in update traffic.  Without an
+    upper bound, the memstore fills such that when it flushes, the
+    resultant flush files take a long time to compact or split, or,
+    worse, we OOME.
+    </description>
+  </property>
+  <property>
+    <name>hbase.hregion.memstore.flush.size</name>
+    <value>134217728</value>
+    <description>
+    Memstore will be flushed to disk if size of the memstore
+    exceeds this number of bytes.  Value is checked by a thread that runs
+    every hbase.server.thread.wakefrequency.
+    </description>
+  </property>
+  <property>
+    <name>hbase.hregion.memstore.mslab.enabled</name>
+    <value>true</value>
+    <description>
+      Enables the MemStore-Local Allocation Buffer,
+      a feature which works to prevent heap fragmentation under
+      heavy write loads. This can reduce the frequency of stop-the-world
+      GC pauses on large heaps.
+    </description>
+  </property>
+  <property>
+    <name>hbase.hregion.max.filesize</name>
+    <value>10737418240</value>
+    <description>
+    Maximum HStoreFile size. If any one of a column families' HStoreFiles has
+    grown to exceed this value, the hosting HRegion is split in two.
+    Default: 1G.
+    </description>
+  </property>
+  <property>
+    <name>hbase.client.scanner.caching</name>
+    <value>100</value>
+    <description>Number of rows that will be fetched when calling next
+    on a scanner if it is not served from (local, client) memory. Higher
+    caching values will enable faster scanners but will eat up more memory
+    and some calls of next may take longer and longer times when the cache is empty.
+    Do not set this value such that the time between invocations is greater
+    than the scanner timeout; i.e. hbase.regionserver.lease.period
+    </description>
+  </property>
+  <property>
+    <name>zookeeper.session.timeout</name>
+    <value>30000</value>
+    <description>ZooKeeper session timeout.
+      HBase passes this to the zk quorum as suggested maximum time for a
+      session (This setting becomes zookeeper's 'maxSessionTimeout').  See
+      http://hadoop.apache.org/zookeeper/docs/current/zookeeperProgrammers.html#ch_zkSessions
+      "The client sends a requested timeout, the server responds with the
+      timeout that it can give the client. " In milliseconds.
+    </description>
+  </property>
+  <property>
+    <name>hbase.client.keyvalue.maxsize</name>
+    <value>10485760</value>
+    <description>Specifies the combined maximum allowed size of a KeyValue
+    instance. This is to set an upper boundary for a single entry saved in a
+    storage file. Since entries cannot be split, this helps avoid a region
+    becoming unsplittable because a single entry is too large. It seems wise
+    to set this to a fraction of the maximum region size. Setting it to zero
+    or less disables the check.
+    </description>
+  </property>
+  <property>
+    <name>hbase.hstore.compactionThreshold</name>
+    <value>3</value>
+    <description>
+    If more than this number of HStoreFiles in any one HStore
+    (one HStoreFile is written per flush of memstore) then a compaction
+    is run to rewrite all HStoreFiles as one.  Larger numbers
+    put off compaction but when it runs, it takes longer to complete.
+    </description>
+  </property>
+  <property>
+    <name>hbase.hstore.flush.retries.number</name>
+    <value>120</value>
+    <description>
+    The number of times the region flush operation will be retried.
+    </description>
+  </property>
+  
+  <property>
+    <name>hbase.hstore.blockingStoreFiles</name>
+    <value>10</value>
+    <description>
+    If more than this number of StoreFiles in any one Store
+    (one StoreFile is written per flush of MemStore) then updates are
+    blocked for this HRegion until a compaction is completed, or
+    until hbase.hstore.blockingWaitTime has been exceeded.
+    </description>
+  </property>
+  <property>
+    <name>hfile.block.cache.size</name>
+    <value>0.40</value>
+    <description>
+        Percentage of maximum heap (-Xmx setting) to allocate to block cache
+        used by HFile/StoreFile. Default of 0.25 means allocate 25%.
+        Set to 0 to disable but it's not recommended.
+    </description>
+  </property>
+
+  <!-- The following properties configure authentication information for
+       HBase processes when using Kerberos security.  There are no default
+       values, included here for documentation purposes -->
+  <property>
+    <name>hbase.master.keytab.file</name>
+    <value>/etc/security/keytabs/hbase.service.keytab</value>
+    <description>Full path to the kerberos keytab file to use for logging in
+    the configured HMaster server principal.
+    </description>
+  </property>
+  <property>
+    <name>hbase.master.kerberos.principal</name>
+    <value>hbase/_HOST@EXAMPLE.COM</value>
+    <description>Ex. "hbase/_HOST@EXAMPLE.COM".  The kerberos principal name
+    that should be used to run the HMaster process.  The principal name should
+    be in the form: user/hostname@DOMAIN.  If "_HOST" is used as the hostname
+    portion, it will be replaced with the actual hostname of the running
+    instance.
+    </description>
+  </property>
+  <property>
+    <name>hbase.regionserver.keytab.file</name>
+    <value>/etc/security/keytabs/hbase.service.keytab</value>
+    <description>Full path to the kerberos keytab file to use for logging in
+    the configured HRegionServer server principal.
+    </description>
+  </property>
+  <property>
+    <name>hbase.regionserver.kerberos.principal</name>
+    <value>hbase/_HOST@EXAMPLE.COM</value>
+    <description>Ex. "hbase/_HOST@EXAMPLE.COM".  The kerberos principal name
+    that should be used to run the HRegionServer process.  The principal name
+    should be in the form: user/hostname@DOMAIN.  If "_HOST" is used as the
+    hostname portion, it will be replaced with the actual hostname of the
+    running instance.  An entry for this principal must exist in the file
+    specified in hbase.regionserver.keytab.file
+    </description>
+  </property>
+
+  <!-- Additional configuration specific to HBase security -->
+  <property>
+    <name>hbase.superuser</name>
+    <value>hbase</value>
+    <description>List of users or groups (comma-separated), who are allowed
+    full privileges, regardless of stored ACLs, across the cluster.
+    Only used when HBase security is enabled.
+    </description>
+  </property>
+
+  <property>
+    <name>hbase.security.authentication</name>
+    <value>simple</value>
+    <description>Controls whether secure authentication is enabled for HBase. Possible values are 'simple'
+      (no authentication) and 'kerberos'.
+    </description>
+  </property>
+
+  <property>
+    <name>hbase.security.authorization</name>
+    <value>false</value>
+    <description>Enables HBase authorization. Set the value of this property to false to disable HBase authorization.
+    </description>
+  </property>
+
+  <property>
+    <name>hbase.coprocessor.region.classes</name>
+    <value></value>
+    <description>A comma-separated list of Coprocessors that are loaded by
+    default on all tables. For any override coprocessor method, these classes
+    will be called in order. After implementing your own Coprocessor, just put
+    it in HBase's classpath and add the fully qualified class name here.
+    A coprocessor can also be loaded on demand by setting HTableDescriptor.
+    </description>
+  </property>
+
+  <property>
+    <name>hbase.coprocessor.master.classes</name>
+    <value></value>
+    <description>A comma-separated list of
+      org.apache.hadoop.hbase.coprocessor.MasterObserver coprocessors that are
+      loaded by default on the active HMaster process. For any implemented
+      coprocessor methods, the listed classes will be called in order. After
+      implementing your own MasterObserver, just put it in HBase's classpath
+      and add the fully qualified class name here.
+    </description>
+  </property>
+
+  <property>
+    <name>hbase.zookeeper.property.clientPort</name>
+    <value>2181</value>
+    <description>Property from ZooKeeper's config zoo.cfg.
+    The port at which the clients will connect.
+    </description>
+  </property>
+
+  <!--
+  The following three properties are used together to create the list of
+  host:peer_port:leader_port quorum servers for ZooKeeper.
+  -->
+  <property>
+    <name>hbase.zookeeper.quorum</name>
+    <value>localhost</value>
+    <description>Comma separated list of servers in the ZooKeeper Quorum.
+    For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
+    By default this is set to localhost for local and pseudo-distributed modes
+    of operation. For a fully-distributed setup, this should be set to a full
+    list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh
+    this is the list of servers which we will start/stop ZooKeeper on.
+    </description>
+  </property>
+  <!-- End of properties used to generate ZooKeeper host:port quorum list. -->
+
+  <property>
+    <name>hbase.zookeeper.useMulti</name>
+    <value>true</value>
+    <description>Instructs HBase to make use of ZooKeeper's multi-update functionality.
+    This allows certain ZooKeeper operations to complete more quickly and prevents some issues
+    with rare Replication failure scenarios (see the release note of HBASE-2611 for an example).
+    IMPORTANT: only set this to true if all ZooKeeper servers in the cluster are on version 3.4+
+    and will not be downgraded.  ZooKeeper versions before 3.4 do not support multi-update and will
+    not fail gracefully if multi-update is invoked (see ZOOKEEPER-1495).
+    </description>
+  </property>
+  <property>
+    <name>zookeeper.znode.parent</name>
+    <value>/hbase-unsecure</value>
+    <description>Root ZNode for HBase in ZooKeeper. All of HBase's ZooKeeper
+      files that are configured with a relative path will go under this node.
+      By default, all of HBase's ZooKeeper file path are configured with a
+      relative path, so they will all go under this directory unless changed.
+    </description>
+  </property>
+
+  <property>
+    <name>hbase.defaults.for.version.skip</name>
+    <value>true</value>
+    <description>Disables version verification.</description>
+  </property>
+
+  <property>
+    <name>dfs.domain.socket.path</name>
+    <value>/var/lib/hadoop-hdfs/dn_socket</value>
+    <description>Path to domain socket.</description>
+  </property>
+
+</configuration>
diff --git a/app-packages/hbase-win/jmx_metrics.json b/app-packages/hbase-win/jmx_metrics.json
new file mode 100644
index 0000000..ac0640e
--- /dev/null
+++ b/app-packages/hbase-win/jmx_metrics.json
@@ -0,0 +1,56 @@
+{
+    "Component": {
+        "HBASE_MASTER": {
+            "MetricAverageLoad": {
+                "metric": "Hadoop:service=HBase,name=Master,sub=Server.averageLoad",
+                "pointInTime": true,
+                "temporal": false
+            },
+            "DeadRegionServers": {
+                "metric": "Hadoop:service=HBase,name=Master,sub=Server.numDeadRegionServers",
+                "pointInTime": true,
+                "temporal": false
+            },
+            "ClusterId": {
+                "metric": "Hadoop:service=HBase,name=Master,sub=Server.tag.clusterId",
+                "pointInTime": true,
+                "temporal": false
+            },
+            "IsActiveMaster": {
+                "metric": "Hadoop:service=HBase,name=Master,sub=Server.tag.isActiveMaster",
+                "pointInTime": true,
+                "temporal": false
+            },
+            "MasterActiveTime": {
+                "metric": "Hadoop:service=HBase,name=Master,sub=Server.masterActiveTime",
+                "pointInTime": true,
+                "temporal": false
+            },
+            "MasterStartTime": {
+                "metric": "Hadoop:service=HBase,name=Master,sub=Server.masterStartTime",
+                "pointInTime": true,
+                "temporal": false
+            },
+            "RegionServers": {
+                "metric": "Hadoop:service=HBase,name=Master,sub=Server.numRegionServers",
+                "pointInTime": true,
+                "temporal": false
+            },
+            "ServerName": {
+                "metric": "Hadoop:service=HBase,name=Master,sub=Server.tag.serverName",
+                "pointInTime": true,
+                "temporal": false
+            },
+            "ZookeeperQuorum": {
+                "metric": "Hadoop:service=HBase,name=Master,sub=Server.tag.zookeeperQuorum",
+                "pointInTime": true,
+                "temporal": false
+            },
+            "ClusterRequests": {
+                "metric": "Hadoop:service=HBase,name=Master,sub=Server.clusterRequests",
+                "pointInTime": true,
+                "temporal": false
+            }
+        }
+    }
+}
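Each entry maps a friendly metric name to an attribute of a Hadoop JMX MBean
on the HBase master. The same bean is reachable over HTTP via Hadoop's JMX
JSON servlet, so a metric such as MetricAverageLoad can be spot-checked
directly; a rough sketch with placeholder host and port:

  import json
  import urllib2  # Python 2, matching the agent scripts in this patch

  url = ('http://master-host:60010/jmx'
         '?qry=Hadoop:service=HBase,name=Master,sub=Server')
  bean = json.load(urllib2.urlopen(url))['beans'][0]
  print('averageLoad = %s' % bean['averageLoad'])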
diff --git a/app-packages/hbase-win/metainfo.xml b/app-packages/hbase-win/metainfo.xml
new file mode 100644
index 0000000..da6121d
--- /dev/null
+++ b/app-packages/hbase-win/metainfo.xml
@@ -0,0 +1,170 @@
+<?xml version="1.0"?>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+<metainfo>
+  <schemaVersion>2.0</schemaVersion>
+  <application>
+    <name>HBASE</name>
+    <comment>
+      Apache HBase is the Hadoop database, a distributed, scalable, big data store.
+      Requirements:
+      1. Ensure parent dir for path (hbase-site/hbase.rootdir) is accessible to the App owner.
+      2. Ensure ZK root (hbase-site/zookeeper.znode.parent) is unique for the App instance.
+    </comment>
+    <version>${pkg.version}</version>
+    <type>YARN-APP</type>
+    <minHadoopVersion>2.1.0</minHadoopVersion>
+    <exportedConfigs>hbase-site</exportedConfigs>
+    <exportGroups>
+      <exportGroup>
+        <name>QuickLinks</name>
+        <exports>
+          <export>
+            <name>org.apache.slider.jmx</name>
+            <value>http://${HBASE_MASTER_HOST}:${site.hbase-site.hbase.master.info.port}/jmx</value>
+          </export>
+          <export>
+            <name>org.apache.slider.monitor</name>
+            <value>http://${HBASE_MASTER_HOST}:${site.hbase-site.hbase.master.info.port}/master-status</value>
+          </export>
+          <export>
+            <name>org.apache.slider.hbase.rest</name>
+            <value>http://${HBASE_REST_HOST}:${site.global.hbase_rest_port}</value>
+          </export>
+          <export>
+            <name>org.apache.slider.hbase.thrift2</name>
+            <value>http://${HBASE_THRIFT2_HOST}:${site.global.hbase_thrift2_port}</value>
+          </export>
+          <export>
+            <name>org.apache.slider.hbase.thrift</name>
+            <value>http://${HBASE_THRIFT_HOST}:${site.global.hbase_thrift_port}</value>
+          </export>
+        </exports>
+      </exportGroup>
+    </exportGroups>
+    <commandOrders>
+      <commandOrder>
+        <command>HBASE_REGIONSERVER-START</command>
+        <requires>HBASE_MASTER-STARTED</requires>
+      </commandOrder>
+      <commandOrder>
+        <command>HBASE_MASTER-START</command>
+        <requires>HBASE_REST-INSTALLED</requires>
+      </commandOrder>
+    </commandOrders>
+    <components>
+      <component>
+        <name>HBASE_MASTER</name>
+        <category>MASTER</category>
+        <minInstanceCount>1</minInstanceCount>
+        <appExports>QuickLinks-org.apache.slider.jmx,QuickLinks-org.apache.slider.monitor</appExports>
+        <componentExports>
+          <componentExport>
+            <name>org.apache.slider.jmx</name>
+            <value>${THIS_HOST}:${site.hbase-site.hbase.master.info.port}/jmx</value>
+          </componentExport>
+          <componentExport>
+            <name>org.apache.slider.monitor</name>
+            <value>${THIS_HOST}:${site.hbase-site.hbase.master.info.port}/master-status</value>
+          </componentExport>
+        </componentExports>
+        <commandScript>
+          <script>scripts/hbase_master.py</script>
+          <scriptType>PYTHON</scriptType>
+          <timeout>600</timeout>
+        </commandScript>
+      </component>
+
+      <component>
+        <name>HBASE_REGIONSERVER</name>
+        <category>SLAVE</category>
+        <minInstanceCount>0</minInstanceCount>
+        <commandScript>
+          <script>scripts/hbase_regionserver.py</script>
+          <scriptType>PYTHON</scriptType>
+        </commandScript>
+      </component>
+
+      <component>
+        <name>HBASE_REST</name>
+        <category>MASTER</category>
+        <appExports>QuickLinks-org.apache.slider.hbase.rest</appExports>
+        <commandScript>
+          <script>scripts/hbase_rest.py</script>
+          <scriptType>PYTHON</scriptType>
+        </commandScript>
+      </component>
+
+      <component>
+        <name>HBASE_THRIFT</name>
+        <category>MASTER</category>
+        <appExports>QuickLinks-org.apache.slider.hbase.thrift</appExports>
+        <commandScript>
+          <script>scripts/hbase_thrift.py</script>
+          <scriptType>PYTHON</scriptType>
+        </commandScript>
+      </component>
+
+      <component>
+        <name>HBASE_THRIFT2</name>
+        <category>MASTER</category>
+        <minInstanceCount>0</minInstanceCount>
+        <appExports>QuickLinks-org.apache.slider.hbase.thrift2</appExports>
+        <commandScript>
+          <script>scripts/hbase_thrift2.py</script>
+          <scriptType>PYTHON</scriptType>
+        </commandScript>
+      </component>
+    </components>
+
+    <osSpecifics>
+      <osSpecific>
+        <osType>any</osType>
+        <packages>
+          <package>
+            <type>zip</type>
+            <name>files/${pkg.name}</name>
+          </package>
+        </packages>
+      </osSpecific>
+    </osSpecifics>
+
+    <configFiles>
+      <configFile>
+        <type>xml</type>
+        <fileName>hbase-site.xml</fileName>
+        <dictionaryName>hbase-site</dictionaryName>
+      </configFile>
+      <configFile>
+        <type>env</type>
+        <fileName>hbase-env.sh</fileName>
+        <dictionaryName>hbase-env</dictionaryName>
+      </configFile>
+      <configFile>
+        <type>env</type>
+        <fileName>hbase-log4j.properties</fileName>
+        <dictionaryName>hbase-log4j</dictionaryName>
+      </configFile>
+      <configFile>
+        <type>env</type>
+        <fileName>hbase-policy.xml</fileName>
+        <dictionaryName>hbase-policy</dictionaryName>
+      </configFile>
+    </configFiles>
+
+  </application>
+</metainfo>
diff --git a/app-packages/hbase-win/package/scripts/__init__.py b/app-packages/hbase-win/package/scripts/__init__.py
new file mode 100644
index 0000000..5561e10
--- /dev/null
+++ b/app-packages/hbase-win/package/scripts/__init__.py
@@ -0,0 +1,19 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
diff --git a/app-packages/hbase-win/package/scripts/functions.py b/app-packages/hbase-win/package/scripts/functions.py
new file mode 100644
index 0000000..e6e7fb9
--- /dev/null
+++ b/app-packages/hbase-win/package/scripts/functions.py
@@ -0,0 +1,40 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import os
+import re
+import math
+import datetime
+
+from resource_management.core.shell import checked_call
+
+def calc_xmn_from_xms(heapsize_str, xmn_percent, xmn_max):
+  """
+  @param heapsize_str: str (e.g '1000m')
+  @param xmn_percent: float (e.g 0.2)
+  @param xmn_max: integer (e.g 512)
+  """
+  heapsize = int(re.search(r'\d+', heapsize_str).group(0))
+  heapsize_unit = re.search(r'\D+', heapsize_str).group(0)
+  xmn_val = int(math.floor(heapsize * xmn_percent))
+  xmn_val -= xmn_val % 8
+
+  result_xmn_val = xmn_max if xmn_val > xmn_max else xmn_val
+  return str(result_xmn_val) + heapsize_unit
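Plugging in the hbase-env.xml defaults (ratio 0.2, cap 512) for a 1024m heap:
floor(1024 * 0.2) = 204, rounded down to a multiple of 8 gives 200, which is
under the cap, so:

  print(calc_xmn_from_xms('1024m', 0.2, 512))  # -> '200m'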
diff --git a/app-packages/hbase-win/package/scripts/hbase.py b/app-packages/hbase-win/package/scripts/hbase.py
new file mode 100644
index 0000000..0962149
--- /dev/null
+++ b/app-packages/hbase-win/package/scripts/hbase.py
@@ -0,0 +1,61 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+import os
+
+from resource_management import *
+import sys
+import shutil
+
+def hbase(name=None):  # 'master', 'regionserver', 'rest', 'thrift' or 'thrift2'
+  import params
+
+  Directory( params.conf_dir,
+      recursive = True,
+      content = params.input_conf_files_dir
+  )
+
+  Directory (params.tmp_dir,
+             recursive = True
+  )
+
+  Directory (os.path.join(params.local_dir, "jars"),
+             recursive = True
+  )
+
+  XmlConfig( "hbase-site.xml",
+            conf_dir = params.conf_dir,
+            configurations = params.config['configurations']['hbase-site'],
+            owner = params.hbase_user,
+            group = params.user_group
+  )
+
+
+  if params.log4j_props is not None:
+    File(format("{params.conf_dir}/log4j.properties"),
+         group=params.user_group,
+         owner=params.hbase_user,
+         content=params.log4j_props
+    )
+  elif os.path.exists(format("{params.conf_dir}/log4j.properties")):
+    File(format("{params.conf_dir}/log4j.properties"),
+      group=params.user_group,
+      owner=params.hbase_user
+    )
diff --git a/app-packages/hbase-win/package/scripts/hbase_master.py b/app-packages/hbase-win/package/scripts/hbase_master.py
new file mode 100644
index 0000000..47b2409
--- /dev/null
+++ b/app-packages/hbase-win/package/scripts/hbase_master.py
@@ -0,0 +1,63 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import sys
+from resource_management import *
+
+from hbase import hbase
+from hbase_service import hbase_service
+
+         
+class HbaseMaster(Script):
+  def install(self, env):
+    self.install_packages(env)
+    
+  def configure(self, env):
+    import params
+    env.set_params(params)
+
+    hbase(name='master')
+    
+  def start(self, env):
+    import params
+    env.set_params(params)
+    self.configure(env) # for security
+
+    hbase_service( 'master',
+      action = 'start'
+    )
+    
+  def stop(self, env):
+    import params
+    env.set_params(params)
+
+    hbase_service( 'master',
+      action = 'stop'
+    )
+
+  def status(self, env):
+    import status_params
+    env.set_params(status_params)
+    pid_file = format("{pid_dir}/hbase-{hbase_user}-master.pid")
+    check_process_status(pid_file)
+
+
+if __name__ == "__main__":
+  HbaseMaster().execute()
diff --git a/app-packages/hbase-win/package/scripts/hbase_regionserver.py b/app-packages/hbase-win/package/scripts/hbase_regionserver.py
new file mode 100644
index 0000000..daa5732
--- /dev/null
+++ b/app-packages/hbase-win/package/scripts/hbase_regionserver.py
@@ -0,0 +1,63 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import sys
+from resource_management import *
+
+from hbase import hbase
+from hbase_service import hbase_service
+
+         
+class HbaseRegionServer(Script):
+  def install(self, env):
+    self.install_packages(env)
+    
+  def configure(self, env):
+    import params
+    env.set_params(params)
+
+    hbase(name='regionserver')
+      
+  def start(self, env):
+    import params
+    env.set_params(params)
+    self.configure(env) # for security
+
+    hbase_service( 'regionserver',
+      action = 'start'
+    )
+    
+  def stop(self, env):
+    import params
+    env.set_params(params)
+
+    hbase_service( 'regionserver',
+      action = 'stop'
+    )
+
+  def status(self, env):
+    import status_params
+    env.set_params(status_params)
+    pid_file = format("{pid_dir}/hbase-{hbase_user}-regionserver.pid")
+    check_process_status(pid_file)
+    
+
+if __name__ == "__main__":
+  HbaseRegionServer().execute()
diff --git a/app-packages/hbase-win/package/scripts/hbase_rest.py b/app-packages/hbase-win/package/scripts/hbase_rest.py
new file mode 100644
index 0000000..36b51f9
--- /dev/null
+++ b/app-packages/hbase-win/package/scripts/hbase_rest.py
@@ -0,0 +1,62 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import sys
+from resource_management import *
+
+from hbase import hbase
+from hbase_service import hbase_service
+
+         
+class HbaseRest(Script):
+  def install(self, env):
+    self.install_packages(env)
+    
+  def configure(self, env):
+    import params
+    env.set_params(params)
+
+    hbase(name='rest')
+      
+  def start(self, env):
+    import params
+    env.set_params(params)
+    self.configure(env) # for security
+
+    hbase_service( 'rest',
+      action = 'start'
+    )
+    
+  def stop(self, env):
+    import params
+    env.set_params(params)
+
+    hbase_service( 'rest',
+      action = 'stop'
+    )
+
+  def status(self, env):
+    import status_params
+    env.set_params(status_params)
+    pid_file = format("{pid_dir}/hbase-{hbase_user}-rest.pid")
+    check_process_status(pid_file)
+    
+if __name__ == "__main__":
+  HbaseRest().execute()
diff --git a/app-packages/hbase-win/package/scripts/hbase_service.py b/app-packages/hbase-win/package/scripts/hbase_service.py
new file mode 100644
index 0000000..e269531
--- /dev/null
+++ b/app-packages/hbase-win/package/scripts/hbase_service.py
@@ -0,0 +1,69 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+from resource_management import *
+
+
+def hbase_service(
+    name,
+    action='start'):  # 'start' or 'stop' or 'status'
+
+  import params
+
+  pid_file = format("{pid_dir}/hbase-{hbase_user}-{name}.pid")
+  custom_port = None
+  custom_info_port = None
+  heap_size = params.master_heapsize
+  main_class = "org.apache.hadoop.hbase.master.HMaster"
+  if name == "regionserver":
+    heap_size = params.regionserver_heapsize
+    main_class = "org.apache.hadoop.hbase.regionserver.HRegionServer"
+  if name == "rest":
+    heap_size = params.restserver_heapsize
+    main_class = "org.apache.hadoop.hbase.rest.RESTServer"
+    custom_port = params.rest_port
+  if name == "thrift":
+    heap_size = params.thriftserver_heapsize
+    main_class = "org.apache.hadoop.hbase.thrift.ThriftServer"
+    custom_port = params.thrift_port
+    custom_info_port = params.thrift_info_port
+  if name == "thrift2":
+    heap_size = params.thrift2server_heapsize
+    main_class = "org.apache.hadoop.hbase.thrift2.ThriftServer"
+    custom_port = params.thrift2_port
+    custom_info_port = params.thrift2_info_port
+
+  role_user = format("{hbase_user}-{name}")
+
+  rest_of_the_command = InlineTemplate(params.hbase_env_sh_template, [], heap_size=heap_size, role_user=role_user)()
+
+  process_cmd = format("{java64_home}\\bin\\java {rest_of_the_command} {main_class} {action}")
+
+  if custom_port:
+    process_cmd = format("{process_cmd} -p {custom_port}")
+
+  if custom_info_port:
+    process_cmd = format("{process_cmd} --infoport {custom_info_port}")
+
+  Execute(process_cmd,
+          logoutput=False,
+          wait_for_finish=False,
+          pid_file=pid_file
+  )
\ No newline at end of file
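For the thrift role, for instance, the assembled command ends up shaped like
the line below (placeholder values; the real ones come from params.py and the
rendered hbase-env template):

  # Hypothetical trace of process_cmd for name == 'thrift':
  process_cmd = '{0}\\bin\\java {1} {2} start -p {3}'.format(
      'c:\\java',                                     # java64_home
      '-Xmx1024m ... -classpath "..."',               # rendered env template
      'org.apache.hadoop.hbase.thrift.ThriftServer',  # main_class
      9090)                                           # thrift_port
  print(process_cmd)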
diff --git a/app-packages/hbase-win/package/scripts/hbase_thrift.py b/app-packages/hbase-win/package/scripts/hbase_thrift.py
new file mode 100644
index 0000000..84bfc62
--- /dev/null
+++ b/app-packages/hbase-win/package/scripts/hbase_thrift.py
@@ -0,0 +1,62 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import sys
+from resource_management import *
+
+from hbase import hbase
+from hbase_service import hbase_service
+
+         
+class HbaseThrift(Script):
+  def install(self, env):
+    self.install_packages(env)
+    
+  def configure(self, env):
+    import params
+    env.set_params(params)
+
+    hbase(name='thrift')
+      
+  def start(self, env):
+    import params
+    env.set_params(params)
+    self.configure(env) # for security
+
+    hbase_service( 'thrift',
+      action = 'start'
+    )
+    
+  def stop(self, env):
+    import params
+    env.set_params(params)
+
+    hbase_service( 'thrift',
+      action = 'stop'
+    )
+
+  def status(self, env):
+    import status_params
+    env.set_params(status_params)
+    pid_file = format("{pid_dir}/hbase-{hbase_user}-thrift.pid")
+    check_process_status(pid_file)
+    
+if __name__ == "__main__":
+  HbaseThrift().execute()
diff --git a/app-packages/hbase-win/package/scripts/hbase_thrift2.py b/app-packages/hbase-win/package/scripts/hbase_thrift2.py
new file mode 100644
index 0000000..b72196c
--- /dev/null
+++ b/app-packages/hbase-win/package/scripts/hbase_thrift2.py
@@ -0,0 +1,62 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import sys
+from resource_management import *
+
+from hbase import hbase
+from hbase_service import hbase_service
+
+         
+class HbaseThrift2(Script):
+  def install(self, env):
+    self.install_packages(env)
+    
+  def configure(self, env):
+    import params
+    env.set_params(params)
+
+    hbase(name='thrift2')
+      
+  def start(self, env):
+    import params
+    env.set_params(params)
+    self.configure(env) # for security
+
+    hbase_service( 'thrift2',
+      action = 'start'
+    )
+    
+  def stop(self, env):
+    import params
+    env.set_params(params)
+
+    hbase_service( 'thrift2',
+      action = 'stop'
+    )
+
+  def status(self, env):
+    import status_params
+    env.set_params(status_params)
+    pid_file = format("{pid_dir}/hbase-{hbase_user}-thrift2.pid")
+    check_process_status(pid_file)
+    
+if __name__ == "__main__":
+  HbaseThrift2().execute()
diff --git a/app-packages/hbase-win/package/scripts/params.py b/app-packages/hbase-win/package/scripts/params.py
new file mode 100644
index 0000000..5a54e25
--- /dev/null
+++ b/app-packages/hbase-win/package/scripts/params.py
@@ -0,0 +1,74 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+from functions import calc_xmn_from_xms
+from resource_management import *
+import status_params
+
+# server configurations
+config = Script.get_config()
+
+java64_home = config['hostLevelParams']['java_home']
+hbase_root = config['configurations']['global']['app_root']
+hbase_instance_name = config['configurations']['global']['hbase_instance_name']
+hbase_user = status_params.hbase_user
+user_group = config['configurations']['global']['user_group']
+
+pid_dir = status_params.pid_dir
+tmp_dir = config['configurations']['hbase-site']['hbase.tmp.dir']
+local_dir = substitute_vars(config['configurations']['hbase-site']['hbase.local.dir'],
+                            config['configurations']['hbase-site'])
+conf_dir = format("{hbase_root}/conf")
+log_dir = config['configurations']['global']['app_log_dir']
+input_conf_files_dir = config['configurations']['global']['app_input_conf_dir']
+
+hbase_hdfs_root_dir = config['configurations']['hbase-site']['hbase.rootdir']
+
+"""
+Read various ports
+"""
+rest_port = default("/configurations/global/hbase_rest_port", 1700)
+thrift_port = default("/configurations/global/hbase_thrift_port", 9090)
+thrift2_port = default("/configurations/global/hbase_thrift2_port", 9091)
+thrift_info_port = default("/configurations/global/hbase_info_thrift_port", 9095)
+thrift2_info_port = default("/configurations/global/hbase_info_thrift2_port", 9096)
+
+"""
+Compute or read various heap sizes
+"""
+master_heapsize = config['configurations']['hbase-env']['hbase_master_heapsize']
+regionserver_heapsize = config['configurations']['hbase-env']['hbase_regionserver_heapsize']
+regionserver_xmn_max = config['configurations']['hbase-env']['hbase_regionserver_xmn_max']
+regionserver_xmn_percent = config['configurations']['hbase-env']['hbase_regionserver_xmn_ratio']
+regionserver_xmn_size = calc_xmn_from_xms(regionserver_heapsize, regionserver_xmn_percent, regionserver_xmn_max)
+
+restserver_heapsize = default("/configurations/hbase-env/hbase_restserver_heapsize", "512m")
+thriftserver_heapsize = default("/configurations/hbase-env/hbase_thriftserver_heapsize", "512m")
+thrift2server_heapsize = default("/configurations/hbase-env/hbase_thrift2server_heapsize", "512m")
+
+hbase_env_sh_template = config['configurations']['hbase-env']['content']
+java_library_path = config['configurations']['global']['java_library_path']
+hbase_additional_cp = config['configurations']['global']['hbase_additional_cp']
+
+# log4j.properties
+if (('hbase-log4j' in config['configurations']) and ('content' in config['configurations']['hbase-log4j'])):
+  log4j_props = config['configurations']['hbase-log4j']['content']
+else:
+  log4j_props = None
\ No newline at end of file
diff --git a/app-packages/hbase-win/package/scripts/status_params.py b/app-packages/hbase-win/package/scripts/status_params.py
new file mode 100644
index 0000000..c18cbb9
--- /dev/null
+++ b/app-packages/hbase-win/package/scripts/status_params.py
@@ -0,0 +1,26 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+from resource_management import *
+
+config = Script.get_config()
+
+pid_dir = config['configurations']['global']['app_pid_dir']
+hbase_user = config['configurations']['global']['app_user']
diff --git a/slider-core/src/main/java/org/apache/slider/core/registry/info/CommonRegistryConstants.java b/app-packages/hbase-win/package/templates/hbase_client_jaas.conf.j2
similarity index 83%
rename from slider-core/src/main/java/org/apache/slider/core/registry/info/CommonRegistryConstants.java
rename to app-packages/hbase-win/package/templates/hbase_client_jaas.conf.j2
index a286ba4..bb4279c 100644
--- a/slider-core/src/main/java/org/apache/slider/core/registry/info/CommonRegistryConstants.java
+++ b/app-packages/hbase-win/package/templates/hbase_client_jaas.conf.j2
@@ -15,11 +15,8 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-
-package org.apache.slider.core.registry.info;
-
-public class CommonRegistryConstants {
-
-  public static final String WEB_UI = "org.apache.http.UI";
-  
-}
+Client {
+com.sun.security.auth.module.Krb5LoginModule required
+useKeyTab=false
+useTicketCache=true;
+};
diff --git a/slider-core/src/main/java/org/apache/slider/core/registry/info/CommonRegistryConstants.java b/app-packages/hbase-win/package/templates/hbase_master_jaas.conf.j2
similarity index 80%
copy from slider-core/src/main/java/org/apache/slider/core/registry/info/CommonRegistryConstants.java
copy to app-packages/hbase-win/package/templates/hbase_master_jaas.conf.j2
index a286ba4..91ce3ef 100644
--- a/slider-core/src/main/java/org/apache/slider/core/registry/info/CommonRegistryConstants.java
+++ b/app-packages/hbase-win/package/templates/hbase_master_jaas.conf.j2
@@ -15,11 +15,11 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-
-package org.apache.slider.core.registry.info;
-
-public class CommonRegistryConstants {
-
-  public static final String WEB_UI = "org.apache.http.UI";
-  
-}
+Client {
+com.sun.security.auth.module.Krb5LoginModule required
+useKeyTab=true
+storeKey=true
+useTicketCache=false
+keyTab="{{master_keytab_path}}"
+principal="{{master_jaas_princ}}";
+};
diff --git a/slider-core/src/main/java/org/apache/slider/core/registry/info/CommonRegistryConstants.java b/app-packages/hbase-win/package/templates/hbase_regionserver_jaas.conf.j2
similarity index 79%
copy from slider-core/src/main/java/org/apache/slider/core/registry/info/CommonRegistryConstants.java
copy to app-packages/hbase-win/package/templates/hbase_regionserver_jaas.conf.j2
index a286ba4..2a9b9f3 100644
--- a/slider-core/src/main/java/org/apache/slider/core/registry/info/CommonRegistryConstants.java
+++ b/app-packages/hbase-win/package/templates/hbase_regionserver_jaas.conf.j2
@@ -15,11 +15,11 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-
-package org.apache.slider.core.registry.info;
-
-public class CommonRegistryConstants {
-
-  public static final String WEB_UI = "org.apache.http.UI";
-  
-}
+Client {
+com.sun.security.auth.module.Krb5LoginModule required
+useKeyTab=true
+storeKey=true
+useTicketCache=false
+keyTab="{{regionserver_keytab_path}}"
+principal="{{regionserver_jaas_princ}}";
+};
diff --git a/app-packages/hbase-win/pom.xml b/app-packages/hbase-win/pom.xml
new file mode 100644
index 0000000..55c4c53
--- /dev/null
+++ b/app-packages/hbase-win/pom.xml
@@ -0,0 +1,91 @@
+<?xml version="1.0"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+  <!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+  <parent>
+    <groupId>org.apache.slider</groupId>
+    <artifactId>slider</artifactId>
+    <version>0.60.0-incubating</version>
+    <relativePath>../../pom.xml</relativePath>
+  </parent>
+  <modelVersion>4.0.0</modelVersion>
+  <artifactId>slider-hbase-app-win-package</artifactId>
+  <packaging>pom</packaging>
+  <name>Slider HBase App Package for Windows</name>
+  <description>Slider HBase App Package for Windows</description>
+  <version>${pkg.version}</version>
+  <properties>
+    <work.dir>package-tmp</work.dir>
+    <app.package.name>${project.artifactId}-${project.version}</app.package.name>
+  </properties>
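+
+  <!-- pkg.version, pkg.name and pkg.src are expected on the mvn command line,
+       e.g. -Dpkg.version=... -Dpkg.name=... -Dpkg.src=..., mirroring the flags
+       documented for the HBase app package in app-packages/hbase/README.md. -->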
+
+  <profiles>
+    <profile>
+      <id>hbase-app-package-win</id>
+      <build>
+        <plugins>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-antrun-plugin</artifactId>
+            <version>1.7</version>
+            <executions>
+              <execution>
+                <id>copy</id>
+                <phase>validate</phase>
+                <configuration>
+                  <target name="copy and rename file">
+                    <copy file="${pkg.src}/${pkg.name}" tofile="${project.build.directory}/${pkg.name}"/>
+                  </target>
+                </configuration>
+                <goals>
+                  <goal>run</goal>
+                </goals>
+              </execution>
+            </executions>
+          </plugin>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-assembly-plugin</artifactId>
+            <configuration>
+              <tarLongFileMode>gnu</tarLongFileMode>
+              <descriptor>src/assembly/hbase.xml</descriptor>
+              <appendAssemblyId>false</appendAssemblyId>
+            </configuration>
+            <executions>
+              <execution>
+                <id>build-tarball</id>
+                <phase>package</phase>
+                <goals>
+                  <goal>single</goal>
+                </goals>
+              </execution>
+            </executions>
+          </plugin>
+
+        </plugins>
+      </build>
+    </profile>
+  </profiles>
+
+  <build>
+  </build>
+
+  <dependencies>
+  </dependencies>
+
+</project>
diff --git a/app-packages/hbase/resources.json b/app-packages/hbase-win/resources-default.json
similarity index 69%
rename from app-packages/hbase/resources.json
rename to app-packages/hbase-win/resources-default.json
index d2fdbd8..93dc17c 100644
--- a/app-packages/hbase/resources.json
+++ b/app-packages/hbase-win/resources-default.json
@@ -3,34 +3,37 @@
   "metadata": {
   },
   "global": {
+    "yarn.log.include.patterns": "",
+    "yarn.log.exclude.patterns": ""
   },
   "components": {
     "HBASE_MASTER": {
       "yarn.role.priority": "1",
       "yarn.component.instances": "1",
-      "yarn.memory": "256"
+      "yarn.memory": "512"
     },
     "slider-appmaster": {
+      "yarn.memory": "1024"
     },
     "HBASE_REGIONSERVER": {
       "yarn.role.priority": "2",
       "yarn.component.instances": "1",
-      "yarn.memory": "256"
+      "yarn.memory": "512"
     },
     "HBASE_REST": {
       "yarn.role.priority": "3",
       "yarn.component.instances": "1",
-      "yarn.memory": "256"
+      "yarn.memory": "512"
     },
     "HBASE_THRIFT": {
       "yarn.role.priority": "4",
       "yarn.component.instances": "1",
-      "yarn.memory": "256"
+      "yarn.memory": "512"
     },
     "HBASE_THRIFT2": {
       "yarn.role.priority": "5",
-      "yarn.component.instances": "1",
-      "yarn.memory": "256"
+      "yarn.component.instances": "0",
+      "yarn.memory": "512"
     }
   }
 }
diff --git a/app-packages/hbase-win/src/assembly/hbase.xml b/app-packages/hbase-win/src/assembly/hbase.xml
new file mode 100644
index 0000000..a94c827
--- /dev/null
+++ b/app-packages/hbase-win/src/assembly/hbase.xml
@@ -0,0 +1,68 @@
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~  or more contributor license agreements.  See the NOTICE file
+  ~  distributed with this work for additional information
+  ~  regarding copyright ownership.  The ASF licenses this file
+  ~  to you under the Apache License, Version 2.0 (the
+  ~  "License"); you may not use this file except in compliance
+  ~  with the License.  You may obtain a copy of the License at
+  ~
+  ~       http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~  Unless required by applicable law or agreed to in writing, software
+  ~  distributed under the License is distributed on an "AS IS" BASIS,
+  ~  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  ~  See the License for the specific language governing permissions and
+  ~  limitations under the License.
+  -->
+
+
+<assembly
+  xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0 http://maven.apache.org/xsd/assembly-1.1.0.xsd">
+  <id>hbase_v${hbase.version}</id>
+  <formats>
+    <format>zip</format>
+    <format>dir</format>
+  </formats>
+  <includeBaseDirectory>false</includeBaseDirectory>
+
+  <files>
+    <file>
+      <source>appConfig-default.json</source>
+      <outputDirectory>/</outputDirectory>
+      <filtered>true</filtered>
+      <fileMode>0755</fileMode>
+    </file>
+    <file>
+      <source>metainfo.xml</source>
+      <outputDirectory>/</outputDirectory>
+      <filtered>true</filtered>
+      <fileMode>0755</fileMode>
+    </file>
+    <file>
+      <source>${pkg.src}/${pkg.name}</source>
+      <outputDirectory>package/files</outputDirectory>
+      <filtered>false</filtered>
+      <fileMode>0755</fileMode>
+    </file>
+  </files>
+
+  <fileSets>
+    <fileSet>
+      <directory>${project.basedir}</directory>
+      <outputDirectory>/</outputDirectory>
+      <excludes>
+        <exclude>pom.xml</exclude>
+        <exclude>src/**</exclude>
+        <exclude>target/**</exclude>
+        <exclude>appConfig-default.json</exclude>
+        <exclude>metainfo.xml</exclude>
+      </excludes>
+      <fileMode>0755</fileMode>
+      <directoryMode>0755</directoryMode>
+    </fileSet>
+
+  </fileSets>
+</assembly>
diff --git a/app-packages/hbase/README.md b/app-packages/hbase/README.md
new file mode 100644
index 0000000..2d52fc9
--- /dev/null
+++ b/app-packages/hbase/README.md
@@ -0,0 +1,84 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+# Create Slider App Package for HBase
+
+appConfig-default.json and resources-default.json do not need to be included in the
+package; they are provided as reference configurations for Slider apps and are
+suitable for a one-node cluster.
+
+* OPTION-I: Use a downloaded tarball
+* OPTION-II: Use the tarball from the local repository
+
+## OPTION - I 
+
+To create the app package, you will need the HBase tarball; invoke the mvn command
+with the appropriate parameters.
+
+Command:
+
+    mvn clean package -Phbase-app-package -Dpkg.version=<version>
+       -Dpkg.name=<file name of app tarball> -Dpkg.src=<folder location where the pkg is available>
+
+Example:
+
+    mvn clean package -Phbase-app-package -Dpkg.version=0.98.5-hadoop2
+      -Dpkg.name=hbase-0.98.5-hadoop2-bin.tar.gz
+      -Dpkg.src=/Users/user1/Downloads/0.98.5-hadoop2
+
+App package can be found in
+
+    app-packages/hbase/target/slider-hbase-app-package-${pkg.version}.zip
+
+## OPTION - II 
+
+You need the HBase tarball available in your local Maven repository to create the Slider App Package for HBase.
+
+Download the tarball for HBase:
+  e.g. path to tarball `~/Downloads/hbase-0.98.3-hadoop2-bin.tar.gz`
+
+The version of HBase used for the app package can be adjusted by adding a
+flag such as
+
+    -Dhbase.version=0.98.3-hadoop2
+
+Use the following command to install the HBase tarball in your local Maven repository:
+
+    mvn install:install-file -Dfile=<path-to-tarball> -DgroupId=org.apache.hbase -DartifactId=hbase -Dversion=0.98.3-hadoop2 -Dclassifier=bin -Dpackaging=tar.gz
+
+If the above step does not publish the tarball, you may need to copy it manually to:
+
+    ~/.m2/repository/org/apache/hbase/hbase/0.98.3-hadoop2/
+
+After the HBase tarball is published to your local Maven repository, you can use the following command:
+
+    mvn clean package -DskipTests -Phbase-app-package
+
+App package can be found in
+
+    app-packages/hbase/target/apache-slider-hbase-${hbase.version}-app-package-${slider.version}.zip
+
+If an HBase version older than 0.98.3 is desired, it must be installed in the local Maven repository.
+
+A shorter file name can be specified with
+
+    -Dapp.package.name=HBase_98dot3
+
+which would create the file HBase_98dot3.zip.
+
+## Verifying the content 
+
+Verify the content using
+
+    zip -Tv apache-slider-hbase-*.zip
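+
+## Deploying the package (a sketch)
+
+The steps below are a minimal sketch, assuming a configured Slider client, a
+running YARN/HDFS cluster, and the OPTION-I package name; adjust paths and
+instance names for your environment:
+
+    # upload the app package to the location referenced by application.def
+    hdfs dfs -mkdir -p .slider/package/HBASE
+    hdfs dfs -put slider-hbase-app-package-*.zip .slider/package/HBASE/
+
+    # create an application instance from the reference configurations
+    slider create hbase1 --template appConfig-default.json --resources resources-default.json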
diff --git a/app-packages/hbase/README.txt b/app-packages/hbase/README.txt
deleted file mode 100644
index 1d5c4bb..0000000
--- a/app-packages/hbase/README.txt
+++ /dev/null
@@ -1,75 +0,0 @@
-<!---
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-
-Create Slider App Package for HBase
-
-While appConfig.json and resources.json are not required for the package they
-work well as the default configuration for Slider apps. So it is advisable that
-when you create an application package for Slider, include sample/default
-resources.json and appConfig.json for a minimal Yarn cluster.
-
-OPTION-I: Use mvn command
-OPTION-II: Manual
-
-****** OPTION - I (use mvn command) **
-You need the HBase version available on local maven repo to create the Slider App Package for HBase.
-
-Download the tarball for HBase:
-  e.g. path to tarball ~/Downloads/hbase-0.98.3-hadoop2-bin.tar.gz
-
-The version of HBase used for the app package can be adjusted by adding a
-flag such as
-  -Dhbase.version=0.98.3-hadoop2
-
-Use the following command to install HBase tarball locally (under local workspace of HBase repo):
-  mvn install:install-file -Dfile=<path-to-tarball> -DgroupId=org.apache.hbase -DartifactId=hbase -Dversion=0.98.3-hadoop2 -Dclassifier=bin -Dpackaging=tar.gz
-
-You may need to copy the hbase tarball to the following location if the above step doesn't publish the tarball:
-~/.m2/repository/org/apache/hbase/hbase/0.98.3-hadoop2/
-
-After HBase tarball is published locally in maven repository, you can use the following command:
-  mvn clean package -DskipTests -Phbase-app-package
-
-App package can be found in
-  app-packages/hbase/target/apache-slider-hbase-${hbase.version}-app-package-${slider.version}.zip
-
-Verify the content using
-  zip -Tv apache-slider-hbase-*.zip
-
-If an HBase version older than 0.98.3 is desired, it must be installed in the local maven repo.
-
-A less descriptive file name can be specified with
-  -Dapp.package.name=HBase_98dot3 which would create a file HBase_98dot3.zip.
-
-****** OPTION - II (manual) **
-The Slider App Package for HBase can also be created manually.
-
-Download the tarball for HBase:
-  e.g. path to tarball ~/Downloads/hbase-0.98.3-hadoop2-bin.tar.gz
-
-Copy the hbase tarball to package/files
-  cp ~/Downloads/hbase-0.98.3-hadoop2-bin.tar.gz package/files
-
-Edit appConfig.json/metainfo.xml
-  Replace 4 occurrences of "${hbase.version}" with the hbase version values such as "0.98.3-hadoop2"
-  Replace 1 occurrence of "${app.package.name}" with the desired app package name, e.g. "hbase-v098"
-
-Create a zip package at the root of the package (<slider enlistment>/app-packages/hbase/)
-  zip -r hbase-v098.zip .
-
-Verify the content using
-  zip -Tv hbase-v098.zip
diff --git a/app-packages/hbase/appConfig-default.json b/app-packages/hbase/appConfig-default.json
new file mode 100644
index 0000000..52587e5
--- /dev/null
+++ b/app-packages/hbase/appConfig-default.json
@@ -0,0 +1,46 @@
+{
+    "schema": "http://example.org/specification/v2.0.0",
+    "metadata": {
+    },
+    "global": {
+        "application.def": ".slider/package/HBASE/slider-hbase-app-package-${pkg.version}.zip",
+        "create.default.zookeeper.node": "true",
+        "java_home": "/usr/jdk64/jdk1.7.0_67",
+        "system_configs": "core-site",
+
+        "site.global.app_user": "yarn",
+        "site.global.app_root": "${AGENT_WORK_ROOT}/app/install/hbase-${pkg.version}",
+
+        "site.global.ganglia_server_host": "${NN_HOST}",
+        "site.global.ganglia_server_port": "8667",
+        "site.global.ganglia_server_id": "Application1",
+        "site.global.ganglia_enabled":"true",
+
+        "site.global.hbase_instance_name": "instancename",
+        "site.global.hbase_root_password": "secret",
+        "site.global.user_group": "hadoop",
+        "site.global.monitor_protocol": "http",
+        "site.global.hbase_thrift_port": "${HBASE_THRIFT.ALLOCATED_PORT}",
+        "site.global.hbase_thrift2_port": "${HBASE_THRIFT2.ALLOCATED_PORT}",
+        "site.global.hbase_rest_port": "${HBASE_REST.ALLOCATED_PORT}",
+
+        "site.hbase-env.hbase_master_heapsize": "1024m",
+        "site.hbase-env.hbase_regionserver_heapsize": "1024m",
+
+        "site.hbase-site.hbase.rootdir": "${DEFAULT_DATA_DIR}",
+        "site.hbase-site.hbase.superuser": "${USER_NAME}",
+        "site.hbase-site.hbase.tmp.dir": "${AGENT_WORK_ROOT}/work/app/tmp",
+        "site.hbase-site.hbase.local.dir": "${hbase.tmp.dir}/local",
+        "site.hbase-site.hbase.zookeeper.quorum": "${ZK_HOST}",
+        "site.hbase-site.zookeeper.znode.parent": "${DEFAULT_ZK_PATH}",
+        "site.hbase-site.hbase.regionserver.info.port": "0",
+        "site.hbase-site.hbase.master.info.port": "${HBASE_MASTER.ALLOCATED_PORT}",
+        "site.hbase-site.hbase.regionserver.port": "0",
+        "site.hbase-site.hbase.master.port": "0"
+    },
+    "components": {
+        "slider-appmaster": {
+            "jvm.heapsize": "1024M"
+        }
+    }
+}
diff --git a/app-packages/hbase/appConfig-secured-default.json b/app-packages/hbase/appConfig-secured-default.json
new file mode 100644
index 0000000..2a2b08f
--- /dev/null
+++ b/app-packages/hbase/appConfig-secured-default.json
@@ -0,0 +1,63 @@
+{
+    "schema": "http://example.org/specification/v2.0.0",
+    "metadata": {
+    },
+    "global": {
+        "application.def": ".slider/package/HBASE/slider-hbase-app-package-${pkg.version}.zip",
+        "create.default.zookeeper.node": "true",
+        "java_home": "/usr/jdk64/jdk1.7.0_67",
+        "system_configs": "core-site,hdfs-site",
+
+        "site.global.app_user": "${USER_NAME}",
+        "site.global.app_root": "${AGENT_WORK_ROOT}/app/install/hbase-${pkg.version}",
+
+        "site.global.ganglia_server_host": "${NN_HOST}",
+        "site.global.ganglia_server_port": "8667",
+        "site.global.ganglia_server_id": "Application1",
+        "site.global.ganglia_enabled":"true",
+
+        "site.global.hbase_instance_name": "instancename",
+        "site.global.hbase_root_password": "secret",
+        "site.global.user_group": "hadoop",
+        "site.global.monitor_protocol": "http",
+        "site.global.hbase_thrift_port": "${HBASE_THRIFT.ALLOCATED_PORT}",
+        "site.global.hbase_thrift2_port": "${HBASE_THRIFT2.ALLOCATED_PORT}",
+        "site.global.hbase_rest_port": "${HBASE_REST.ALLOCATED_PORT}",
+
+        "site.hbase-env.hbase_master_heapsize": "1024m",
+        "site.hbase-env.hbase_regionserver_heapsize": "1024m",
+
+        "site.hbase-site.hbase.rootdir": "${DEFAULT_DATA_DIR}",
+        "site.hbase-site.hbase.superuser": "${USER_NAME}",
+        "site.hbase-site.hbase.tmp.dir": "${AGENT_WORK_ROOT}/work/app/tmp",
+        "site.hbase-site.hbase.local.dir": "${hbase.tmp.dir}/local",
+        "site.hbase-site.hbase.zookeeper.quorum": "${ZK_HOST}",
+        "site.hbase-site.zookeeper.znode.parent": "${DEFAULT_ZK_PATH}",
+        "site.hbase-site.hbase.regionserver.info.port": "0",
+        "site.hbase-site.hbase.master.info.port": "${HBASE_MASTER.ALLOCATED_PORT}",
+        "site.hbase-site.hbase.regionserver.port": "0",
+        "site.hbase-site.hbase.master.port": "0",
+
+        "site.hbase-site.hbase.security.authentication": "kerberos",
+        "site.hbase-site.hbase.security.authorization": "true",
+        "site.hbase-site.hbase.security.access.early_out": "true",
+        "site.hbase-site.hbase.coprocessor.master.classes": "org.apache.hadoop.hbase.security.access.AccessController",
+        "site.hbase-site.hbase.coprocessor.region.classes": "org.apache.hadoop.hbase.security.token.TokenProvider,org.apache.hadoop.hbase.security.access.AccessController",
+        "site.hbase-site.hbase.regionserver.kerberos.principal": "${USER_NAME}/_HOST@EXAMPLE.COM",
+        "site.hbase-site.hbase.regionserver.keytab.file": "${AGENT_WORK_ROOT}/keytabs/${USER_NAME}.HBASE.service.keytab",
+        "site.hbase-site.hbase.master.kerberos.principal": "${USER_NAME}/_HOST@EXAMPLE.COM",
+        "site.hbase-site.hbase.master.keytab.file": "${AGENT_WORK_ROOT}/keytabs/${USER_NAME}.HBASE.service.keytab",
+        "site.hbase-site.hbase.rest.kerberos.principal": "${USER_NAME}/_HOST@EXAMPLE.COM",
+        "site.hbase-site.hbase.rest.keytab.file": "${AGENT_WORK_ROOT}/keytabs/${USER_NAME}.HBASE.service.keytab",
+        "site.hbase-site.hbase.thrift.kerberos.principal": "${USER_NAME}/_HOST@EXAMPLE.COM",
+        "site.hbase-site.hbase.thrift.keytab.file": "${AGENT_WORK_ROOT}/keytabs/${USER_NAME}.HBASE.service.keytab"
+    },
+    "components": {
+        "slider-appmaster": {
+            "jvm.heapsize": "1024M",
+            "slider.hdfs.keytab.dir": ".slider/keytabs/hbase",
+            "slider.am.login.keytab.name": "${USER_NAME}.headless.keytab",
+            "slider.keytab.principal.name": "${USER_NAME}"
+        }
+    }
+}
diff --git a/app-packages/hbase/appConfig.json b/app-packages/hbase/appConfig.json
deleted file mode 100644
index d00ae6d..0000000
--- a/app-packages/hbase/appConfig.json
+++ /dev/null
@@ -1,70 +0,0 @@
-{
-  "schema": "http://example.org/specification/v2.0.0",
-  "metadata": {
-  },
-  "global": {
-    "application.def": "${app.package.name}.zip",
-    "create.default.zookeeper.node": "true",
-    "config_types": "core-site,hdfs-site,hbase-site",
-    "java_home": "/usr/jdk64/jdk1.7.0_45",
-    "package_list": "files/hbase-${hbase.version}-bin.tar.gz",
-    "site.global.app_user": "yarn",
-    "site.global.app_log_dir": "${AGENT_LOG_ROOT}",
-    "site.global.app_pid_dir": "${AGENT_WORK_ROOT}/app/run",
-    "site.global.app_root": "${AGENT_WORK_ROOT}/app/install/hbase-${hbase.version}",
-    "site.global.app_install_dir": "${AGENT_WORK_ROOT}/app/install",
-    "site.global.hbase_master_heapsize": "1024m",
-    "site.global.hbase_regionserver_heapsize": "1024m",
-    "site.global.hbase_instance_name": "instancename",
-    "site.global.hbase_root_password": "secret",
-    "site.global.user_group": "hadoop",
-    "site.global.security_enabled": "false",
-    "site.global.monitor_protocol": "http",
-    "site.global.ganglia_server_host": "${NN_HOST}",
-    "site.global.ganglia_server_port": "8667",
-    "site.global.ganglia_server_id": "Application1",
-    "site.global.hbase_thrift_port": "${HBASE_THRIFT.ALLOCATED_PORT}",
-    "site.global.hbase_thrift2_port": "${HBASE_THRIFT2.ALLOCATED_PORT}",
-    "site.global.hbase_rest_port": "${HBASE_REST.ALLOCATED_PORT}",
-    "site.hbase-site.hbase.hstore.flush.retries.number": "120",
-    "site.hbase-site.hbase.client.keyvalue.maxsize": "10485760",
-    "site.hbase-site.hbase.hstore.compactionThreshold": "3",
-    "site.hbase-site.hbase.rootdir": "${DEFAULT_DATA_DIR}/data",
-    "site.hbase-site.hbase.stagingdir": "${DEFAULT_DATA_DIR}/staging",
-    "site.hbase-site.hbase.regionserver.handler.count": "60",
-    "site.hbase-site.hbase.regionserver.global.memstore.lowerLimit": "0.38",
-    "site.hbase-site.hbase.hregion.memstore.block.multiplier": "2",
-    "site.hbase-site.hbase.hregion.memstore.flush.size": "134217728",
-    "site.hbase-site.hbase.superuser": "yarn",
-    "site.hbase-site.hbase.zookeeper.property.clientPort": "2181",
-    "site.hbase-site.hbase.regionserver.global.memstore.upperLimit": "0.4",
-    "site.hbase-site.zookeeper.session.timeout": "30000",
-    "site.hbase-site.hbase.tmp.dir": "${AGENT_WORK_ROOT}/work/app/tmp",
-    "site.hbase-site.hbase.local.dir": "${hbase.tmp.dir}/local",
-    "site.hbase-site.hbase.hregion.max.filesize": "10737418240",
-    "site.hbase-site.hfile.block.cache.size": "0.40",
-    "site.hbase-site.hbase.security.authentication": "simple",
-    "site.hbase-site.hbase.defaults.for.version.skip": "true",
-    "site.hbase-site.hbase.zookeeper.quorum": "${ZK_HOST}",
-    "site.hbase-site.zookeeper.znode.parent": "${DEF_ZK_PATH}",
-    "site.hbase-site.hbase.hstore.blockingStoreFiles": "10",
-    "site.hbase-site.hbase.hregion.majorcompaction": "86400000",
-    "site.hbase-site.hbase.security.authorization": "false",
-    "site.hbase-site.hbase.cluster.distributed": "true",
-    "site.hbase-site.hbase.hregion.memstore.mslab.enabled": "true",
-    "site.hbase-site.hbase.client.scanner.caching": "100",
-    "site.hbase-site.hbase.zookeeper.useMulti": "true",
-    "site.hbase-site.hbase.regionserver.info.port": "0",
-    "site.hbase-site.hbase.master.info.port": "${HBASE_MASTER.ALLOCATED_PORT}",
-    "site.hbase-site.hbase.regionserver.port": "0"
-  },
-  "components": {
-    "HBASE_MASTER": {
-    },
-    "slider-appmaster": {
-      "jvm.heapsize": "256M"
-    },
-    "HBASE_REGIONSERVER": {
-    }
-  }
-}
diff --git a/app-packages/hbase/configuration/global.xml b/app-packages/hbase/configuration/global.xml
deleted file mode 100644
index b2c57bd..0000000
--- a/app-packages/hbase/configuration/global.xml
+++ /dev/null
@@ -1,160 +0,0 @@
-<?xml version="1.0"?>
-<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
-<!--
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
--->
-
-<configuration>
-  <property>
-    <name>hbasemaster_host</name>
-    <value></value>
-    <description>HBase Master Host.</description>
-  </property>
-  <property>
-    <name>regionserver_hosts</name>
-    <value></value>
-    <description>Region Server Hosts</description>
-  </property>
-  <property>
-    <name>hbase_log_dir</name>
-    <value>/var/log/hbase</value>
-    <description>Log Directories for HBase.</description>
-  </property>
-  <property>
-    <name>hbase_pid_dir</name>
-    <value>/var/run/hbase</value>
-    <description>Log Directories for HBase.</description>
-  </property>
-  <property>
-    <name>hbase_log_dir</name>
-    <value>/var/log/hbase</value>
-    <description>Log Directories for HBase.</description>
-  </property>
-  <property>
-    <name>hbase_regionserver_heapsize</name>
-    <value>1024</value>
-    <description>Log Directories for HBase.</description>
-  </property>
-  <property>
-    <name>hbase_master_heapsize</name>
-    <value>1024</value>
-    <description>HBase Master Heap Size</description>
-  </property>
-  <property>
-    <name>hstore_compactionthreshold</name>
-    <value>3</value>
-    <description>HBase HStore compaction threshold.</description>
-  </property>
-  <property>
-    <name>hfile_blockcache_size</name>
-    <value>0.40</value>
-    <description>HFile block cache size.</description>
-  </property>
-  <property>
-    <name>hstorefile_maxsize</name>
-    <value>10737418240</value>
-    <description>Maximum HStoreFile Size</description>
-  </property>
-    <property>
-    <name>regionserver_handlers</name>
-    <value>60</value>
-    <description>HBase RegionServer Handler</description>
-  </property>
-    <property>
-    <name>hregion_majorcompaction</name>
-    <value>604800000</value>
-    <description>The time between major compactions of all HStoreFiles in a region. Set to 0 to disable automated major compactions.</description>
-  </property>
-    <property>
-    <name>hregion_blockmultiplier</name>
-    <value>2</value>
-    <description>HBase Region Block Multiplier</description>
-  </property>
-    <property>
-    <name>hregion_memstoreflushsize</name>
-    <value></value>
-    <description>HBase Region MemStore Flush Size.</description>
-  </property>
-    <property>
-    <name>client_scannercaching</name>
-    <value>100</value>
-    <description>Base Client Scanner Caching</description>
-  </property>
-    <property>
-    <name>zookeeper_sessiontimeout</name>
-    <value>30000</value>
-    <description>ZooKeeper Session Timeout</description>
-  </property>
-    <property>
-    <name>hfile_max_keyvalue_size</name>
-    <value>10485760</value>
-    <description>HBase Client Maximum key-value Size</description>
-  </property>
-  <property>
-    <name>hbase_hdfs_root_dir</name>
-    <value>/apps/hbase/data</value>
-    <description>HBase Relative Path to HDFS.</description>
-  </property>
-   <property>
-    <name>hbase_conf_dir</name>
-    <value>/etc/hbase</value>
-    <description>Config Directory for HBase.</description>
-  </property>
-   <property>
-    <name>hdfs_enable_shortcircuit_read</name>
-    <value>true</value>
-    <description>HDFS Short Circuit Read</description>
-  </property>
-   <property>
-    <name>hdfs_support_append</name>
-    <value>true</value>
-    <description>HDFS append support</description>
-  </property>
-   <property>
-    <name>hstore_blockingstorefiles</name>
-    <value>10</value>
-    <description>HStore blocking storefiles.</description>
-  </property>
-   <property>
-    <name>regionserver_memstore_lab</name>
-    <value>true</value>
-    <description>Region Server memstore.</description>
-  </property>
-   <property>
-    <name>regionserver_memstore_lowerlimit</name>
-    <value>0.38</value>
-    <description>Region Server memstore lower limit.</description>
-  </property>
-   <property>
-    <name>regionserver_memstore_upperlimit</name>
-    <value>0.4</value>
-    <description>Region Server memstore upper limit.</description>
-  </property>
-   <property>
-    <name>hbase_conf_dir</name>
-    <value>/etc/hbase</value>
-    <description>HBase conf dir.</description>
-  </property>
-   <property>
-    <name>hbase_user</name>
-    <value>hbase</value>
-    <description>HBase User Name.</description>
-  </property>
-
-</configuration>
diff --git a/app-packages/hbase/package/templates/hbase-env.sh.j2 b/app-packages/hbase/configuration/hbase-env.xml
similarity index 62%
rename from app-packages/hbase/package/templates/hbase-env.sh.j2
rename to app-packages/hbase/configuration/hbase-env.xml
index 4aa79ad..554c3c5 100644
--- a/app-packages/hbase/package/templates/hbase-env.sh.j2
+++ b/app-packages/hbase/configuration/hbase-env.xml
@@ -1,20 +1,52 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements. See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership. The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License. You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
 
+<configuration>
+  <property>
+    <name>hbase_regionserver_heapsize</name>
+    <value>1024</value>
+    <description>HBase RegionServer Heap Size.</description>
+  </property>
+  <property>
+    <name>hbase_regionserver_xmn_max</name>
+    <value>512</value>
+    <description>Upper bound for the HBase RegionServer young generation (-Xmn) size.</description>
+  </property>
+  <property>
+    <name>hbase_regionserver_xmn_ratio</name>
+    <value>0.2</value>
+    <description>The HBase RegionServer young generation (-Xmn) size is calculated as this fraction of the RegionServer heap size, capped by hbase_regionserver_xmn_max.</description>
+  </property>
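+  <!-- A sketch of the derivation (see calc_xmn_from_xms in
+       package/scripts/functions.py): -Xmn is approximately
+       min(hbase_regionserver_heapsize * hbase_regionserver_xmn_ratio,
+           hbase_regionserver_xmn_max). -->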
+  <property>
+    <name>hbase_master_heapsize</name>
+    <value>1024</value>
+    <description>HBase Master Heap Size</description>
+  </property>
+
+  <!-- hbase-env.sh -->
+  <property>
+    <name>content</name>
+    <description>This is the Jinja template for the hbase-env.sh file</description>
+    <value>
 # Set environment variables here.
 
 # The java implementation to use. Java 1.6 required.
@@ -79,3 +111,7 @@
 export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Djava.security.auth.login.config={{master_jaas_config_file}}"
 export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Djava.security.auth.login.config={{regionserver_jaas_config_file}}"
 {% endif %}
+    </value>
+  </property>
+
+</configuration>
diff --git a/app-packages/hbase/configuration/hbase-log4j.xml b/app-packages/hbase/configuration/hbase-log4j.xml
index 3bbc549..d488c4e 100644
--- a/app-packages/hbase/configuration/hbase-log4j.xml
+++ b/app-packages/hbase/configuration/hbase-log4j.xml
@@ -24,6 +24,7 @@
 
   <property>
     <name>content</name>
+    <description>Custom log4j.properties</description>
     <value>
 # Licensed to the Apache Software Foundation (ASF) under one
 # or more contributor license agreements.  See the NOTICE file
diff --git a/app-packages/hbase/configuration/hbase-site.xml b/app-packages/hbase/configuration/hbase-site.xml
index cf9416e..a9711d3 100644
--- a/app-packages/hbase/configuration/hbase-site.xml
+++ b/app-packages/hbase/configuration/hbase-site.xml
@@ -43,6 +43,11 @@
     </description>
   </property>
   <property>
+    <name>hbase.master.port</name>
+    <value>60000</value>
+    <description>The port the HBase Master should bind to.</description>
+  </property>
+  <property>
     <name>hbase.tmp.dir</name>
     <value>/hadoop/hbase</value>
     <description>Temporary directory on the local filesystem.
@@ -59,18 +64,18 @@
   </property>
   <property>
     <name>hbase.master.info.bindAddress</name>
-    <value></value>
+    <value>0.0.0.0</value>
     <description>The bind address for the HBase Master web UI
     </description>
   </property>
   <property>
     <name>hbase.master.info.port</name>
-    <value></value>
+    <value>60010</value>
     <description>The port for the HBase Master web UI.</description>
   </property>
   <property>
     <name>hbase.regionserver.info.port</name>
-    <value></value>
+    <value>60030</value>
     <description>The port for the HBase RegionServer web UI.</description>
   </property>
   <property>
@@ -222,14 +227,14 @@
        values, included here for documentation purposes -->
   <property>
     <name>hbase.master.keytab.file</name>
-    <value></value>
+    <value>/etc/security/keytabs/hbase.service.keytab</value>
     <description>Full path to the kerberos keytab file to use for logging in
     the configured HMaster server principal.
     </description>
   </property>
   <property>
     <name>hbase.master.kerberos.principal</name>
-    <value></value>
+    <value>hbase/_HOST@EXAMPLE.COM</value>
     <description>Ex. "hbase/_HOST@EXAMPLE.COM".  The kerberos principal name
     that should be used to run the HMaster process.  The principal name should
     be in the form: user/hostname@DOMAIN.  If "_HOST" is used as the hostname
@@ -239,14 +244,14 @@
   </property>
   <property>
     <name>hbase.regionserver.keytab.file</name>
-    <value></value>
+    <value>/etc/security/keytabs/hbase.service.keytab</value>
     <description>Full path to the kerberos keytab file to use for logging in
     the configured HRegionServer server principal.
     </description>
   </property>
   <property>
     <name>hbase.regionserver.kerberos.principal</name>
-    <value></value>
+    <value>hbase/_HOST@EXAMPLE.COM</value>
     <description>Ex. "hbase/_HOST@EXAMPLE.COM".  The kerberos principal name
     that should be used to run the HRegionServer process.  The principal name
     should be in the form: user/hostname@DOMAIN.  If "_HOST" is used as the
diff --git a/app-packages/hbase/ganglia_metrics.json b/app-packages/hbase/ganglia_metrics.json
new file mode 100644
index 0000000..da73d48
--- /dev/null
+++ b/app-packages/hbase/ganglia_metrics.json
@@ -0,0 +1,38 @@
+{
+    "Component": {
+        "HBASE_REGIONSERVER": {
+            "readRequestsCount": {
+                "metric": "regionserver.Server.readRequestCount",
+                "pointInTime": false,
+                "temporal": true
+            },
+            "regions": {
+                "metric": "regionserver.Server.regionCount",
+                "pointInTime": false,
+                "temporal": true
+            },
+            "flushQueueSize": {
+                "metric": "regionserver.Server.flushQueueLength",
+                "pointInTime": false,
+                "temporal": true
+            }
+        },
+        "HBASE_MASTER": {
+            "cluster_requests": {
+                "metric": "master.Server.clusterRequests",
+                "pointInTime": false,
+                "temporal": true
+            },
+            "splitTime_avg_time": {
+                "metric": "master.FileSystem.HlogSplitTime_mean",
+                "pointInTime": false,
+                "temporal": true
+            },
+            "splitSize_avg_time": {
+                "metric": "master.FileSystem.HlogSplitSize_mean",
+                "pointInTime": false,
+                "temporal": true
+            }
+        }
+    }
+}
diff --git a/app-packages/hbase/metainfo.xml b/app-packages/hbase/metainfo.xml
index aae048d..d5e07a7 100644
--- a/app-packages/hbase/metainfo.xml
+++ b/app-packages/hbase/metainfo.xml
@@ -25,9 +25,10 @@
       1. Ensure parent dir for path (hbase-site/hbase.rootdir) is accessible to the App owner.
       2. Ensure ZK root (hbase-site/zookeeper.znode.parent) is unique for the App instance.
     </comment>
-    <version>${hbase.version}</version>
+    <version>${pkg.version}</version>
     <type>YARN-APP</type>
     <minHadoopVersion>2.1.0</minHadoopVersion>
+    <exportedConfigs>hbase-site</exportedConfigs>
     <exportGroups>
       <exportGroup>
         <name>QuickLinks</name>
@@ -53,11 +54,11 @@
             <value>http://${HBASE_THRIFT_HOST}:${site.global.hbase_thrift_port}</value>
           </export>
           <export>
-            <name>app.metrics</name>
+            <name>org.apache.slider.metrics</name>
             <value>http://${site.global.ganglia_server_host}/cgi-bin/rrd.py?c=${site.global.ganglia_server_id}</value>
           </export>
           <export>
-            <name>app.ganglia</name>
+            <name>org.apache.slider.metrics.ui</name>
             <value>http://${site.global.ganglia_server_host}/ganglia?c=${site.global.ganglia_server_id}</value>
           </export>
         </exports>
@@ -80,15 +81,14 @@
         <name>HBASE_MASTER</name>
         <category>MASTER</category>
         <minInstanceCount>1</minInstanceCount>
-        <maxInstanceCount>2</maxInstanceCount>
-        <appExports>QuickLinks-org.apache.slider.jmx,QuickLinks-org.apache.slider.monitor,QuickLinks-app.metrics,QuickLinks-app.ganglia</appExports>
+        <appExports>QuickLinks-org.apache.slider.jmx,QuickLinks-org.apache.slider.monitor,QuickLinks-org.apache.slider.metrics,QuickLinks-org.apache.slider.metrics.ui</appExports>
         <componentExports>
           <componentExport>
-            <name>app.jmx</name>
+            <name>org.apache.slider.jmx</name>
             <value>${THIS_HOST}:${site.hbase-site.hbase.master.info.port}/jmx</value>
           </componentExport>
           <componentExport>
-            <name>app.monitor</name>
+            <name>org.apache.slider.monitor</name>
             <value>${THIS_HOST}:${site.hbase-site.hbase.master.info.port}/master-status</value>
           </componentExport>
         </componentExports>
@@ -112,7 +112,6 @@
       <component>
         <name>HBASE_REST</name>
         <category>MASTER</category>
-        <minInstanceCount>0</minInstanceCount>
         <appExports>QuickLinks-org.apache.slider.hbase.rest</appExports>
         <commandScript>
           <script>scripts/hbase_rest.py</script>
@@ -123,7 +122,6 @@
       <component>
         <name>HBASE_THRIFT</name>
         <category>MASTER</category>
-        <minInstanceCount>0</minInstanceCount>
         <appExports>QuickLinks-org.apache.slider.hbase.thrift</appExports>
         <commandScript>
           <script>scripts/hbase_thrift.py</script>
@@ -145,7 +143,6 @@
       <component>
         <name>HBASE_CLIENT</name>
         <category>CLIENT</category>
-        <minInstanceCount>0</minInstanceCount>
         <commandScript>
           <script>scripts/hbase_client.py</script>
           <scriptType>PYTHON</scriptType>
@@ -159,11 +156,34 @@
         <packages>
           <package>
             <type>tarball</type>
-            <name>files/hbase-${hbase.version}-bin.tar.gz</name>
+            <name>files/hbase-${pkg.version}.tar.gz</name>
           </package>
         </packages>
       </osSpecific>
     </osSpecifics>
 
+    <configFiles>
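+      <!-- Declares the configuration files associated with this app; each entry
+           is backed by the named configuration dictionary (e.g. hbase-site). -->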
+      <configFile>
+        <type>xml</type>
+        <fileName>hbase-site.xml</fileName>
+        <dictionaryName>hbase-site</dictionaryName>
+      </configFile>
+      <configFile>
+        <type>env</type>
+        <fileName>hbase-env.sh</fileName>
+        <dictionaryName>hbase-env</dictionaryName>
+      </configFile>
+      <configFile>
+        <type>env</type>
+        <fileName>hbase-log4j.properties</fileName>
+        <dictionaryName>hbase-log4j</dictionaryName>
+      </configFile>
+      <configFile>
+        <type>xml</type>
+        <fileName>hbase-policy.xml</fileName>
+        <dictionaryName>hbase-policy</dictionaryName>
+      </configFile>
+    </configFiles>
+
   </application>
 </metainfo>
diff --git a/app-packages/hbase/package/scripts/hbase.py b/app-packages/hbase/package/scripts/hbase.py
index ed6ec51..35897df 100644
--- a/app-packages/hbase/package/scripts/hbase.py
+++ b/app-packages/hbase/package/scripts/hbase.py
@@ -21,11 +21,13 @@
 
 from resource_management import *
 import sys
+import shutil
 
 def hbase(name=None # 'master' or 'regionserver' or 'client'
               ):
   import params
 
+  """
   if name in ["master","regionserver"]:
     params.HdfsDirectory(params.hbase_hdfs_root_dir,
                          action="create_delayed"
@@ -35,10 +37,13 @@
                          mode=0711
     )
     params.HdfsDirectory(None, action="create")
+  """
+
   Directory( params.conf_dir,
       owner = params.hbase_user,
       group = params.user_group,
-      recursive = True
+      recursive = True,
+      content = params.input_conf_files_dir
   )
 
   Directory (params.tmp_dir,
@@ -60,18 +65,13 @@
             group = params.user_group
   )
 
-  XmlConfig( "hdfs-site.xml",
-            conf_dir = params.conf_dir,
-            configurations = params.config['configurations']['hdfs-site'],
-            owner = params.hbase_user,
-            group = params.user_group
-  )
-
+ 
   if 'hbase-policy' in params.config['configurations']:
     XmlConfig( "hbase-policy.xml",
-      configurations = params.config['configurations']['hbase-policy'],
-      owner = params.hbase_user,
-      group = params.user_group
+            conf_dir = params.conf_dir,
+            configurations = params.config['configurations']['hbase-policy'],
+            owner = params.hbase_user,
+            group = params.user_group
     )
   # Manually overriding ownership of file installed by hadoop package
   else: 
@@ -80,8 +80,10 @@
       group = params.user_group
     )
   
-  hbase_TemplateConfig( 'hbase-env.sh')
-
+  File(format("{conf_dir}/hbase-env.sh"),
+       owner = params.hbase_user,
+       content=InlineTemplate(params.hbase_env_sh_template)
+  )     
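+  # hbase-env.sh is now rendered from the 'content' property of the hbase-env
+  # configuration (see configuration/hbase-env.xml) rather than from a packaged
+  # hbase-env.sh.j2 template.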
   hbase_TemplateConfig( params.metric_prop_file_name,
                         tag = 'GANGLIA-MASTER' if name == 'master' else 'GANGLIA-RS'
   )
@@ -107,7 +109,7 @@
          owner=params.hbase_user,
          content=params.log4j_props
     )
-  elif (os.path.exists(format("{params.conf_dir}/log4j.properties"))):
+  elif (os.path.exists(format("{conf_dir}/log4j.properties"))):
     File(format("{params.conf_dir}/log4j.properties"),
       mode=0644,
       group=params.user_group,
diff --git a/app-packages/hbase/package/scripts/hbase_service.py b/app-packages/hbase/package/scripts/hbase_service.py
index 96add84..db663b8 100644
--- a/app-packages/hbase/package/scripts/hbase_service.py
+++ b/app-packages/hbase/package/scripts/hbase_service.py
@@ -34,7 +34,7 @@
     no_op_test = None
     
     if action == 'start':
-      daemon_cmd = format("{cmd} start {role}")
+      daemon_cmd = format("env HBASE_IDENT_STRING={hbase_user} {cmd} start {role}")
       if name == 'rest':
         daemon_cmd = format("{daemon_cmd} -p {rest_port}")
       elif name == 'thrift':
@@ -43,7 +43,7 @@
         daemon_cmd = format("{daemon_cmd} -p {thrift2_port}")
       no_op_test = format("ls {pid_file} >/dev/null 2>&1 && ps `cat {pid_file}` >/dev/null 2>&1")
     elif action == 'stop':
-      daemon_cmd = format("{cmd} stop {role} && rm -f {pid_file}")
+      daemon_cmd = format("env HBASE_IDENT_STRING={hbase_user} {cmd} stop {role} && rm -f {pid_file}")
 
     if daemon_cmd is not None:
       Execute ( daemon_cmd,
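The env HBASE_IDENT_STRING wrapper above pins the identity string that hbase-daemon.sh embeds in its pid and log file names, so start and stop resolve the same pid file regardless of which user runs the agent. With hypothetical values (hbase_user=hbase, cmd resolving to the daemon script), the rendered commands would look like:

    env HBASE_IDENT_STRING=hbase /path/to/bin/hbase-daemon.sh start regionserver
    env HBASE_IDENT_STRING=hbase /path/to/bin/hbase-daemon.sh stop regionserver && rm -f /path/to/pid
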
diff --git a/app-packages/hbase/package/scripts/params.py b/app-packages/hbase/package/scripts/params.py
index 1f25f68..33e2bbf 100644
--- a/app-packages/hbase/package/scripts/params.py
+++ b/app-packages/hbase/package/scripts/params.py
@@ -41,14 +41,17 @@
 java64_home = config['hostLevelParams']['java_home']
 
 log_dir = config['configurations']['global']['app_log_dir']
-master_heapsize = config['configurations']['global']['hbase_master_heapsize']
+master_heapsize = config['configurations']['hbase-env']['hbase_master_heapsize']
 
-regionserver_heapsize = config['configurations']['global']['hbase_regionserver_heapsize']
-regionserver_xmn_size = calc_xmn_from_xms(regionserver_heapsize, 0.2, 512)
+regionserver_heapsize = config['configurations']['hbase-env']['hbase_regionserver_heapsize']
+regionserver_xmn_max = config['configurations']['hbase-env']['hbase_regionserver_xmn_max']
+regionserver_xmn_percent = config['configurations']['hbase-env']['hbase_regionserver_xmn_ratio']
+regionserver_xmn_size = calc_xmn_from_xms(regionserver_heapsize, regionserver_xmn_percent, regionserver_xmn_max)
 
 pid_dir = status_params.pid_dir
 tmp_dir = config['configurations']['hbase-site']['hbase.tmp.dir']
 local_dir = substitute_vars(config['configurations']['hbase-site']['hbase.local.dir'], config['configurations']['hbase-site'])
+input_conf_files_dir = config['configurations']['global']['app_input_conf_dir']
 
 client_jaas_config_file = default('hbase_client_jaas_config_file', format("{conf_dir}/hbase_client_jaas.conf"))
 master_jaas_config_file = default('hbase_master_jaas_config_file', format("{conf_dir}/hbase_master_jaas.conf"))
@@ -62,20 +65,10 @@
 thrift2_port = config['configurations']['global']['hbase_thrift2_port']
 
 if security_enabled:
-  
-  _use_hostname_in_principal = default('instance_name', True)
-  _master_primary_name = config['configurations']['global']['hbase_master_primary_name']
   _hostname_lowercase = config['hostname'].lower()
-  _kerberos_domain = config['configurations']['global']['kerberos_domain']
-  _master_principal_name = config['configurations']['global']['hbase_master_principal_name']
-  _regionserver_primary_name = config['configurations']['global']['hbase_regionserver_primary_name']
-  
-  if _use_hostname_in_principal:
-    master_jaas_princ = format("{_master_primary_name}/{_hostname_lowercase}@{_kerberos_domain}")
-    regionserver_jaas_princ = format("{_regionserver_primary_name}/{_hostname_lowercase}@{_kerberos_domain}")
-  else:
-    master_jaas_princ = format("{_master_principal_name}@{_kerberos_domain}")
-    regionserver_jaas_princ = format("{_regionserver_primary_name}@{_kerberos_domain}")
+  master_jaas_princ = config['configurations']['hbase-site']['hbase.master.kerberos.principal'].replace('_HOST',_hostname_lowercase)
+  regionserver_jaas_princ = config['configurations']['hbase-site']['hbase.regionserver.kerberos.principal'].replace('_HOST',_hostname_lowercase)
+
     
 master_keytab_path = config['configurations']['hbase-site']['hbase.master.keytab.file']
 regionserver_keytab_path = config['configurations']['hbase-site']['hbase.regionserver.keytab.file']
@@ -91,6 +84,7 @@
 else:
   log4j_props = None
 
+hbase_env_sh_template = config['configurations']['hbase-env']['content']
 
 hbase_hdfs_root_dir = config['configurations']['hbase-site']['hbase.rootdir']
 hbase_staging_dir = config['configurations']['hbase-site']['hbase.stagingdir']
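The security block now derives the JAAS principals from the hbase-site principal settings, expanding the standard Hadoop _HOST placeholder, rather than assembling them from separate global properties. A quick illustration with hypothetical values:

    principal = 'hbase/_HOST@EXAMPLE.COM'   # hbase.master.kerberos.principal
    hostname = 'Master1.Example.COM'        # config['hostname']
    principal.replace('_HOST', hostname.lower())
    # -> 'hbase/master1.example.com@EXAMPLE.COM'
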
diff --git a/app-packages/hbase/pom.xml b/app-packages/hbase/pom.xml
index 7dede6c..6caef05 100644
--- a/app-packages/hbase/pom.xml
+++ b/app-packages/hbase/pom.xml
@@ -1,5 +1,6 @@
 <?xml version="1.0"?>
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
   <!--
    Licensed to the Apache Software Foundation (ASF) under one or more
    contributor license agreements.  See the NOTICE file distributed with
@@ -19,17 +20,17 @@
   <parent>
     <groupId>org.apache.slider</groupId>
     <artifactId>slider</artifactId>
-    <version>0.41.0-incubating-SNAPSHOT</version>
+    <version>0.60.0-incubating</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>slider-hbase-app-package</artifactId>
-  <packaging>jar</packaging>
+  <packaging>pom</packaging>
   <name>Slider HBase App Package</name>
   <description>Slider HBase App Package</description>
   <properties>
     <work.dir>package-tmp</work.dir>
-    <app.package.name>apache-slider-hbase-${hbase.version}-app-package-${project.version}</app.package.name>
+    <app.package.name>${project.artifactId}-${project.version}</app.package.name>
   </properties>
 
   <profiles>
@@ -39,6 +40,63 @@
         <plugins>
           <plugin>
             <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-antrun-plugin</artifactId>
+            <version>${maven-antrun-plugin.version}</version>
+            <executions>
+              <execution>
+                <id>copy</id>
+                <phase>validate</phase>
+                <configuration>
+                  <target name="copy and rename file">
+                    <copy file="${pkg.src}/${pkg.name}" tofile="${project.build.directory}/${pkg.name}"/>
+                  </target>
+                </configuration>
+                <goals>
+                  <goal>run</goal>
+                </goals>
+              </execution>
+            </executions>
+          </plugin>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-assembly-plugin</artifactId>
+            <configuration>
+              <tarLongFileMode>gnu</tarLongFileMode>
+              <descriptor>src/assembly/hbase.xml</descriptor>
+              <appendAssemblyId>false</appendAssemblyId>
+            </configuration>
+            <executions>
+              <execution>
+                <id>build-tarball</id>
+                <phase>package</phase>
+                <goals>
+                  <goal>single</goal>
+                </goals>
+              </execution>
+            </executions>
+          </plugin>
+
+        </plugins>
+      </build>
+    </profile>
+    <profile>
+      <id>hbase-app-package-it</id>
+      <properties>
+        <pkg.version>${hbase.version}</pkg.version>
+        <pkg.src>${project.build.directory}/${work.dir}</pkg.src>
+        <pkg.name>hbase-${hbase.version}-bin.tar.gz</pkg.name>
+      </properties>
+      <build>
+        <resources>
+          <resource>
+            <directory>src/test/resources</directory>
+            <filtering>true</filtering>
+            <targetPath>${project.build.directory}/test-config</targetPath>
+          </resource>
+        </resources>
+        <plugins>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
             <artifactId>maven-assembly-plugin</artifactId>
             <configuration>
               <descriptor>src/assembly/hbase.xml</descriptor>
@@ -95,151 +153,143 @@
                 <java.awt.headless>true</java.awt.headless>
                 <!-- this property must be supplied-->
                 <slider.conf.dir>${slider.conf.dir}</slider.conf.dir>
-                <slider.bin.dir>../../slider-assembly/target/slider-${project.version}-all/slider-${project.version}</slider.bin.dir>
+                <slider.bin.dir>../../slider-assembly/target/slider-${project.version}-all/slider-${project.version}
+                </slider.bin.dir>
                 <test.app.pkg.dir>target</test.app.pkg.dir>
                 <test.app.pkg.file>${app.package.name}.zip</test.app.pkg.file>
-                <test.app.resource>target/test-config/resources.json</test.app.resource>
-                <test.app.template>target/${app.package.name}/appConfig.json</test.app.template>
+                <test.app.resource>target/test-config/resources-default.json</test.app.resource>
+                <test.app.template>target/${app.package.name}/appConfig-default.json</test.app.template>
               </systemPropertyVariables>
             </configuration>
           </plugin>
+          <plugin>
+            <artifactId>maven-compiler-plugin</artifactId>
+            <dependencies>
+              <dependency>
+                <groupId>org.codehaus.groovy</groupId>
+                <artifactId>groovy-eclipse-compiler</artifactId>
+                <version>${groovy-eclipse-compiler.version}</version>
+              </dependency>
+              <dependency>
+                <groupId>org.codehaus.groovy</groupId>
+                <artifactId>groovy-eclipse-batch</artifactId>
+                <version>${groovy-eclipse-batch.version}</version>
+              </dependency>
+            </dependencies>
+          </plugin>
+
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-surefire-plugin</artifactId>
+            <configuration>
+              <!-- can't figure out how to get the surefire plugin not to pick up the ITs, so skip it entirely -->
+              <skip>true</skip>
+            </configuration>
+          </plugin>
         </plugins>
       </build>
+      <dependencies>
+        <dependency>
+          <groupId>org.apache.hbase</groupId>
+          <artifactId>hbase</artifactId>
+          <version>${hbase.version}</version>
+          <classifier>bin</classifier>
+          <type>tar.gz</type>
+        </dependency>
+        <dependency>
+          <groupId>junit</groupId>
+          <artifactId>junit</artifactId>
+          <scope>test</scope>
+        </dependency>
+        <dependency>
+          <groupId>org.apache.hbase</groupId>
+          <artifactId>hbase-client</artifactId>
+        </dependency>
+
+        <dependency>
+          <groupId>org.apache.hadoop</groupId>
+          <artifactId>hadoop-minicluster</artifactId>
+          <scope>test</scope>
+        </dependency>
+
+        <dependency>
+          <groupId>org.apache.hbase</groupId>
+          <artifactId>hbase-server</artifactId>
+        </dependency>
+
+        <dependency>
+          <groupId>org.apache.hbase</groupId>
+          <artifactId>hbase-protocol</artifactId>
+        </dependency>
+
+        <dependency>
+          <groupId>org.apache.hbase</groupId>
+          <artifactId>hbase-common</artifactId>
+        </dependency>
+
+        <dependency>
+          <groupId>org.apache.hbase</groupId>
+          <artifactId>hbase-common</artifactId>
+          <classifier>tests</classifier>
+          <scope>test</scope>
+        </dependency>
+
+        <dependency>
+          <groupId>org.apache.hbase</groupId>
+          <artifactId>hbase-it</artifactId>
+          <classifier>tests</classifier>
+          <exclusions>
+            <exclusion>
+              <groupId>org.apache.hadoop</groupId>
+              <artifactId>hadoop-client</artifactId>
+            </exclusion>
+          </exclusions>
+          <scope>test</scope>
+        </dependency>
+
+        <dependency>
+          <groupId>org.apache.hbase</groupId>
+          <artifactId>hbase-hadoop-compat</artifactId>
+          <classifier>tests</classifier>
+          <scope>test</scope>
+        </dependency>
+
+        <dependency>
+          <groupId>org.apache.hbase</groupId>
+          <artifactId>hbase-hadoop2-compat</artifactId>
+          <classifier>tests</classifier>
+          <scope>test</scope>
+        </dependency>
+
+        <dependency>
+          <groupId>org.apache.hbase</groupId>
+          <artifactId>hbase-server</artifactId>
+          <classifier>tests</classifier>
+          <scope>test</scope>
+        </dependency>
+        <dependency>
+          <groupId>org.apache.slider</groupId>
+          <artifactId>slider-core</artifactId>
+          <scope>test</scope>
+        </dependency>
+        <dependency>
+          <groupId>org.apache.slider</groupId>
+          <artifactId>slider-funtest</artifactId>
+          <scope>test</scope>
+        </dependency>
+        <dependency>
+          <groupId>org.codehaus.groovy</groupId>
+          <artifactId>groovy-all</artifactId>
+          <scope>test</scope>
+        </dependency>
+      </dependencies>
     </profile>
   </profiles>
 
   <build>
-    <!-- resources are filtered for dynamic updates. This gets build info in-->
-    <resources>
-      <resource>
-        <directory>src/test/resources</directory>
-        <filtering>true</filtering>
-        <targetPath>${project.build.directory}/test-config</targetPath>
-      </resource>
-    </resources>
-
-    <plugins>
-      <plugin>
-        <artifactId>maven-compiler-plugin</artifactId>
-        <dependencies>
-          <dependency>
-            <groupId>org.codehaus.groovy</groupId>
-            <artifactId>groovy-eclipse-compiler</artifactId>
-            <version>${groovy-eclipse-compiler.version}</version>
-          </dependency>
-          <dependency>
-            <groupId>org.codehaus.groovy</groupId>
-            <artifactId>groovy-eclipse-batch</artifactId>
-            <version>${groovy-eclipse-batch.version}</version>
-          </dependency>
-        </dependencies>
-      </plugin>
-
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-surefire-plugin</artifactId>
-        <configuration>
-          <!-- can't figure out how to get the surefire plugin not to pick up the ITs, so skip it entirely -->
-          <skip>true</skip>
-        </configuration>
-      </plugin>
-    </plugins>
   </build>
 
   <dependencies>
-    <dependency>
-      <groupId>org.apache.hbase</groupId>
-      <artifactId>hbase</artifactId>
-      <version>${hbase.version}</version>
-      <classifier>bin</classifier>
-      <type>tar.gz</type>
-    </dependency>
-    <dependency>
-      <groupId>junit</groupId>
-      <artifactId>junit</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-       <groupId>org.apache.hbase</groupId>
-       <artifactId>hbase-client</artifactId>
-    </dependency>
-
-    <dependency>
-      <groupId>org.apache.hadoop</groupId>
-      <artifactId>hadoop-minicluster</artifactId>
-      <scope>test</scope>
-    </dependency>
-    
-    <dependency>
-      <groupId>org.apache.hbase</groupId>
-      <artifactId>hbase-server</artifactId>
-    </dependency>
-
-    <dependency>
-      <groupId>org.apache.hbase</groupId>
-      <artifactId>hbase-protocol</artifactId>
-    </dependency>
-
-    <dependency>
-      <groupId>org.apache.hbase</groupId>
-      <artifactId>hbase-common</artifactId>
-    </dependency>
-
-    <dependency>
-      <groupId>org.apache.hbase</groupId>
-      <artifactId>hbase-common</artifactId>
-      <classifier>tests</classifier>
-      <scope>test</scope>
-    </dependency>
-
-    <dependency>
-      <groupId>org.apache.hbase</groupId>
-      <artifactId>hbase-it</artifactId>
-      <classifier>tests</classifier>
-        <exclusions>
-          <exclusion>
-            <groupId>org.apache.hadoop</groupId>
-            <artifactId>hadoop-client</artifactId>
-          </exclusion>
-        </exclusions>
-      <scope>test</scope>
-    </dependency>
-
-    <dependency>
-      <groupId>org.apache.hbase</groupId>
-      <artifactId>hbase-hadoop-compat</artifactId>
-      <classifier>tests</classifier>
-      <scope>test</scope>
-    </dependency>
-
-    <dependency>
-      <groupId>org.apache.hbase</groupId>
-      <artifactId>hbase-hadoop2-compat</artifactId>
-      <classifier>tests</classifier>
-      <scope>test</scope>
-    </dependency>
-
-    <dependency>
-      <groupId>org.apache.hbase</groupId>
-      <artifactId>hbase-server</artifactId>
-      <classifier>tests</classifier>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.slider</groupId>
-      <artifactId>slider-core</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.slider</groupId>
-      <artifactId>slider-funtest</artifactId>
-      <scope>test</scope>
-    </dependency>
-    <dependency>
-      <groupId>org.codehaus.groovy</groupId>
-      <artifactId>groovy-all</artifactId>
-      <scope>test</scope>
-    </dependency>
   </dependencies>
 
 </project>
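With the antrun copy step and the assembly descriptor both keyed off ${pkg.src}/${pkg.name}, the HBase app package build is driven the same way as the storm-win build described later in this patch. Assuming the main profile is named hbase-app-package (its <id> falls outside this hunk), the invocation would be along these lines:

    mvn clean package -Phbase-app-package -Dpkg.version=<hbase version> \
        -Dpkg.name=hbase-<hbase version>-bin.tar.gz -Dpkg.src=<folder containing the tarball>
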
diff --git a/app-packages/hbase/resources.json b/app-packages/hbase/resources-default.json
similarity index 69%
copy from app-packages/hbase/resources.json
copy to app-packages/hbase/resources-default.json
index d2fdbd8..b1da1f7 100644
--- a/app-packages/hbase/resources.json
+++ b/app-packages/hbase/resources-default.json
@@ -3,34 +3,37 @@
   "metadata": {
   },
   "global": {
+    "yarn.log.include.patterns": "",
+    "yarn.log.exclude.patterns": ""
   },
   "components": {
     "HBASE_MASTER": {
       "yarn.role.priority": "1",
       "yarn.component.instances": "1",
-      "yarn.memory": "256"
+      "yarn.memory": "1500"
     },
     "slider-appmaster": {
+      "yarn.memory": "1024"
     },
     "HBASE_REGIONSERVER": {
       "yarn.role.priority": "2",
       "yarn.component.instances": "1",
-      "yarn.memory": "256"
+      "yarn.memory": "1500"
     },
     "HBASE_REST": {
       "yarn.role.priority": "3",
       "yarn.component.instances": "1",
-      "yarn.memory": "256"
+      "yarn.memory": "556"
     },
     "HBASE_THRIFT": {
       "yarn.role.priority": "4",
-      "yarn.component.instances": "1",
-      "yarn.memory": "256"
+      "yarn.component.instances": "0",
+      "yarn.memory": "556"
     },
     "HBASE_THRIFT2": {
       "yarn.role.priority": "5",
       "yarn.component.instances": "1",
-      "yarn.memory": "256"
+      "yarn.memory": "556"
     }
   }
 }
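Note that the yarn.memory values above are per-container allocations in MB: YARN rounds each request up to the scheduler's minimum allocation and rejects requests above yarn.scheduler.maximum-allocation-mb, so these defaults assume a cluster that can hand out containers of at least 1.5 GB.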
diff --git a/app-packages/hbase/src/assembly/hbase.xml b/app-packages/hbase/src/assembly/hbase.xml
index ff1c395..a74304c 100644
--- a/app-packages/hbase/src/assembly/hbase.xml
+++ b/app-packages/hbase/src/assembly/hbase.xml
@@ -30,7 +30,13 @@
 
   <files>
     <file>
-      <source>appConfig.json</source>
+      <source>appConfig-default.json</source>
+      <outputDirectory>/</outputDirectory>
+      <filtered>true</filtered>
+      <fileMode>0755</fileMode>
+    </file>
+    <file>
+      <source>appConfig-secured-default.json</source>
       <outputDirectory>/</outputDirectory>
       <filtered>true</filtered>
       <fileMode>0755</fileMode>
@@ -41,6 +47,12 @@
       <filtered>true</filtered>
       <fileMode>0755</fileMode>
     </file>
+    <file>
+      <source>${pkg.src}/${pkg.name}</source>
+      <outputDirectory>package/files</outputDirectory>
+      <filtered>false</filtered>
+      <fileMode>0755</fileMode>
+    </file>
   </files>
 
   <fileSets>
@@ -51,22 +63,13 @@
         <exclude>pom.xml</exclude>
         <exclude>src/**</exclude>
         <exclude>target/**</exclude>
-        <exclude>appConfig.json</exclude>
+        <exclude>appConfig-default.json</exclude>
+        <exclude>appConfig-secured-default.json</exclude>
         <exclude>metainfo.xml</exclude>
       </excludes>
       <fileMode>0755</fileMode>
       <directoryMode>0755</directoryMode>
     </fileSet>
 
-    <fileSet>
-      <directory>${project.build.directory}/${work.dir}</directory>
-      <outputDirectory>package/files</outputDirectory>
-      <includes>
-        <include>hbase-${hbase.version}-bin.tar.gz</include>
-      </includes>
-      <fileMode>0755</fileMode>
-      <directoryMode>0755</directoryMode>
-    </fileSet>
-
   </fileSets>
 </assembly>
diff --git a/app-packages/hbase/src/test/resources/appConfig_monitor_ssl.json b/app-packages/hbase/src/test/resources/appConfig_monitor_ssl.json
index 37d72d0..73b33ed 100644
--- a/app-packages/hbase/src/test/resources/appConfig_monitor_ssl.json
+++ b/app-packages/hbase/src/test/resources/appConfig_monitor_ssl.json
@@ -19,7 +19,6 @@
     "site.global.hbase_instance_name": "instancename",
     "site.global.hbase_root_password": "secret",
     "site.global.user_group": "hadoop",
-    "site.global.security_enabled": "false",
     "site.global.monitor_protocol": "https",
     "site.global.ganglia_server_host": "${NN_HOST}",
     "site.global.ganglia_server_port": "8667",
diff --git a/app-packages/hbase/src/test/resources/resources.json b/app-packages/hbase/src/test/resources/resources-default.json
similarity index 84%
rename from app-packages/hbase/src/test/resources/resources.json
rename to app-packages/hbase/src/test/resources/resources-default.json
index e0ff26f..4fedf01 100644
--- a/app-packages/hbase/src/test/resources/resources.json
+++ b/app-packages/hbase/src/test/resources/resources-default.json
@@ -3,6 +3,8 @@
   "metadata": {
   },
   "global": {
+    "yarn.log.include.patterns": "",
+    "yarn.log.exclude.patterns": ""
   },
   "components": {
     "HBASE_MASTER": {
diff --git a/app-packages/memcached-win/README.txt b/app-packages/memcached-win/README.txt
index 4d93b91..84d2728 100644
--- a/app-packages/memcached-win/README.txt
+++ b/app-packages/memcached-win/README.txt
@@ -30,7 +30,6 @@
 Verify the content using  
   unzip -l "$@" jmemcached-1.0.0.zip
 
-While appConfig.json and resources.json are not required for the package they work
-well as the default configuration for Slider apps. So its advisable that when you
-create an application package for Slider, include sample/default resources.json and
-appConfig.json for a minimal Yarn cluster.
+appConfig-default.json and resources-default.json are not required to be packaged.
+These files are included as reference configuration for Slider apps and are suitable
+for a one-node cluster.
diff --git a/app-packages/memcached-win/appConfig-default.json b/app-packages/memcached-win/appConfig-default.json
new file mode 100644
index 0000000..8a5ffd0
--- /dev/null
+++ b/app-packages/memcached-win/appConfig-default.json
@@ -0,0 +1,21 @@
+{
+  "schema": "http://example.org/specification/v2.0.0",
+  "metadata": {
+  },
+  "global": {
+    "application.def": ".slider/package/MEMCACHED/jmemcached-1.0.0.zip",
+    "java_home": "C:\\java",
+
+    "site.global.additional_cp": "C:\\hdp\\hadoop-2.4.0.2.1.3.0-1990\\share\\hadoop\\common\\lib\\*",
+    "site.global.xmx_val": "256m",
+    "site.global.xms_val": "128m",
+    "site.global.memory_val": "200M",
+    "site.global.listen_port": "${MEMCACHED.ALLOCATED_PORT}{PER_CONTAINER}"
+
+  },
+  "components": {
+    "slider-appmaster": {
+      "jvm.heapsize": "256M"
+    }
+  }
+}
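On the listen_port value above: ${MEMCACHED.ALLOCATED_PORT} asks Slider to allocate a free port at container launch, and the {PER_CONTAINER} qualifier (replacing the earlier {DO_NOT_PROPAGATE}) gives every container its own allocation instead of sharing one value across the component.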
diff --git a/app-packages/memcached-win/appConfig.json b/app-packages/memcached-win/appConfig.json
deleted file mode 100644
index b76ecde..0000000
--- a/app-packages/memcached-win/appConfig.json
+++ /dev/null
@@ -1,26 +0,0 @@
-{
-  "schema": "http://example.org/specification/v2.0.0",
-  "metadata": {
-  },
-  "global": {
-    "application.def": "/slider/jmemcached-1.0.0.zip",
-    "java_home": "C:\\java",
-
-    "site.global.app_user": "hadoop",
-    "site.global.app_root": "${AGENT_WORK_ROOT}\\app\\install",
-    "site.global.pid_file": "${AGENT_WORK_ROOT}\\app\\run\\component.pid",
-    "site.global.additional_cp": "C:\\hdp\\hadoop-2.4.0.2.1.3.0-1990\\share\\hadoop\\common\\lib\\*",
-    "site.global.xmx_val": "256m",
-    "site.global.xms_val": "128m",
-    "site.global.memory_val": "200M",
-    "site.global.listen_port": "${MEMCACHED.ALLOCATED_PORT}{DO_NOT_PROPAGATE}"
-
-  },
-  "components": {
-    "slider-appmaster": {
-      "jvm.heapsize": "256M"
-    },
-    "MEMCACHED": {
-    }
-  }
-}
diff --git a/app-packages/memcached-win/metainfo.xml b/app-packages/memcached-win/metainfo.xml
index d056c0a..093001b 100644
--- a/app-packages/memcached-win/metainfo.xml
+++ b/app-packages/memcached-win/metainfo.xml
@@ -23,17 +23,23 @@
     <comment>Memcache is a network accessible key/value storage system, often used as a distributed cache.</comment>
     <version>1.0.0</version>
     <exportedConfigs>None</exportedConfigs>
+    <exportGroups>
+      <exportGroup>
+        <name>Servers</name>
+        <exports>
+          <export>
+            <name>host_port</name>
+            <value>${MEMCACHED_HOST}:${site.global.listen_port}</value>
+          </export>
+        </exports>
+      </exportGroup>
+    </exportGroups>
 
     <components>
       <component>
         <name>MEMCACHED</name>
         <category>MASTER</category>
-        <exports>
-          <export>
-            <name>host_port</name>
-            <value>${THIS_HOST}:${site.global.listen_port}</value>
-          </export>
-        </exports>
+        <compExports>Servers-host_port</compExports>
         <commandScript>
           <script>scripts/memcached.py</script>
           <scriptType>PYTHON</scriptType>
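The metainfo change above moves the export from the MEMCACHED component to an application-level exportGroup; the component then opts in via compExports, whose value follows a <group name>-<export name> convention (Servers-host_port refers to the host_port export of the Servers group). ${MEMCACHED_HOST} replaces ${THIS_HOST} so the exported address names the component's host.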
diff --git a/app-packages/memcached-win/package/scripts/memcached.py b/app-packages/memcached-win/package/scripts/memcached.py
index bc9905d..c272e47 100644
--- a/app-packages/memcached-win/package/scripts/memcached.py
+++ b/app-packages/memcached-win/package/scripts/memcached.py
@@ -37,10 +37,10 @@
     process_cmd = format("{java64_home}\\bin\\java -Xmx{xmx_val} -Xms{xms_val} -classpath {app_root}\\*;{additional_cp} com.thimbleware.jmemcached.Main --memory={memory_val} --port={port}")
 
     Execute(process_cmd,
-        user=params.app_user,
         logoutput=False,
         wait_for_finish=False,
-        pid_file=params.pid_file
+        pid_file=params.pid_file,
+        poll_after=5
     )
 
   def stop(self, env):
@@ -50,8 +50,7 @@
   def status(self, env):
     import params
     env.set_params(params)
-    #Check process status need to be changed for Windows
-    #check_process_status(params.pid_file)
+    check_process_status(params.pid_file)
 
 if __name__ == "__main__":
   Memcached().execute()
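With wait_for_finish=False, Execute launches the daemon in the background and records its pid in pid_file; poll_after (presumably a settle delay, in seconds, before the first liveness check) gives the JVM time to start, which is what lets status() now rely on check_process_status against the same pid file on Windows.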
diff --git a/app-packages/memcached-win/package/scripts/params.py b/app-packages/memcached-win/package/scripts/params.py
index fab3714..056a3b9 100644
--- a/app-packages/memcached-win/package/scripts/params.py
+++ b/app-packages/memcached-win/package/scripts/params.py
@@ -25,8 +25,8 @@
 
 app_root = config['configurations']['global']['app_root']
 java64_home = config['hostLevelParams']['java_home']
-app_user = config['configurations']['global']['app_user']
 pid_file = config['configurations']['global']['pid_file']
+
 additional_cp = config['configurations']['global']['additional_cp']
 xmx_val = config['configurations']['global']['xmx_val']
 xms_val = config['configurations']['global']['xms_val']
diff --git a/app-packages/memcached-win/resources.json b/app-packages/memcached-win/resources-default.json
similarity index 100%
rename from app-packages/memcached-win/resources.json
rename to app-packages/memcached-win/resources-default.json
diff --git a/app-packages/memcached/README.txt b/app-packages/memcached/README.txt
index eed2954..fc0e4f3 100644
--- a/app-packages/memcached/README.txt
+++ b/app-packages/memcached/README.txt
@@ -19,7 +19,16 @@
 
 To create the app package you will need the Memcached tarball copied to a specific location.
 
-Replace the placeholder tarball for JMemcached.
+Replace the placeholder tarball for JMemcached. The tarball must have all the jar files in its
+root directory.
+Example:
+  tar -tvf jmemcached-1.0.0.tar
+  -rw-r--r--  ./jmemcached-cli-1.0.0.jar
+  -rwxr-xr-x  ./jmemcached-core-1.0.0.jar
+
+If not, modify appConfig-default.json to point at the correct application install root:
+  "site.global.app_root": "${AGENT_WORK_ROOT}/app/install/my_sub_root_for_jars",
+
   cp ~/Downloads/jmemcached-1.0.0.tar package/files/
   rm package/files/jmemcached-1.0.0.tar.REPLACE
 
@@ -29,7 +38,6 @@
 Verify the content using  
   unzip -l "$@" jmemcached-1.0.0.zip
 
-While appConfig.json and resources.json are not required for the package they work
-well as the default configuration for Slider apps. So its advisable that when you
-create an application package for Slider, include sample/default resources.json and
-appConfig.json for a minimal Yarn cluster.
+appConfig-default.json and resources-default.json are not required to be packaged.
+These files are included as reference configuration for Slider apps and are suitable
+for a one-node cluster.
diff --git a/app-packages/memcached/appConfig-default.json b/app-packages/memcached/appConfig-default.json
new file mode 100644
index 0000000..16dd931
--- /dev/null
+++ b/app-packages/memcached/appConfig-default.json
@@ -0,0 +1,20 @@
+{
+  "schema": "http://example.org/specification/v2.0.0",
+  "metadata": {
+  },
+  "global": {
+    "application.def": ".slider/package/MEMCACHED/jmemcached-1.0.0.zip",
+    "java_home": "/usr/jdk64/jdk1.7.0_67",
+
+    "site.global.additional_cp": "/usr/lib/hadoop/lib/*",
+    "site.global.xmx_val": "256m",
+    "site.global.xms_val": "128m",
+    "site.global.memory_val": "200M",
+    "site.global.listen_port": "${MEMCACHED.ALLOCATED_PORT}{PER_CONTAINER}"
+  },
+  "components": {
+    "slider-appmaster": {
+      "jvm.heapsize": "256M"
+    }
+  }
+}
diff --git a/app-packages/memcached/appConfig.json b/app-packages/memcached/appConfig.json
deleted file mode 100644
index 5f32030..0000000
--- a/app-packages/memcached/appConfig.json
+++ /dev/null
@@ -1,26 +0,0 @@
-{
-  "schema": "http://example.org/specification/v2.0.0",
-  "metadata": {
-  },
-  "global": {
-    "application.def": "package/jmemcached-1.0.0.zip",
-    "java_home": "/usr/jdk64/jdk1.7.0_45",
-
-    "site.global.app_user": "yarn",
-    "site.global.app_root": "${AGENT_WORK_ROOT}/app/install/jmemcached-1.0.0",
-    "site.global.pid_file": "${AGENT_WORK_ROOT}/app/run/component.pid",
-
-    "site.global.additional_cp": "/usr/lib/hadoop/lib/*",
-    "site.global.xmx_val": "256m",
-    "site.global.xms_val": "128m",
-    "site.global.memory_val": "200M",
-    "site.global.listen_port": "${MEMCACHED.ALLOCATED_PORT}{DO_NOT_PROPAGATE}"
-  },
-  "components": {
-    "slider-appmaster": {
-      "jvm.heapsize": "256M"
-    },
-    "MEMCACHED": {
-    }
-  }
-}
diff --git a/app-packages/memcached/metainfo.xml b/app-packages/memcached/metainfo.xml
index 525816e..0984dc9 100644
--- a/app-packages/memcached/metainfo.xml
+++ b/app-packages/memcached/metainfo.xml
@@ -23,17 +23,23 @@
     <comment>Memcache is a network accessible key/value storage system, often used as a distributed cache.</comment>
     <version>1.0.0</version>
     <exportedConfigs>None</exportedConfigs>
+    <exportGroups>
+      <exportGroup>
+        <name>Servers</name>
+        <exports>
+          <export>
+            <name>host_port</name>
+            <value>${MEMCACHED_HOST}:${site.global.listen_port}</value>
+          </export>
+        </exports>
+      </exportGroup>
+    </exportGroups>
 
     <components>
       <component>
         <name>MEMCACHED</name>
         <category>MASTER</category>
-        <exports>
-          <export>
-            <name>host_port</name>
-            <value>${THIS_HOST}:${site.global.listen_port}</value>
-          </export>
-        </exports>
+        <compExports>Servers-host_port</compExports>
         <commandScript>
           <script>scripts/memcached.py</script>
           <scriptType>PYTHON</scriptType>
diff --git a/app-packages/memcached/package/scripts/memcached.py b/app-packages/memcached/package/scripts/memcached.py
index 6e14e86..986b61e 100644
--- a/app-packages/memcached/package/scripts/memcached.py
+++ b/app-packages/memcached/package/scripts/memcached.py
@@ -37,10 +37,10 @@
     process_cmd = format("{java64_home}/bin/java -Xmx{xmx_val} -Xms{xms_val} -classpath {app_root}/*:{additional_cp} com.thimbleware.jmemcached.Main --memory={memory_val} --port={port}")
 
     Execute(process_cmd,
-        user=params.app_user,
         logoutput=False,
         wait_for_finish=False,
-        pid_file=params.pid_file
+        pid_file=params.pid_file,
+        poll_after=5
     )
 
   def stop(self, env):
diff --git a/app-packages/memcached/package/scripts/params.py b/app-packages/memcached/package/scripts/params.py
index 25b4055..056a3b9 100644
--- a/app-packages/memcached/package/scripts/params.py
+++ b/app-packages/memcached/package/scripts/params.py
@@ -25,7 +25,6 @@
 
 app_root = config['configurations']['global']['app_root']
 java64_home = config['hostLevelParams']['java_home']
-app_user = config['configurations']['global']['app_user']
 pid_file = config['configurations']['global']['pid_file']
 
 additional_cp = config['configurations']['global']['additional_cp']
diff --git a/app-packages/memcached/resources.json b/app-packages/memcached/resources-default.json
similarity index 100%
rename from app-packages/memcached/resources.json
rename to app-packages/memcached/resources-default.json
diff --git a/app-packages/storm-win/README.txt b/app-packages/storm-win/README.txt
new file mode 100644
index 0000000..8631714
--- /dev/null
+++ b/app-packages/storm-win/README.txt
@@ -0,0 +1,36 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+How to create a Slider app package for Storm?
+
+To create the app package you will need the Storm tarball and must invoke the mvn
+command with appropriate parameters.
+
+Command:
+mvn clean package -Pstorm-app-package-win -Dpkg.version=<version>
+   -Dpkg.name=<file name of app tarball> -Dpkg.src=<folder location where the pkg is available>
+
+Example:
+mvn clean package -Pstorm-app-package-win -Dpkg.version=0.9.3
+   -Dpkg.name=storm-0.9.3.zip -Dpkg.src=/Users/user1/Downloads
+
+App package can be found in
+  app-packages/storm-win/target/slider-storm-app-win-package-${pkg.version}.zip
+
+appConfig-default.json and resources-default.json are not required to be packaged.
+These files are included as reference configuration for Slider apps and are suitable
+for a one-node cluster.
diff --git a/app-packages/storm-win/appConfig-default.json b/app-packages/storm-win/appConfig-default.json
new file mode 100644
index 0000000..a77f00d
--- /dev/null
+++ b/app-packages/storm-win/appConfig-default.json
@@ -0,0 +1,39 @@
+{
+  "schema": "http://example.org/specification/v2.0.0",
+  "metadata": {
+  },
+  "global": {
+    "application.def": ".slider/package/STORM/slider-storm-app-win-package-${pkg.version}.zip",
+    "java_home": "C:\\java",
+    "create.default.zookeeper.node": "true",
+
+    "site.global.app_user": "hadoop",
+    "site.global.app_root": "${AGENT_WORK_ROOT}/app/install/storm-${pkg.version}",
+    "site.global.user_group": "hadoop",
+
+    "site.storm-site.storm.log.dir" : "${AGENT_LOG_ROOT}",
+    "site.storm-site.storm.zookeeper.servers": "['${ZK_HOST}']",
+    "site.storm-site.nimbus.thrift.port": "${NIMBUS.ALLOCATED_PORT}",
+    "site.storm-site.storm.local.dir": "${AGENT_WORK_ROOT}/app/tmp/storm",
+    "site.storm-site.transactional.zookeeper.root": "/transactional",
+    "site.storm-site.storm.zookeeper.port": "2181",
+    "site.storm-site.nimbus.childopts": "-Xmx1024m",
+    "site.storm-site.worker.childopts": "-Xmx768m",
+    "site.storm-site.ui.childopts": "-Xmx768m",
+    "site.storm-site.dev.zookeeper.path": "${AGENT_WORK_ROOT}/app/tmp/dev-storm-zookeeper",
+    "site.storm-site.drpc.invocations.port": "0",
+    "site.storm-site.storm.zookeeper.root": "${DEFAULT_ZK_PATH}",
+    "site.storm-site.transactional.zookeeper.port": "null",
+    "site.storm-site.nimbus.host": "${NIMBUS_HOST}",
+    "site.storm-site.ui.port": "${STORM_UI_SERVER.ALLOCATED_PORT}",
+    "site.storm-site.supervisor.slots.ports": "[${SUPERVISOR.ALLOCATED_PORT}{PER_CONTAINER},${SUPERVISOR.ALLOCATED_PORT}{PER_CONTAINER}]",
+    "site.storm-site.supervisor.childopts": "-Xmx256m",
+    "site.storm-site.drpc.port": "0",
+    "site.storm-site.logviewer.port": "${SUPERVISOR.ALLOCATED_PORT}{PER_CONTAINER}"
+  },
+  "components": {
+    "slider-appmaster": {
+      "jvm.heapsize": "256M"
+    }
+  }
+}
diff --git a/app-packages/storm-win/configuration/storm-env.xml b/app-packages/storm-win/configuration/storm-env.xml
new file mode 100644
index 0000000..091c08d
--- /dev/null
+++ b/app-packages/storm-win/configuration/storm-env.xml
@@ -0,0 +1,65 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+
+<configuration>
+
+  <property>
+    <name>kerberos_domain</name>
+    <value></value>
+    <description>The kerberos domain to be used for this Storm cluster</description>
+  </property>
+  <property>
+    <name>storm_client_principal_name</name>
+    <value></value>
+    <description>The principal name for the Storm client to be used to communicate with Nimbus and Zookeeper</description>
+  </property>
+  <property>
+    <name>storm_server_principal_name</name>
+    <value></value>
+    <description>The principal name for the Storm server to be used by Nimbus</description>
+  </property>
+  <property>
+    <name>storm_client_keytab</name>
+    <value></value>
+    <description>The keytab file path for Storm client</description>
+  </property>
+  <property>
+    <name>storm_server_keytab</name>
+    <value></value>
+    <description>The keytab file path for Storm server</description>
+  </property>
+  <!-- storm-env.sh -->
+  <property>
+    <name>content</name>
+    <description>This is the jinja template for storm-env.sh file</description>
+    <value>
+#!/bin/bash
+
+# Set Storm specific environment variables here.
+
+# The java implementation to use.
+export JAVA_HOME={{java_home}}
+
+# export STORM_CONF_DIR=""
+    </value>
+  </property>
+</configuration>
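The content property above is a Jinja template that the agent renders when writing storm-env.sh, substituting configuration values such as {{java_home}}. A standalone illustration of the substitution, using the jinja2 library directly and a made-up value, purely for demonstration:

    from jinja2 import Template

    template = Template("export JAVA_HOME={{java_home}}")
    print(template.render(java_home="/usr/jdk64/jdk1.7.0_67"))
    # -> export JAVA_HOME=/usr/jdk64/jdk1.7.0_67
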
diff --git a/app-packages/storm-win/configuration/storm-site.xml b/app-packages/storm-win/configuration/storm-site.xml
new file mode 100644
index 0000000..c09e29b
--- /dev/null
+++ b/app-packages/storm-win/configuration/storm-site.xml
@@ -0,0 +1,580 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+
+<configuration>
+  <property>
+    <name>java.library.path</name>
+    <value>/usr/local/lib:/opt/local/lib:/usr/lib</value>
+    <description>This value is passed to spawned JVMs (e.g., Nimbus, Supervisor, and Workers)
+       for the java.library.path value. java.library.path tells the JVM where
+       to look for native libraries. It is necessary to set this config correctly since
+       Storm uses the ZeroMQ and JZMQ native libs. </description>
+  </property>
+  <property>
+    <name>storm.local.dir</name>
+    <value>/hadoop/storm</value>
+    <description>A directory on the local filesystem used by Storm for any local
+       filesystem usage it needs. The directory must exist and the Storm daemons must
+       have permission to read/write from this location.</description>
+  </property>
+  <property>
+    <name>storm.zookeeper.servers</name>
+    <value>['localhost']</value>
+    <description>A list of hosts of ZooKeeper servers used to manage the cluster.</description>
+  </property>
+  <property>
+    <name>storm.zookeeper.port</name>
+    <value>2181</value>
+    <description>The port Storm will use to connect to each of the ZooKeeper servers.</description>
+  </property>
+  <property>
+    <name>storm.zookeeper.root</name>
+    <value>/storm</value>
+    <description>The root location at which Storm stores data in ZooKeeper.</description>
+  </property>
+  <property>
+    <name>storm.zookeeper.session.timeout</name>
+    <value>20000</value>
+    <description>The session timeout for clients to ZooKeeper.</description>
+  </property>
+  <property>
+    <name>storm.zookeeper.connection.timeout</name>
+    <value>15000</value>
+    <description>The connection timeout for clients to ZooKeeper.</description>
+  </property>
+  <property>
+    <name>storm.zookeeper.retry.times</name>
+    <value>5</value>
+    <description>The number of times to retry a Zookeeper operation.</description>
+  </property>
+  <property>
+    <name>storm.zookeeper.retry.interval</name>
+    <value>1000</value>
+    <description>The interval between retries of a Zookeeper operation.</description>
+  </property>
+  <property>
+    <name>storm.zookeeper.retry.intervalceiling.millis</name>
+    <value>30000</value>
+    <description>The ceiling of the interval between retries of a Zookeeper operation.</description>
+  </property>
+  <property>
+    <name>storm.cluster.mode</name>
+    <value>distributed</value>
+    <description>The mode this Storm cluster is running in. Either "distributed" or "local".</description>
+  </property>
+  <property>
+    <name>storm.local.mode.zmq</name>
+    <value>false</value>
+    <description>Whether or not to use ZeroMQ for messaging in local mode. If this is set
+       to false, then Storm will use a pure-Java messaging system. The purpose
+       of this flag is to make it easy to run Storm in local mode by eliminating
+       the need for native dependencies, which can be difficult to install.
+    </description>
+  </property>
+  <property>
+    <name>storm.thrift.transport</name>
+    <value>backtype.storm.security.auth.SimpleTransportPlugin</value>
+    <description>The transport plug-in for Thrift client/server communication.</description>
+  </property>
+  <property>
+    <name>storm.messaging.transport</name>
+    <value>backtype.storm.messaging.netty.Context</value>
+    <description>The transporter for communication among Storm tasks.</description>
+  </property>
+  <property>
+    <name>nimbus.host</name>
+    <value>localhost</value>
+    <description>The host that the master server is running on.</description>
+  </property>
+  <property>
+    <name>nimbus.thrift.port</name>
+    <value>6627</value>
+    <description> Which port the Thrift interface of Nimbus should run on. Clients should
+       connect to this port to upload jars and submit topologies.</description>
+  </property>
+  <property>
+    <name>nimbus.thrift.max_buffer_size</name>
+    <value>1048576</value>
+    <description>The maximum buffer size thrift should use when reading messages.</description>
+  </property>
+  <property>
+    <name>nimbus.childopts</name>
+    <value>-Xmx1024m</value>
+    <description>This parameter is used by the storm-deploy project to configure the jvm options for the nimbus daemon.</description>
+  </property>
+  <property>
+    <name>nimbus.task.timeout.secs</name>
+    <value>30</value>
+    <description>How long a task can go without heartbeating before nimbus considers the task dead and reassigns it to another location.</description>
+  </property>
+  <property>
+    <name>nimbus.supervisor.timeout.secs</name>
+    <value>60</value>
+    <description>How long a supervisor can go without heartbeating before nimbus considers it dead and stops assigning new work to it.</description>
+  </property>
+  <property>
+    <name>nimbus.monitor.freq.secs</name>
+    <value>10</value>
+    <description>
+      How often nimbus should wake up to check heartbeats and do reassignments. Note
+       that if a machine ever goes down Nimbus will immediately wake up and take action.
+       This parameter is for checking for failures when there's no explicit event like that occurring.
+    </description>
+  </property>
+  <property>
+    <name>nimbus.cleanup.inbox.freq.secs</name>
+    <value>600</value>
+    <description>How often nimbus should wake the cleanup thread to clean the inbox.</description>
+  </property>
+  <property>
+    <name>nimbus.inbox.jar.expiration.secs</name>
+    <value>3600</value>
+    <description>
+      The length of time a jar file lives in the inbox before being deleted by the cleanup thread.
+
+       Keep this value greater than or equal to NIMBUS_CLEANUP_INBOX_FREQ_SECS.
+       Note that the time it takes to delete an inbox jar file is going to be somewhat more than
+       NIMBUS_INBOX_JAR_EXPIRATION_SECS (depending on what NIMBUS_CLEANUP_FREQ_SECS is set to).
+      </description>
+  </property>
+  <property>
+    <name>nimbus.task.launch.secs</name>
+    <value>120</value>
+    <description>A special timeout used when a task is initially launched. During launch, this is the timeout
+       used until the first heartbeat, overriding nimbus.task.timeout.secs.</description>
+  </property>
+  <property>
+    <name>nimbus.reassign</name>
+    <value>true</value>
+    <description>Whether or not nimbus should reassign tasks if it detects that a task goes down.
+       Defaults to true, and it's not recommended to change this value.</description>
+  </property>
+  <property>
+    <name>nimbus.file.copy.expiration.secs</name>
+    <value>600</value>
+    <description>During upload/download with the master, how long an upload or download connection is idle
+       before nimbus considers it dead and drops the connection.</description>
+  </property>
+  <property>
+    <name>nimbus.topology.validator</name>
+    <value>backtype.storm.nimbus.DefaultTopologyValidator</value>
+    <description>A custom class that implements ITopologyValidator that is run whenever a
+       topology is submitted. Can be used to provide business-specific logic for
+       whether topologies are allowed to run or not.</description>
+  </property>
+  <property>
+    <name>ui.port</name>
+    <value>8744</value>
+    <description>Storm UI binds to this port.</description>
+  </property>
+  <property>
+    <name>ui.childopts</name>
+    <value>-Xmx768m</value>
+    <description>Childopts for Storm UI Java process.</description>
+  </property>
+  <property>
+    <name>logviewer.port</name>
+    <value>8000</value>
+    <description>HTTP UI port for log viewer.</description>
+  </property>
+  <property>
+    <name>logviewer.childopts</name>
+    <value>-Xmx128m</value>
+    <description>Childopts for log viewer java process.</description>
+  </property>
+  <property>
+    <name>logviewer.appender.name</name>
+    <value>A1</value>
+    <description>Appender name used by log viewer to determine log directory.</description>
+  </property>
+  <property>
+    <name>drpc.port</name>
+    <value>3772</value>
+    <description>This port is used by Storm DRPC for receiving DRPC requests from clients.</description>
+  </property>
+  <property>
+    <name>drpc.worker.threads</name>
+    <value>64</value>
+    <description>DRPC thrift server worker threads.</description>
+  </property>
+  <property>
+    <name>drpc.queue.size</name>
+    <value>128</value>
+    <description>DRPC thrift server queue size.</description>
+  </property>
+  <property>
+    <name>drpc.invocations.port</name>
+    <value>3773</value>
+    <description>This port on Storm DRPC is used by DRPC topologies to receive function invocations and send results back.</description>
+  </property>
+  <property>
+    <name>drpc.request.timeout.secs</name>
+    <value>600</value>
+    <description>The timeout on DRPC requests within the DRPC server. Defaults to 10 minutes. Note that requests can also
+       timeout based on the socket timeout on the DRPC client, and separately based on the topology message
+       timeout for the topology implementing the DRPC function.</description>
+  </property>
+  <property>
+    <name>drpc.childopts</name>
+    <value>-Xmx768m</value>
+    <description>Childopts for Storm DRPC Java process.</description>
+  </property>
+  <property>
+    <name>transactional.zookeeper.root</name>
+    <value>/transactional</value>
+    <description>The root directory in ZooKeeper for metadata about TransactionalSpouts.</description>
+  </property>
+  <property>
+    <name>transactional.zookeeper.servers</name>
+    <value>null</value>
+    <description>The list of zookeeper servers in which to keep the transactional state. If null (which is default),
+       will use storm.zookeeper.servers</description>
+  </property>
+  <property>
+    <name>transactional.zookeeper.port</name>
+    <value>null</value>
+    <description>The port to use to connect to the transactional zookeeper servers. If null (which is default),
+       will use storm.zookeeper.port</description>
+  </property>
+  <property>
+    <name>supervisor.slots.ports</name>
+    <value>[6700, 6701]</value>
+    <description>A list of ports that can run workers on this supervisor. Each worker uses one port, and
+       the supervisor will only run one worker per port. Use this configuration to tune
+       how many workers run on each machine.</description>
+  </property>
+  <property>
+    <name>supervisor.childopts</name>
+    <value>-Xmx256m</value>
+    <description>This parameter is used by the storm-deploy project to configure the jvm options for the supervisor daemon.</description>
+  </property>
+  <property>
+    <name>supervisor.worker.start.timeout.secs</name>
+    <value>120</value>
+    <description>How long a worker can go without heartbeating during the initial launch before
+       the supervisor tries to restart the worker process. This value overrides
+       supervisor.worker.timeout.secs during launch because there is additional
+       overhead to starting and configuring the JVM on launch.</description>
+  </property>
+  <property>
+    <name>supervisor.worker.timeout.secs</name>
+    <value>30</value>
+    <description>How long a worker can go without heartbeating before the supervisor tries to restart the worker process.</description>
+  </property>
+  <property>
+    <name>supervisor.monitor.frequency.secs</name>
+    <value>3</value>
+    <description>How often the supervisor checks the worker heartbeats to see if any of them need to be restarted.</description>
+  </property>
+  <property>
+    <name>supervisor.heartbeat.frequency.secs</name>
+    <value>5</value>
+    <description>How often the supervisor sends a heartbeat to the master.</description>
+  </property>
+  <property>
+    <name>worker.childopts</name>
+    <value>-Xmx768m</value>
+    <description>The jvm opts provided to workers launched by this supervisor. All \"%ID%\" substrings are replaced with an identifier for this worker.</description>
+  </property>
+  <property>
+    <name>worker.heartbeat.frequency.secs</name>
+    <value>1</value>
+    <description>How often this worker should heartbeat to the supervisor.</description>
+  </property>
+  <property>
+    <name>task.heartbeat.frequency.secs</name>
+    <value>3</value>
+    <description>How often a task should heartbeat its status to the master.</description>
+  </property>
+  <property>
+    <name>task.refresh.poll.secs</name>
+    <value>10</value>
+    <description>How often a task should sync its connections with other tasks (if a task is
+       reassigned, the other tasks sending messages to it need to refresh their connections).
+       In general though, when a reassignment happens other tasks will be notified
+       almost immediately. This configuration is here just in case that notification doesn't
+       come through.</description>
+  </property>
+  <property>
+    <name>zmq.threads</name>
+    <value>1</value>
+    <description>The number of threads that should be used by the zeromq context in each worker process.</description>
+  </property>
+  <property>
+    <name>zmq.linger.millis</name>
+    <value>5000</value>
+    <description>How long a connection should retry sending messages to a target host when
+       the connection is closed. This is an advanced configuration and can almost
+       certainly be ignored.</description>
+  </property>
+  <property>
+    <name>zmq.hwm</name>
+    <value>0</value>
+    <description>The high water mark for the ZeroMQ push sockets used for networking. Use this config to prevent buffer explosion
+       on the networking layer.</description>
+  </property>
+  <property>
+    <name>storm.messaging.netty.server_worker_threads</name>
+    <value>1</value>
+    <description>Netty based messaging: The # of worker threads for the server.</description>
+  </property>
+  <property>
+    <name>storm.messaging.netty.client_worker_threads</name>
+    <value>1</value>
+    <description>Netty based messaging: The # of worker threads for the client.</description>
+  </property>
+  <property>
+    <name>storm.messaging.netty.buffer_size</name>
+    <value>5242880</value>
+    <description>Netty based messaging: The buffer size for send/recv buffer.</description>
+  </property>
+  <property>
+    <name>storm.messaging.netty.max_retries</name>
+    <value>30</value>
+    <description>Netty based messaging: The max # of retries that a peer will perform when a remote is not accessible.</description>
+  </property>
+  <property>
+    <name>storm.messaging.netty.max_wait_ms</name>
+    <value>1000</value>
+    <description>Netty based messaging: The max # of milliseconds that a peer will wait.</description>
+  </property>
+  <property>
+    <name>storm.messaging.netty.min_wait_ms</name>
+    <value>100</value>
+    <description>Netty based messaging: The min # of milliseconds that a peer will wait.</description>
+  </property>
+  <property>
+    <name>topology.enable.message.timeouts</name>
+    <value>true</value>
+    <description>Whether or not Storm should time out messages. Defaults to true. Disabling this is meant
+       for unit tests, to prevent tuples from being accidentally timed out during the test.</description>
+  </property>
+  <property>
+    <name>topology.debug</name>
+    <value>false</value>
+    <description>When set to true, Storm will log every message that's emitted.</description>
+  </property>
+  <property>
+    <name>topology.optimize</name>
+    <value>true</value>
+    <description>Whether or not the master should optimize topologies by running multiple tasks in a single thread where appropriate.</description>
+  </property>
+  <property>
+    <name>topology.workers</name>
+    <value>1</value>
+    <description>How many processes should be spawned around the cluster to execute this
+       topology. Each process will execute some number of tasks as threads within
+       them. This parameter should be used in conjunction with the parallelism hints
+       on each component in the topology to tune the performance of a topology.</description>
+  </property>
+  <property>
+    <name>topology.acker.executors</name>
+    <value>null</value>
+    <description>How many executors to spawn for ackers.
+
+      If this is set to 0, then Storm will immediately ack tuples as soon
+       as they come off the spout, effectively disabling reliability.
+    </description>
+  </property>
+  <property>
+    <name>topology.message.timeout.secs</name>
+    <value>30</value>
+    <description>The maximum amount of time given to the topology to fully process a message
+       emitted by a spout. If the message is not acked within this time frame, Storm
+       will fail the message on the spout. Some spouts implementations will then replay
+       the message at a later time.</description>
+  </property>
+  <property>
+    <name>topology.skip.missing.kryo.registrations</name>
+    <value>false</value>
+    <description>Whether or not Storm should skip loading kryo registrations for which it
+       does not know the class or does not have the serializer implementation. Otherwise, the task will
+       fail to load and will throw an error at runtime. The use case for this is when you want to
+       declare your serializations in the storm.yaml files on the cluster rather than every single
+       time you submit a topology. Different applications may use different serializations, so
+       a single application may not have the code for the serializers used by other apps.
+       Setting this config to true makes Storm ignore the missing serializations
+       rather than throw an error.</description>
+  </property>
+  <property>
+    <name>topology.max.task.parallelism</name>
+    <value>null</value>
+    <description>The maximum parallelism allowed for a component in this topology. This configuration is
+       typically used in testing to limit the number of threads spawned in local mode.</description>
+  </property>
+  <property>
+    <name>topology.max.spout.pending</name>
+    <value>null</value>
+    <description>The maximum number of tuples that can be pending on a spout task at any given time.
+       This config applies to individual tasks, not to spouts or topologies as a whole.
+
+       A pending tuple is one that has been emitted from a spout but has not been acked or failed yet.
+       Note that this config parameter has no effect for unreliable spouts that don't tag
+       their tuples with a message id.</description>
+  </property>
+  <property>
+    <name>topology.state.synchronization.timeout.secs</name>
+    <value>60</value>
+    <description>The maximum amount of time a component gives a source of state to synchronize before it requests
+       synchronization again.</description>
+  </property>
+  <property>
+    <name>topology.stats.sample.rate</name>
+    <value>0.05</value>
+    <description>The percentage of tuples to sample to produce stats for a task.</description>
+  </property>
+  <property>
+    <name>topology.builtin.metrics.bucket.size.secs</name>
+    <value>60</value>
+    <description>The time period that built-in metrics data is bucketed into.</description>
+  </property>
+  <property>
+    <name>topology.fall.back.on.java.serialization</name>
+    <value>true</value>
+    <description>Whether or not to use Java serialization in a topology.</description>
+  </property>
+  <property>
+    <name>topology.worker.childopts</name>
+    <value>null</value>
+    <description>Topology-specific options for the worker child process. This is used in addition to WORKER_CHILDOPTS.</description>
+  </property>
+  <property>
+    <name>topology.executor.receive.buffer.size</name>
+    <value>1024</value>
+    <description>The size of the Disruptor receive queue for each executor. Must be a power of 2.</description>
+  </property>
+  <property>
+    <name>topology.executor.send.buffer.size</name>
+    <value>1024</value>
+    <description>The size of the Disruptor send queue for each executor. Must be a power of 2.</description>
+  </property>
+  <property>
+    <name>topology.receiver.buffer.size</name>
+    <value>8</value>
+    <description>The maximum number of messages to batch from the thread receiving off the network to the
+       executor queues. Must be a power of 2.</description>
+  </property>
+  <property>
+    <name>topology.transfer.buffer.size</name>
+    <value>1024</value>
+    <description>The size of the Disruptor transfer queue for each worker.</description>
+  </property>
+  <property>
+    <name>topology.tick.tuple.freq.secs</name>
+    <value>null</value>
+    <description>How often a tick tuple from the "__system" component and "__tick" stream should be sent
+       to tasks. Meant to be used as a component-specific configuration.</description>
+  </property>
+  <property>
+    <name>topology.worker.shared.thread.pool.size</name>
+    <value>4</value>
+    <description>The size of the shared thread pool for worker tasks to make use of. The thread pool can be accessed
+       via the TopologyContext.</description>
+  </property>
+  <property>
+    <name>topology.disruptor.wait.strategy</name>
+    <value>com.lmax.disruptor.BlockingWaitStrategy</value>
+    <description>Configure the wait strategy used for internal queuing. Can be used to tradeoff latency
+       vs. throughput.</description>
+  </property>
+  <property>
+    <name>topology.spout.wait.strategy</name>
+    <value>backtype.storm.spout.SleepSpoutWaitStrategy</value>
+    <description>A class that implements a strategy for what to do when a spout needs to wait. Waiting is
+       triggered in one of two conditions:
+
+       1. nextTuple emits no tuples
+       2. The spout has hit maxSpoutPending and can't emit any more tuples</description>
+  </property>
+  <property>
+    <name>topology.sleep.spout.wait.strategy.time.ms</name>
+    <value>1</value>
+    <description>The number of milliseconds the SleepSpoutWaitStrategy should sleep for.</description>
+  </property>
+  <property>
+    <name>topology.error.throttle.interval.secs</name>
+    <value>10</value>
+    <description>The interval in seconds to use for determining whether to throttle error reported to Zookeeper. For example,
+       an interval of 10 seconds with topology.max.error.report.per.interval set to 5 will only allow 5 errors to be
+       reported to Zookeeper per task for every 10 second interval of time.</description>
+  </property>
+  <property>
+    <name>topology.max.error.report.per.interval</name>
+    <value>5</value>
+    <description>The maximum number of errors each task may report to Zookeeper within one throttle interval.
+       For example, with topology.error.throttle.interval.secs set to 10 and this value set to 5, only 5 errors
+       will be reported to Zookeeper per task for every 10 second interval of time.</description>
+  </property>
+  <property>
+    <name>topology.kryo.factory</name>
+    <value>backtype.storm.serialization.DefaultKryoFactory</value>
+    <description>Class that specifies how to create a Kryo instance for serialization. Storm will then apply
+       topology.kryo.register and topology.kryo.decorators on top of this. The default implementation
+       implements topology.fall.back.on.java.serialization and turns references off.</description>
+  </property>
+  <property>
+    <name>topology.tuple.serializer</name>
+    <value>backtype.storm.serialization.types.ListDelegateSerializer</value>
+    <description>The serializer class for ListDelegate (tuple payload).
+       The default serializer is ListDelegateSerializer.</description>
+  </property>
+  <property>
+    <name>topology.trident.batch.emit.interval.millis</name>
+    <value>500</value>
+    <description>How often a batch can be emitted in a Trident topology.</description>
+  </property>
+  <property>
+    <name>dev.zookeeper.path</name>
+    <value>/tmp/dev-storm-zookeeper</value>
+    <description>The path to use as the zookeeper dir when running a zookeeper server via
+       "storm dev-zookeeper". This zookeeper instance is only intended for development;
+       it is not a production grade zookeeper setup.</description>
+  </property>
+</configuration>
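The error-throttling pair above (topology.error.throttle.interval.secs and
topology.max.error.report.per.interval) works as a per-task counter that resets
every interval. A minimal Python sketch of those semantics, with illustrative
names only (this is not Storm's implementation):

  # Sketch of per-task error-report throttling as described by the two
  # topology error properties; names and structure are illustrative.
  import time

  class ErrorReportThrottle:
      def __init__(self, interval_secs=10, max_per_interval=5):
          self.interval_secs = interval_secs
          self.max_per_interval = max_per_interval
          self.window_start = time.time()
          self.reported = 0

      def allow_report(self):
          now = time.time()
          if now - self.window_start >= self.interval_secs:
              # a new interval begins: reset the per-task counter
              self.window_start = now
              self.reported = 0
          if self.reported < self.max_per_interval:
              self.reported += 1
              return True   # report this error to Zookeeper
          return False      # throttled until the interval rolls over

With the defaults above, at most 5 errors per task reach Zookeeper in any
10-second interval.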
diff --git a/app-packages/storm-win/metainfo.xml b/app-packages/storm-win/metainfo.xml
new file mode 100644
index 0000000..1e9e7ab
--- /dev/null
+++ b/app-packages/storm-win/metainfo.xml
@@ -0,0 +1,150 @@
+<?xml version="1.0"?>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<metainfo>
+  <schemaVersion>2.0</schemaVersion>
+  <application>
+    <name>STORM</name>
+    <comment>Apache Storm stream processing framework</comment>
+    <version>${pkg.version}</version>
+    <exportedConfigs>storm-site</exportedConfigs>
+
+    <exportGroups>
+      <exportGroup>
+        <name>QuickLinks</name>
+        <exports>
+          <export>
+            <name>org.apache.slider.jmx</name>
+            <value>http://${STORM_UI_SERVER_HOST}:${site.storm-site.ui.port}/api/v1/cluster/summary</value>
+          </export>
+          <export>
+            <name>org.apache.slider.monitor</name>
+            <value>http://${STORM_UI_SERVER_HOST}:${site.storm-site.ui.port}</value>
+          </export>
+          <export>
+            <name>nimbus.host_port</name>
+            <value>http://${NIMBUS_HOST}:${site.storm-site.nimbus.thrift.port}</value>
+          </export>
+        </exports>
+      </exportGroup>
+    </exportGroups>
+
+    <commandOrders>
+      <commandOrder>
+        <command>NIMBUS-START</command>
+        <requires>SUPERVISOR-INSTALLED,STORM_UI_SERVER-INSTALLED,DRPC_SERVER-INSTALLED</requires>
+      </commandOrder>
+      <commandOrder>
+        <command>SUPERVISOR-START</command>
+        <requires>NIMBUS-STARTED</requires>
+      </commandOrder>
+      <commandOrder>
+        <command>DRPC_SERVER-START</command>
+        <requires>NIMBUS-STARTED</requires>
+      </commandOrder>
+      <commandOrder>
+        <command>STORM_UI_SERVER-START</command>
+        <requires>NIMBUS-STARTED</requires>
+      </commandOrder>
+    </commandOrders>
+
+    <components>
+
+      <component>
+        <name>NIMBUS</name>
+        <category>MASTER</category>
+        <publishConfig>true</publishConfig>
+        <autoStartOnFailure>true</autoStartOnFailure>
+        <appExports>QuickLinks-nimbus.host_port</appExports>
+        <commandScript>
+          <script>scripts/nimbus.py</script>
+          <scriptType>PYTHON</scriptType>
+          <timeout>600</timeout>
+        </commandScript>
+      </component>
+
+      <component>
+        <name>SUPERVISOR</name>
+        <category>SLAVE</category>
+        <autoStartOnFailure>true</autoStartOnFailure>
+        <componentExports>
+          <componentExport>
+            <name>log_viewer_port</name>
+            <value>${THIS_HOST}:${site.storm-site.logviewer.port}</value>
+          </componentExport>
+        </componentExports>
+        <commandScript>
+          <script>scripts/supervisor.py</script>
+          <scriptType>PYTHON</scriptType>
+          <timeout>600</timeout>
+        </commandScript>
+      </component>
+
+      <component>
+        <name>STORM_UI_SERVER</name>
+        <category>MASTER</category>
+        <publishConfig>true</publishConfig>
+        <appExports>QuickLinks-org.apache.slider.monitor,QuickLinks-org.apache.slider.jmx</appExports>
+        <autoStartOnFailure>true</autoStartOnFailure>
+        <commandScript>
+          <script>scripts/ui_server.py</script>
+          <scriptType>PYTHON</scriptType>
+          <timeout>600</timeout>
+        </commandScript>
+      </component>
+
+      <component>
+        <name>DRPC_SERVER</name>
+        <category>MASTER</category>
+        <autoStartOnFailure>true</autoStartOnFailure>
+        <commandScript>
+          <script>scripts/drpc_server.py</script>
+          <scriptType>PYTHON</scriptType>
+          <timeout>600</timeout>
+        </commandScript>
+      </component>
+    </components>
+
+    <osSpecifics>
+      <osSpecific>
+        <osType>any</osType>
+        <packages>
+          <package>
+            <type>zip</type>
+            <name>files/${pkg.name}</name>
+          </package>
+        </packages>
+      </osSpecific>
+    </osSpecifics>
+
+    <configFiles>
+      <configFile>
+        <type>yaml</type>
+        <fileName>storm.yaml</fileName>
+        <dictionaryName>storm-site</dictionaryName>
+      </configFile>
+      <configFile>
+        <type>env</type>
+        <fileName>storm-env.sh</fileName>
+        <dictionaryName>storm-env</dictionaryName>
+      </configFile>
+    </configFiles>
+
+  </application>
+</metainfo>
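The export values in this metainfo contain ${...} tokens (for example
${STORM_UI_SERVER_HOST} and ${site.storm-site.ui.port}) that Slider resolves
once the components are allocated. A rough illustration of that substitution,
with hypothetical resolved values (the real resolution is done by Slider, not
this code):

  # Illustrative ${token} substitution for the QuickLinks exports above.
  import re

  def resolve(template, values):
      # replace each ${token} with its resolved value
      return re.sub(r'\$\{([^}]+)\}', lambda m: values[m.group(1)], template)

  values = {
      "STORM_UI_SERVER_HOST": "host1.example.com",  # hypothetical host
      "site.storm-site.ui.port": "8744",            # hypothetical port
  }
  print(resolve("http://${STORM_UI_SERVER_HOST}:${site.storm-site.ui.port}",
                values))
  # -> http://host1.example.com:8744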
diff --git a/app-packages/storm-win/package/scripts/drpc_server.py b/app-packages/storm-win/package/scripts/drpc_server.py
new file mode 100644
index 0000000..779854a
--- /dev/null
+++ b/app-packages/storm-win/package/scripts/drpc_server.py
@@ -0,0 +1,55 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import sys
+from resource_management import *
+from storm import storm
+from service import service
+
+class DrpcServer(Script):
+  def install(self, env):
+    self.install_packages(env)
+
+  def configure(self, env):
+    import params
+    env.set_params(params)
+
+    storm()
+
+  def start(self, env):
+    import params
+    env.set_params(params)
+    self.configure(env)
+
+    service("drpc", action="start")
+
+  def stop(self, env):
+    import params
+    env.set_params(params)
+
+    service("drpc", action="stop")
+
+  def status(self, env):
+    import status_params
+    env.set_params(status_params)
+    #check_process_status(status_params.pid_drpc)
+
+if __name__ == "__main__":
+  DrpcServer().execute()
diff --git a/app-packages/storm-win/package/scripts/nimbus.py b/app-packages/storm-win/package/scripts/nimbus.py
new file mode 100644
index 0000000..c7c3120
--- /dev/null
+++ b/app-packages/storm-win/package/scripts/nimbus.py
@@ -0,0 +1,55 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import sys
+from resource_management import *
+from storm import storm
+from service import service
+
+class Nimbus(Script):
+  def install(self, env):
+    self.install_packages(env)
+
+  def configure(self, env):
+    import params
+    env.set_params(params)
+
+    storm()
+
+  def start(self, env):
+    import params
+    env.set_params(params)
+    self.configure(env)
+
+    service("nimbus", action="start")
+
+  def stop(self, env):
+    import params
+    env.set_params(params)
+
+    service("nimbus", action="stop")
+
+  def status(self, env):
+    import status_params
+    env.set_params(status_params)
+    check_process_status(status_params.pid_nimbus)
+
+if __name__ == "__main__":
+  Nimbus().execute()
diff --git a/app-packages/storm-win/package/scripts/params.py b/app-packages/storm-win/package/scripts/params.py
new file mode 100644
index 0000000..21e5c65
--- /dev/null
+++ b/app-packages/storm-win/package/scripts/params.py
@@ -0,0 +1,39 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+from resource_management import *
+import status_params
+
+# server configurations
+config = Script.get_config()
+
+app_root = config['configurations']['global']['app_root']
+conf_dir = format("{app_root}/conf")
+storm_user = config['configurations']['global']['app_user']
+log_dir = config['configurations']['global']['app_log_dir']
+pid_dir = status_params.pid_dir
+local_dir = config['configurations']['storm-site']['storm.local.dir']
+user_group = config['configurations']['global']['user_group']
+java64_home = config['hostLevelParams']['java_home']
+nimbus_host = config['configurations']['storm-site']['nimbus.host']
+nimbus_port = config['configurations']['storm-site']['nimbus.thrift.port']
+rest_api_conf_file = format("{conf_dir}/config.yaml")
+rest_lib_dir = format("{app_root}/external/storm-rest")
+storm_bin = format("{app_root}/bin/storm.cmd")
diff --git a/app-packages/storm-win/package/scripts/rest_api.py b/app-packages/storm-win/package/scripts/rest_api.py
new file mode 100644
index 0000000..33d8924
--- /dev/null
+++ b/app-packages/storm-win/package/scripts/rest_api.py
@@ -0,0 +1,57 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import sys
+from resource_management import *
+from storm import storm
+from service import service
+
+
+class StormRestApi(Script):
+  def install(self, env):
+    self.install_packages(env)
+    self.configure(env)
+
+  def configure(self, env):
+    import params
+    env.set_params(params)
+
+    storm()
+
+  def start(self, env):
+    import params
+    env.set_params(params)
+    self.configure(env)
+
+    service("rest_api", action="start")
+
+  def stop(self, env):
+    import params
+    env.set_params(params)
+
+    service("rest_api", action="stop")
+
+  def status(self, env):
+    import status_params
+    env.set_params(status_params)
+    check_process_status(status_params.pid_rest_api)
+
+if __name__ == "__main__":
+  StormRestApi().execute()
diff --git a/app-packages/storm-win/package/scripts/service.py b/app-packages/storm-win/package/scripts/service.py
new file mode 100644
index 0000000..aa7d339
--- /dev/null
+++ b/app-packages/storm-win/package/scripts/service.py
@@ -0,0 +1,76 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+
+from resource_management import *
+import os
+import sys
+import xml.etree.ElementTree as et
+
+"""
+Slider package uses jps as pgrep does not list the whole process start command
+"""
+def service(
+    name,
+    action='start'):
+  import params
+  import status_params
+
+  pid_file = status_params.pid_files[name]
+  backtype = format("backtype.storm.daemon.{name}")
+
+  if action == "start":
+    os.environ['STORM_LOG_DIR'] = params.log_dir
+    os.environ['STORM_HOME'] = params.app_root
+    os.environ['STORM_CONF_DIR'] = params.conf_dir
+
+    # have storm.cmd emit the service definition XML for this daemon
+    generate_xml = format("{storm_bin} --service {name} > {log_dir}/{name}.cmd")
+
+    Execute(generate_xml,
+            logoutput=True,
+            wait_for_finish=True
+    )
+
+    # pull the java arguments out of the generated service definition
+    tree = et.parse(format("{log_dir}/{name}.cmd"))
+    root = tree.getroot()
+    cmd_part = None
+    for child in root:
+      if child.tag == "arguments":
+        cmd_part = child.text
+
+    if cmd_part:
+      # launch the daemon directly with java, tracked via its pid file
+      cmd = format("{java64_home}\\bin\\java {cmd_part}")
+
+      Execute(cmd,
+              logoutput=False,
+              wait_for_finish=False,
+              pid_file=pid_file
+      )
+    else:
+      Logger.warn("A valid command file was not generated at " + format("{log_dir}/{name}.cmd"))
+
+  elif action == "stop":
+    pid = format("`cat {pid_file}` >/dev/null 2>&1")
+    Execute(format("kill {pid}")
+    )
+    Execute(format("kill -9 {pid}"),
+            ignore_failures=True
+    )
+    Execute(format("rm -f {pid_file}"))
diff --git a/app-packages/storm-win/package/scripts/status_params.py b/app-packages/storm-win/package/scripts/status_params.py
new file mode 100644
index 0000000..2bf6870
--- /dev/null
+++ b/app-packages/storm-win/package/scripts/status_params.py
@@ -0,0 +1,35 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+from resource_management import *
+
+config = Script.get_config()
+
+container_id = config['configurations']['global']['app_container_id']
+pid_dir = config['configurations']['global']['app_pid_dir']
+pid_nimbus = format("{pid_dir}/nimbus.pid")
+pid_supervisor = format("{pid_dir}/supervisor.pid")
+pid_drpc = format("{pid_dir}/drpc.pid")
+pid_ui = format("{pid_dir}/ui.pid")
+pid_logviewer = format("{pid_dir}/logviewer.pid")
+pid_files = {"logviewer":pid_logviewer,
+             "ui": pid_ui,
+             "nimbus": pid_nimbus,
+             "supervisor": pid_supervisor,
+             "drpc": pid_drpc}
\ No newline at end of file
diff --git a/app-packages/storm-win/package/scripts/storm.py b/app-packages/storm-win/package/scripts/storm.py
new file mode 100644
index 0000000..e2e6465
--- /dev/null
+++ b/app-packages/storm-win/package/scripts/storm.py
@@ -0,0 +1,45 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+from resource_management import *
+from yaml_config import yaml_config
+import sys
+
+def storm():
+  import params
+
+  Directory([params.log_dir, params.pid_dir, params.local_dir, params.conf_dir],
+            owner=params.storm_user,
+            group=params.user_group,
+            recursive=True
+  )
+
+  File(format("{conf_dir}/config.yaml"),
+            content=Template("config.yaml.j2"),
+            owner = params.storm_user,
+            group = params.user_group
+  )
+
+  yaml_config( "storm.yaml",
+               conf_dir = params.conf_dir,
+               configurations = params.config['configurations']['storm-site'],
+               owner = params.storm_user,
+               group = params.user_group
+  )
diff --git a/app-packages/storm-win/package/scripts/supervisor.py b/app-packages/storm-win/package/scripts/supervisor.py
new file mode 100644
index 0000000..47c20c9
--- /dev/null
+++ b/app-packages/storm-win/package/scripts/supervisor.py
@@ -0,0 +1,61 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import sys
+from resource_management import *
+from yaml_config import yaml_config
+from storm import storm
+from service import service
+
+
+class Supervisor(Script):
+  def install(self, env):
+    self.install_packages(env)
+
+  def configure(self, env):
+    import params
+    env.set_params(params)
+    storm()
+
+  def start(self, env):
+    import params
+    env.set_params(params)
+    self.configure(env)
+
+    service("supervisor", action="start")
+    service("logviewer", action="start")
+
+  def stop(self, env):
+    import params
+    env.set_params(params)
+
+    service("supervisor", action="stop")
+    service("logviewer", action="stop")
+
+  def status(self, env):
+    import status_params
+    env.set_params(status_params)
+
+    check_process_status(status_params.pid_supervisor)
+
+
+if __name__ == "__main__":
+  Supervisor().execute()
+
diff --git a/app-packages/storm-win/package/scripts/ui_server.py b/app-packages/storm-win/package/scripts/ui_server.py
new file mode 100644
index 0000000..0fe7cd2
--- /dev/null
+++ b/app-packages/storm-win/package/scripts/ui_server.py
@@ -0,0 +1,55 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import sys
+from resource_management import *
+from storm import storm
+from service import service
+
+class UiServer(Script):
+  def install(self, env):
+    self.install_packages(env)
+
+  def configure(self, env):
+    import params
+    env.set_params(params)
+
+    storm()
+
+  def start(self, env):
+    import params
+    env.set_params(params)
+    self.configure(env)
+
+    service("ui", action="start")
+
+  def stop(self, env):
+    import params
+    env.set_params(params)
+
+    service("ui", action="stop")
+
+  def status(self, env):
+    import status_params
+    env.set_params(status_params)
+    check_process_status(status_params.pid_ui)
+
+if __name__ == "__main__":
+  UiServer().execute()
diff --git a/app-packages/storm-win/package/scripts/yaml_config.py b/app-packages/storm-win/package/scripts/yaml_config.py
new file mode 100644
index 0000000..5f763cc
--- /dev/null
+++ b/app-packages/storm-win/package/scripts/yaml_config.py
@@ -0,0 +1,80 @@
+#!/usr/bin/env python
+"""
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+"""
+
+import re
+import socket
+from resource_management import *
+
+def escape_yaml_property(value):
+  # pre-process value for any "_HOST" tokens
+  value = value.replace('_HOST', socket.getfqdn())
+
+  unquoted = False
+  unquoted_values = ["null","Null","NULL","true","True","TRUE","false","False","FALSE","YES","Yes","yes","NO","No","no","ON","On","on","OFF","Off","off"]
+
+  if value in unquoted_values:
+    unquoted = True
+
+  # if the value is a list literal, e.g. [a,b,c]
+  if re.match('^\w*\[.+\]\w*$', value):
+    unquoted = True
+
+  # if the value is a map literal, e.g. {'a':'b'}
+  if re.match('^\w*\{.+\}\w*$', value):
+    unquoted = True
+
+  # integers and floats are emitted unquoted
+  try:
+    int(value)
+    unquoted = True
+  except ValueError:
+    pass
+
+  try:
+    float(value)
+    unquoted = True
+  except ValueError:
+    pass
+
+  # everything else is a plain string: escape single quotes and quote it
+  if not unquoted:
+    value = value.replace("'","''")
+    value = "'"+value+"'"
+
+  return value
+
+def yaml_inline_template(configurations):
+  return source.InlineTemplate('''{% for key, value in configurations_dict.items() %}{{ key }}: {{ escape_yaml_property(value) }}
+{% endfor %}''', configurations_dict=configurations, extra_imports=[escape_yaml_property])
+
+def yaml_config(
+  filename,
+  configurations = None,
+  conf_dir = None,
+  mode = None,
+  owner = None,
+  group = None
+):
+    config_content = yaml_inline_template(configurations)
+
+    File (format("{conf_dir}/{filename}"),
+      content = config_content,
+      owner = owner,
+      group = group,
+      mode = mode
+    )
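The quoting rules implemented by escape_yaml_property can be summarized with a
few representative inputs (expected outputs, assuming the logic above):

  # "2181"          -> 2181            (parses as an int, left unquoted)
  # "false"         -> false           (recognized YAML keyword, left unquoted)
  # "['zk1','zk2']" -> ['zk1','zk2']   (list literal, left unquoted)
  # "{'a':'b'}"     -> {'a':'b'}       (map literal, left unquoted)
  # "it's a path"   -> 'it''s a path'  (plain string: quotes doubled, then quoted)
  # any "_HOST" token is first replaced with the local fqdn
  value = "it's a path"                         # illustrative string branch only
  print("'" + value.replace("'", "''") + "'")   # -> 'it''s a path'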
diff --git a/app-packages/storm-win/package/templates/config.yaml.j2 b/app-packages/storm-win/package/templates/config.yaml.j2
new file mode 100644
index 0000000..c3dd542
--- /dev/null
+++ b/app-packages/storm-win/package/templates/config.yaml.j2
@@ -0,0 +1,37 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+nimbusHost: {{nimbus_host}}
+nimbusPort: {{nimbus_port}}
+
+enableGanglia: false
+
+# ganglia configuration
+ganglia:
+
+  # how often to report to ganglia metrics (in seconds)
+  reportInterval: 600
+
+  # the hostname of the gmond server where storm cluster metrics will be sent
+  host: localhost
+  port: 8649
+
+  # address mode
+  # default is MULTICAST
+  addressMode: "UNICAST"
+
+  # an <IP>:<HOSTNAME> pair to spoof
+  # this allows us to simulate storm cluster metrics coming from a specific host
+  #spoof: "192.168.1.1:storm"
\ No newline at end of file
diff --git a/app-packages/storm-win/package/templates/storm_jaas.conf.j2 b/app-packages/storm-win/package/templates/storm_jaas.conf.j2
new file mode 100644
index 0000000..a1ba6ea
--- /dev/null
+++ b/app-packages/storm-win/package/templates/storm_jaas.conf.j2
@@ -0,0 +1,44 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *       http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+StormServer {
+   com.sun.security.auth.module.Krb5LoginModule required
+   useKeyTab=true
+   keyTab="{{storm_server_keytab_path}}"
+   storeKey=true
+   useTicketCache=false
+   principal="{{storm_jaas_server_principal}}";
+};
+StormClient {
+   com.sun.security.auth.module.Krb5LoginModule required
+   useKeyTab=true
+   keyTab="{{storm_client_keytab_path}}"
+   storeKey=true
+   useTicketCache=false
+   serviceName="{{storm_jaas_stormclient_servicename}}"
+   debug=true
+   principal="{{storm_jaas_client_principal}}";
+};
+Client {
+   com.sun.security.auth.module.Krb5LoginModule required
+   useKeyTab=true
+   keyTab="{{storm_client_keytab_path}}"
+   storeKey=true
+   useTicketCache=false
+   serviceName="zookeeper"
+   principal="{{storm_jaas_client_principal}}";
+};
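Rendering this Jinja template requires the keytab paths and principals from the
secured appConfig. A minimal rendering sketch, assuming jinja2 is available and
the script runs next to the template (all values below are placeholders):

  # Render storm_jaas.conf.j2 with illustrative placeholder values.
  from jinja2 import Template

  with open("storm_jaas.conf.j2") as f:
      template = Template(f.read())

  print(template.render(
      storm_server_keytab_path="/etc/security/keytabs/nimbus.keytab",
      storm_jaas_server_principal="storm/host1.example.com@EXAMPLE.COM",
      storm_client_keytab_path="/etc/security/keytabs/storm.keytab",
      storm_jaas_client_principal="storm@EXAMPLE.COM",
      storm_jaas_stormclient_servicename="storm",
  ))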
diff --git a/app-packages/storm-win/pom.xml b/app-packages/storm-win/pom.xml
new file mode 100644
index 0000000..3b4cb9d
--- /dev/null
+++ b/app-packages/storm-win/pom.xml
@@ -0,0 +1,91 @@
+<?xml version="1.0"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+  <!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+  <parent>
+    <groupId>org.apache.slider</groupId>
+    <artifactId>slider</artifactId>
+    <version>0.60.0-incubating</version>
+    <relativePath>../../pom.xml</relativePath>
+  </parent>
+  <modelVersion>4.0.0</modelVersion>
+  <artifactId>slider-storm-app-win-package</artifactId>
+  <packaging>pom</packaging>
+  <name>Slider Storm App Package</name>
+  <description>Slider Storm App Package</description>
+  <version>${pkg.version}</version>
+  <properties>
+    <work.dir>package-tmp</work.dir>
+  </properties>
+
+  <profiles>
+    <profile>
+      <id>storm-app-package-win</id>
+      <build>
+        <plugins>
+
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-antrun-plugin</artifactId>
+            <version>1.7</version>
+            <executions>
+              <execution>
+                <id>copy</id>
+                <phase>validate</phase>
+                <configuration>
+                  <target name="copy and rename file">
+                    <copy file="${pkg.src}/${pkg.name}" tofile="${project.build.directory}/${pkg.name}" />
+                  </target>
+                </configuration>
+                <goals>
+                  <goal>run</goal>
+                </goals>
+              </execution>
+            </executions>
+          </plugin>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-assembly-plugin</artifactId>
+            <configuration>
+              <tarLongFileMode>gnu</tarLongFileMode>
+              <descriptor>src/assembly/storm.xml</descriptor>
+              <appendAssemblyId>false</appendAssemblyId>
+            </configuration>
+            <executions>
+              <execution>
+                <id>build-tarball</id>
+                <phase>package</phase>
+                <goals>
+                  <goal>single</goal>
+                </goals>
+              </execution>
+            </executions>
+          </plugin>
+
+        </plugins>
+      </build>
+    </profile>
+  </profiles>
+
+  <build>
+  </build>
+
+  <dependencies>
+  </dependencies>
+
+</project>
diff --git a/app-packages/storm/resources.json b/app-packages/storm-win/resources-default.json
similarity index 83%
rename from app-packages/storm/resources.json
rename to app-packages/storm-win/resources-default.json
index b184a40..a36f005 100644
--- a/app-packages/storm/resources.json
+++ b/app-packages/storm-win/resources-default.json
@@ -3,6 +3,8 @@
   "metadata" : {
   },
   "global" : {
+    "yarn.log.include.patterns": "",
+    "yarn.log.exclude.patterns": ""
   },
   "components": {
     "slider-appmaster": {
@@ -11,21 +13,17 @@
       "yarn.role.priority": "1",
       "yarn.component.instances": "1"
     },
-    "STORM_REST_API": {
+    "STORM_UI_SERVER": {
       "yarn.role.priority": "2",
       "yarn.component.instances": "1"
     },
-    "STORM_UI_SERVER": {
+    "DRPC_SERVER": {
       "yarn.role.priority": "3",
       "yarn.component.instances": "1"
     },
-    "DRPC_SERVER": {
-      "yarn.role.priority": "4",
-      "yarn.component.instances": "1"
-    },
     "SUPERVISOR": {
-      "yarn.role.priority": "5",
+      "yarn.role.priority": "4",
       "yarn.component.instances": "1"
     }
   }
-}
\ No newline at end of file
+}
diff --git a/app-packages/storm-win/src/assembly/storm.xml b/app-packages/storm-win/src/assembly/storm.xml
new file mode 100644
index 0000000..2ee7d31
--- /dev/null
+++ b/app-packages/storm-win/src/assembly/storm.xml
@@ -0,0 +1,68 @@
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~  or more contributor license agreements.  See the NOTICE file
+  ~  distributed with this work for additional information
+  ~  regarding copyright ownership.  The ASF licenses this file
+  ~  to you under the Apache License, Version 2.0 (the
+  ~  "License"); you may not use this file except in compliance
+  ~  with the License.  You may obtain a copy of the License at
+  ~
+  ~       http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~  Unless required by applicable law or agreed to in writing, software
+  ~  distributed under the License is distributed on an "AS IS" BASIS,
+  ~  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  ~  See the License for the specific language governing permissions and
+  ~  limitations under the License.
+  -->
+
+
+<assembly
+  xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0 http://maven.apache.org/xsd/assembly-1.1.0.xsd">
+  <id>storm_v${storm.version}</id>
+  <formats>
+    <format>zip</format>
+    <format>dir</format>
+  </formats>
+  <includeBaseDirectory>false</includeBaseDirectory>
+
+  <files>
+    <file>
+      <source>appConfig-default.json</source>
+      <outputDirectory>/</outputDirectory>
+      <filtered>true</filtered>
+      <fileMode>0755</fileMode>
+    </file>
+    <file>
+      <source>metainfo.xml</source>
+      <outputDirectory>/</outputDirectory>
+      <filtered>true</filtered>
+      <fileMode>0755</fileMode>
+    </file>
+    <file>
+      <source>${pkg.src}/${pkg.name}</source>
+      <outputDirectory>package/files</outputDirectory>
+      <filtered>false</filtered>
+      <fileMode>0755</fileMode>
+    </file>
+  </files>
+
+  <fileSets>
+    <fileSet>
+      <directory>${project.basedir}</directory>
+      <outputDirectory>/</outputDirectory>
+      <excludes>
+        <exclude>pom.xml</exclude>
+        <exclude>src/**</exclude>
+        <exclude>target/**</exclude>
+        <exclude>appConfig-default.json</exclude>
+        <exclude>metainfo.xml</exclude>
+      </excludes>
+      <fileMode>0755</fileMode>
+      <directoryMode>0755</directoryMode>
+    </fileSet>
+
+  </fileSets>
+</assembly>
diff --git a/app-packages/storm/README.txt b/app-packages/storm/README.txt
index 971cf14..12dc140 100644
--- a/app-packages/storm/README.txt
+++ b/app-packages/storm/README.txt
@@ -17,21 +17,20 @@
 
 How to create a Slider app package for Storm?
 
-To create the app package you will need the Storm tarball copied to a specific location.
-Various configurations provided in this sample are customized for apache-storm-0.9.1.2.1.1.0-237.tar.gz.
-So if you use a different version you may need to edit a few config values.
+To create the app package you will need the Storm tarball and will need to invoke
+the mvn command below with the appropriate parameters.
 
-Replace the placeholder tarball for Storm.
-  cp ~/Downloads/apache-storm-0.9.1.2.1.1.0-237.tar.gz package/files/
-  rm package/files/apache-storm-0.9.1.2.1.1.0-237.tar.gz.REPLACE
+Command:
+mvn clean package -Pstorm-app-package -Dpkg.version=<version>
+   -Dpkg.name=<file name of app tarball> -Dpkg.src=<folder location where the pkg is available>
 
-Create a zip package at the root of the package (<slider enlistment>/app-packages/storm-v0_91/) 
-  zip -r storm_v091.zip .
+Example:
+mvn clean package -Pstorm-app-package -Dpkg.version=0.9.3.2.2.0.0-578
+   -Dpkg.name=apache-storm-0.9.3.2.2.0.0-578.tar.gz -Dpkg.src=/Users/user1/Downloads
 
-Verify the content using  
-  unzip -l "$@" storm_v091.zip
+The app package can be found at
+  app-packages/storm/target/slider-storm-app-package-${pkg.version}.zip
 
-While appConfig.json and resources.json are not required for the package they work
-well as the default configuration for Slider apps. So its advisable that when you
-create an application package for Slider, include sample/default resources.json and
-appConfig.json for a minimal Yarn cluster.
+appConfig-default.json and resources-default.json are not required to be packaged.
+These files are included as reference configuration for Slider apps and are suitable
+for a one-node cluster.
diff --git a/app-packages/storm/appConfig-default.json b/app-packages/storm/appConfig-default.json
new file mode 100644
index 0000000..d7908a3
--- /dev/null
+++ b/app-packages/storm/appConfig-default.json
@@ -0,0 +1,43 @@
+{
+  "schema": "http://example.org/specification/v2.0.0",
+  "metadata": {
+  },
+  "global": {
+    "application.def": ".slider/package/STORM/slider-storm-app-package-${pkg.version}.zip",
+    "java_home": "/usr/jdk64/jdk1.7.0_67",
+    "create.default.zookeeper.node": "true",
+
+    "site.global.app_user": "${USER_NAME}",
+    "site.global.app_root": "${AGENT_WORK_ROOT}/app/install/apache-storm-${pkg.version}",
+    "site.global.user_group": "hadoop",
+    "site.global.ganglia_server_host": "${NN_HOST}",
+    "site.global.ganglia_server_id": "Application2",
+    "site.global.ganglia_enabled":"true",
+    "site.global.ganglia_server_port": "8668",
+
+    "site.storm-site.storm.log.dir" : "${AGENT_LOG_ROOT}",
+    "site.storm-site.storm.zookeeper.servers": "['${ZK_HOST}']",
+    "site.storm-site.nimbus.thrift.port": "${NIMBUS.ALLOCATED_PORT}",
+    "site.storm-site.storm.local.dir": "${AGENT_WORK_ROOT}/app/tmp/storm",
+    "site.storm-site.transactional.zookeeper.root": "/transactional",
+    "site.storm-site.storm.zookeeper.port": "2181",
+    "site.storm-site.nimbus.childopts": "-Xmx1024m -javaagent:${AGENT_WORK_ROOT}/app/install/apache-storm-${pkg.version}/external/storm-jmxetric/lib/jmxetric-1.0.4.jar=host=${@//site/global/ganglia_server_host},port=${@//site/global/ganglia_server_port},wireformat31x=true,mode=multicast,config=${AGENT_WORK_ROOT}/app/install/apache-storm-${pkg.version}/external/storm-jmxetric/conf/jmxetric-conf.xml,process=Nimbus_JVM",
+    "site.storm-site.supervisor.childopts": "-Xmx256m -javaagent:${AGENT_WORK_ROOT}/app/install/apache-storm-${pkg.version}/external/storm-jmxetric/lib/jmxetric-1.0.4.jar=host=${@//site/global/ganglia_server_host},port=${@//site/global/ganglia_server_port},wireformat31x=true,mode=multicast,config=${AGENT_WORK_ROOT}/app/install/apache-storm-${pkg.version}/external/storm-jmxetric/conf/jmxetric-conf.xml,process=Supervisor_JVM",
+    "site.storm-site.ui.childopts": "-Xmx768m",
+    "site.storm-site.worker.childopts": "-Xmx768m -javaagent:${AGENT_WORK_ROOT}/app/install/apache-storm-${pkg.version}/external/storm-jmxetric/lib/jmxetric-1.0.4.jar=host=${@//site/global/ganglia_server_host},port=${@//site/global/ganglia_server_port},wireformat31x=true,mode=multicast,config=${AGENT_WORK_ROOT}/app/install/apache-storm-${pkg.version}/external/storm-jmxetric/conf/jmxetric-conf.xml,process=Worker_%ID%_JVM",
+    "site.storm-site.dev.zookeeper.path": "${AGENT_WORK_ROOT}/app/tmp/dev-storm-zookeeper",
+    "site.storm-site.drpc.invocations.port": "0",
+    "site.storm-site.storm.zookeeper.root": "${DEFAULT_ZK_PATH}",
+    "site.storm-site.transactional.zookeeper.port": "null",
+    "site.storm-site.nimbus.host": "${NIMBUS_HOST}",
+    "site.storm-site.ui.port": "${STORM_UI_SERVER.ALLOCATED_PORT}",
+    "site.storm-site.supervisor.slots.ports": "[${SUPERVISOR.ALLOCATED_PORT}{PER_CONTAINER},${SUPERVISOR.ALLOCATED_PORT}{PER_CONTAINER}]",
+    "site.storm-site.drpc.port": "0",
+    "site.storm-site.logviewer.port": "${SUPERVISOR.ALLOCATED_PORT}{PER_CONTAINER}"
+  },
+  "components": {
+    "slider-appmaster": {
+      "jvm.heapsize": "256M"
+    }
+  }
+}
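The "global" keys above follow a site.<config-type>.<property> naming
convention: site.storm-site.* entries end up in the rendered storm.yaml, while
site.global.* entries feed the package scripts' params. A rough sketch of that
prefix split (illustrative only, not Slider's actual code):

  # Group appConfig "global" keys by their site.<type>. prefix.
  def split_site_configs(global_conf):
      configs = {}
      for key, value in global_conf.items():
          if key.startswith("site."):
              _, config_type, prop = key.split(".", 2)
              configs.setdefault(config_type, {})[prop] = value
      return configs

  global_conf = {
      "site.global.app_user": "yarn",
      "site.storm-site.storm.zookeeper.port": "2181",
  }
  print(split_site_configs(global_conf))
  # {'global': {'app_user': 'yarn'},
  #  'storm-site': {'storm.zookeeper.port': '2181'}}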
diff --git a/app-packages/storm/appConfig-secured-default.json b/app-packages/storm/appConfig-secured-default.json
new file mode 100644
index 0000000..4c40ddf
--- /dev/null
+++ b/app-packages/storm/appConfig-secured-default.json
@@ -0,0 +1,67 @@
+{
+  "schema": "http://example.org/specification/v2.0.0",
+  "metadata": {
+  },
+  "global": {
+    "application.def": ".slider/package/STORM/slider-storm-app-package-${pkg.version}.zip",
+    "java_home": "/usr/jdk64/jdk1.7.0_67",
+    "create.default.zookeeper.node": "true",
+
+    "site.global.app_user": "${USER_NAME}",
+    "site.global.app_root": "${AGENT_WORK_ROOT}/app/install/apache-storm-${pkg.version}",
+    "site.global.user_group": "hadoop",
+    "site.global.ganglia_server_host": "${NN_HOST}",
+    "site.global.ganglia_server_id": "Application2",
+    "site.global.ganglia_enabled":"true",
+    "site.global.ganglia_server_port": "8668",
+
+    "site.storm-site.storm.log.dir" : "${AGENT_LOG_ROOT}",
+    "site.storm-site.storm.zookeeper.servers": "['${ZK_HOST}']",
+    "site.storm-site.nimbus.thrift.port": "${NIMBUS.ALLOCATED_PORT}",
+    "site.storm-site.storm.local.dir": "${AGENT_WORK_ROOT}/app/tmp/storm",
+    "site.storm-site.transactional.zookeeper.root": "/transactional",
+    "site.storm-site.storm.zookeeper.port": "2181",
+    "site.storm-site.nimbus.childopts": "-Xmx1024m -Djava.security.auth.login.config=${AGENT_WORK_ROOT}/app/install/apache-storm-${pkg.version}/conf/storm_jaas.conf -javaagent:${AGENT_WORK_ROOT}/app/install/apache-storm-${pkg.version}/external/storm-jmxetric/lib/jmxetric-1.0.4.jar=host=${@//site/global/ganglia_server_host},port=${@//site/global/ganglia_server_port},wireformat31x=true,mode=multicast,config=${AGENT_WORK_ROOT}/app/install/apache-storm-${pkg.version}/external/storm-jmxetric/conf/jmxetric-conf.xml,process=Nimbus_JVM",
+    "site.storm-site.supervisor.childopts": "-Xmx256m -Djava.security.auth.login.config=${AGENT_WORK_ROOT}/app/install/apache-storm-${pkg.version}/conf/storm_jaas.conf -javaagent:${AGENT_WORK_ROOT}/app/install/apache-storm-${pkg.version}/external/storm-jmxetric/lib/jmxetric-1.0.4.jar=host=${@//site/global/ganglia_server_host},port=${@//site/global/ganglia_server_port},wireformat31x=true,mode=multicast,config=${AGENT_WORK_ROOT}/app/install/apache-storm-${pkg.version}/external/storm-jmxetric/conf/jmxetric-conf.xml,process=Supervisor_JVM",
+    "site.storm-site.ui.childopts": "-Xmx768m -Djava.security.auth.login.config=${AGENT_WORK_ROOT}/app/install/apache-storm-${pkg.version}/conf/storm_jaas.conf",
+    "site.storm-site.worker.childopts": "-Xmx768m -javaagent:${AGENT_WORK_ROOT}/app/install/apache-storm-${pkg.version}/external/storm-jmxetric/lib/jmxetric-1.0.4.jar=host=${@//site/global/ganglia_server_host},port=${@//site/global/ganglia_server_port},wireformat31x=true,mode=multicast,config=${AGENT_WORK_ROOT}/app/install/apache-storm-${pkg.version}/external/storm-jmxetric/conf/jmxetric-conf.xml,process=Worker_%ID%_JVM",
+    "site.storm-site.dev.zookeeper.path": "${AGENT_WORK_ROOT}/app/tmp/dev-storm-zookeeper",
+    "site.storm-site.drpc.invocations.port": "0",
+    "site.storm-site.storm.zookeeper.root": "${DEFAULT_ZK_PATH}",
+    "site.storm-site.transactional.zookeeper.port": "null",
+    "site.storm-site.nimbus.host": "${NIMBUS_HOST}",
+    "site.storm-site.ui.port": "${STORM_UI_SERVER.ALLOCATED_PORT}",
+    "site.storm-site.supervisor.slots.ports": "[${SUPERVISOR.ALLOCATED_PORT}{PER_CONTAINER},${SUPERVISOR.ALLOCATED_PORT}{PER_CONTAINER}]",
+    "site.storm-site.drpc.port": "0",
+    "site.storm-site.logviewer.port": "${SUPERVISOR.ALLOCATED_PORT}{PER_CONTAINER}",
+
+    "site.storm-site.nimbus.authorizer": "backtype.storm.security.auth.authorizer.SimpleACLAuthorizer",
+    "site.storm-site.storm.thrift.transport": "backtype.storm.security.auth.kerberos.KerberosSaslTransportPlugin",
+    "site.storm-site.java.security.auth.login.config": "${AGENT_WORK_ROOT}/app/install/apache-storm-${pkg.version}/conf/storm_jaas.conf",
+    "site.storm-site.storm.principal.tolocal": "backtype.storm.security.auth.KerberosPrincipalToLocal",
+    "site.storm-site.storm.zookeeper.superACL": "sasl:${USER_NAME}",
+    "site.storm-site.nimbus.admins": "['${USER_NAME}']",
+    "site.storm-site.nimbus.users": "['${USER_NAME}']",
+    "site.storm-site.nimbus.supervisor.users": "['${USER_NAME}']",
+    "site.storm-site.nimubs.authorizer": "backtype.storm.security.auth.authorizer.SimpleACLAuthorizer", 
+    "site.storm-site.storm.thrift.transport": "backtype.storm.security.auth.kerberos.KerberosSaslTransportPlugin",
+    "site.storm-site.storm.principal.tolocal": "backtype.storm.security.auth.KerberosPrincipalToLocal",
+    "site.storm-site.ui.filter": "org.apache.hadoop.security.authentication.server.AuthenticationFilter",
+    "site.storm-site.ui.filter.params": "{'type': 'kerberos', 'kerberos.principal': 'HTTP/_HOST', 'kerberos.keytab': '/etc/security/keytabs/spnego.service.keytab', 'kerberos.name.rules': 'RULE:[2:$1@$0]([jt]t@.*EXAMPLE.COM)s/.*/$MAPRED_USER/ RULE:[2:$1@$0]([nd]n@.*EXAMPLE.COM)s/.*/$HDFS_USER/DEFAULT'}",
+
+    "site.storm-env.kerberos_domain": "EXAMPLE.COM",
+    "site.storm-env.storm_client_principal_name": "${USER_NAME}@EXAMPLE.COM",
+    "site.storm-env.storm_server_principal_name": "${USER_NAME}/_HOST@EXAMPLE.COM",
+    "site.storm-env.storm_client_keytab": "${AGENT_WORK_ROOT}/keytabs/${USER_NAME}.STORM.client.keytab",
+    "site.storm-env.storm_server_keytab": "${AGENT_WORK_ROOT}/keytabs/${USER_NAME}.STORM.nimbus.keytab"
+    
+  },
+  "components": {
+    "slider-appmaster": {
+      "jvm.heapsize": "256M",
+      "slider.hdfs.keytab.dir": ".slider/keytabs/storm",
+      "slider.am.login.keytab.name": "${USER_NAME}.headless.keytab",
+      "slider.keytab.principal.name": "${USER_NAME}"
+    }
+  }
+}
diff --git a/app-packages/storm/appConfig.json b/app-packages/storm/appConfig.json
deleted file mode 100644
index 24078cf..0000000
--- a/app-packages/storm/appConfig.json
+++ /dev/null
@@ -1,126 +0,0 @@
-{
-  "schema": "http://example.org/specification/v2.0.0",
-  "metadata": {
-  },
-  "global": {
-    "application.def": "package/storm_v091.zip",
-    "config_types": "storm-site",
-    "java_home": "/usr/jdk64/jdk1.7.0_45",
-    "package_list": "files/apache-storm-0.9.1.2.1.1.0-237.tar.gz",
-    "create.default.zookeeper.node": "true",
-    "site.global.app_user": "yarn",
-    "site.global.app_root": "${AGENT_WORK_ROOT}/app/install/apache-storm-0.9.1.2.1.1.0-237",
-    "site.global.user_group": "hadoop",
-    "site.global.security_enabled": "false",
-    "site.global.ganglia_server_host": "${NN_HOST}",
-    "site.global.ganglia_server_id": "Application2",
-    "site.global.ganglia_enabled":"true",
-    "site.global.ganglia_server_port": "8668",
-    "site.global.rest_api_port": "${STORM_REST_API.ALLOCATED_PORT}",
-    "site.global.rest_api_admin_port": "${STORM_REST_API.ALLOCATED_PORT}",
-    "site.storm-site.topology.tuple.serializer": "backtype.storm.serialization.types.ListDelegateSerializer",
-    "site.storm-site.topology.workers": "1",
-    "site.storm-site.drpc.worker.threads": "64",
-    "site.storm-site.storm.zookeeper.servers": "['${ZK_HOST}']",
-    "site.storm-site.supervisor.heartbeat.frequency.secs": "5",
-    "site.storm-site.topology.executor.send.buffer.size": "1024",
-    "site.storm-site.drpc.childopts": "-Xmx768m",
-    "site.storm-site.nimbus.thrift.port": "${NIMBUS.ALLOCATED_PORT}",
-    "site.storm-site.storm.zookeeper.retry.intervalceiling.millis": "30000",
-    "site.storm-site.storm.local.dir": "${AGENT_WORK_ROOT}/app/tmp/storm",
-    "site.storm-site.topology.receiver.buffer.size": "8",
-    "site.storm-site.storm.messaging.netty.client_worker_threads": "1",
-    "site.storm-site.transactional.zookeeper.root": "/transactional",
-    "site.storm-site.drpc.request.timeout.secs": "600",
-    "site.storm-site.topology.skip.missing.kryo.registrations": "false",
-    "site.storm-site.worker.heartbeat.frequency.secs": "1",
-    "site.storm-site.zmq.hwm": "0",
-    "site.storm-site.storm.zookeeper.connection.timeout": "15000",
-    "site.storm-site.topology.max.error.report.per.interval": "5",
-    "site.storm-site.storm.messaging.netty.server_worker_threads": "1",
-    "site.storm-site.supervisor.worker.start.timeout.secs": "120",
-    "site.storm-site.zmq.threads": "1",
-    "site.storm-site.topology.acker.executors": "null",
-    "site.storm-site.storm.local.mode.zmq": "false",
-    "site.storm-site.topology.max.task.parallelism": "null",
-    "site.storm-site.storm.zookeeper.port": "2181",
-    "site.storm-site.nimbus.childopts": "-Xmx1024m -javaagent:${AGENT_WORK_ROOT}/app/install/apache-storm-0.9.1.2.1.1.0-237/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar=host=${NN_HOST},port=8668,wireformat31x=true,mode=multicast,config=${AGENT_WORK_ROOT}/app/install/apache-storm-0.9.1.2.1.1.0-237/contrib/storm-jmxetric/conf/jmxetric-conf.xml,process=Nimbus_JVM",
-    "site.storm-site.worker.childopts": "-Xmx768m -javaagent:${AGENT_WORK_ROOT}/app/install/apache-storm-0.9.1.2.1.1.0-237/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar=host=${NN_HOST},port=8668,wireformat31x=true,mode=multicast,config=${AGENT_WORK_ROOT}/app/install/apache-storm-0.9.1.2.1.1.0-237/contrib/storm-jmxetric/conf/jmxetric-conf.xml,process=Worker_%ID%_JVM",
-    "site.storm-site.drpc.queue.size": "128",
-    "site.storm-site.storm.zookeeper.retry.times": "5",
-    "site.storm-site.nimbus.monitor.freq.secs": "10",
-    "site.storm-site.storm.cluster.mode": "distributed",
-    "site.storm-site.dev.zookeeper.path": "${AGENT_WORK_ROOT}/app/tmp/dev-storm-zookeeper",
-    "site.storm-site.drpc.invocations.port": "0",
-    "site.storm-site.storm.zookeeper.root": "${DEF_ZK_PATH}",
-    "site.storm-site.logviewer.childopts": "-Xmx128m",
-    "site.storm-site.transactional.zookeeper.port": "null",
-    "site.storm-site.topology.worker.childopts": "null",
-    "site.storm-site.topology.max.spout.pending": "null",
-    "site.storm-site.nimbus.cleanup.inbox.freq.secs": "600",
-    "site.storm-site.storm.messaging.netty.min_wait_ms": "100",
-    "site.storm-site.nimbus.task.timeout.secs": "30",
-    "site.storm-site.nimbus.thrift.max_buffer_size": "1048576",
-    "site.storm-site.topology.sleep.spout.wait.strategy.time.ms": "1",
-    "site.storm-site.topology.optimize": "true",
-    "site.storm-site.nimbus.reassign": "true",
-    "site.storm-site.storm.messaging.transport": "backtype.storm.messaging.netty.Context",
-    "site.storm-site.logviewer.appender.name": "A1",
-    "site.storm-site.nimbus.host": "${NIMBUS_HOST}",
-    "site.storm-site.ui.port": "${STORM_UI_SERVER.ALLOCATED_PORT}",
-    "site.storm-site.supervisor.slots.ports": "[${SUPERVISOR.ALLOCATED_PORT}{DO_NOT_PROPAGATE},${SUPERVISOR.ALLOCATED_PORT}{DO_NOT_PROPAGATE}]",
-    "site.storm-site.nimbus.file.copy.expiration.secs": "600",
-    "site.storm-site.supervisor.monitor.frequency.secs": "3",
-    "site.storm-site.transactional.zookeeper.servers": "null",
-    "site.storm-site.zmq.linger.millis": "5000",
-    "site.storm-site.topology.error.throttle.interval.secs": "10",
-    "site.storm-site.topology.worker.shared.thread.pool.size": "4",
-    "site.storm-site.java.library.path": "/usr/local/lib:/opt/local/lib:/usr/lib",
-    "site.storm-site.topology.spout.wait.strategy": "backtype.storm.spout.SleepSpoutWaitStrategy",
-    "site.storm-site.task.heartbeat.frequency.secs": "3",
-    "site.storm-site.topology.transfer.buffer.size": "1024",
-    "site.storm-site.storm.zookeeper.session.timeout": "20000",
-    "site.storm-site.topology.executor.receive.buffer.size": "1024",
-    "site.storm-site.topology.stats.sample.rate": "0.05",
-    "site.storm-site.topology.fall.back.on.java.serialization": "true",
-    "site.storm-site.supervisor.childopts": "-Xmx256m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.port=0 -javaagent:${AGENT_WORK_ROOT}/app/install/apache-storm-0.9.1.2.1.1.0-237/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar=host=${NN_HOST},port=8668,wireformat31x=true,mode=multicast,config=${AGENT_WORK_ROOT}/app/install/apache-storm-0.9.1.2.1.1.0-237/contrib/storm-jmxetric/conf/jmxetric-conf.xml,process=Supervisor_JVM",
-    "site.storm-site.topology.enable.message.timeouts": "true",
-    "site.storm-site.storm.messaging.netty.max_wait_ms": "1000",
-    "site.storm-site.nimbus.topology.validator": "backtype.storm.nimbus.DefaultTopologyValidator",
-    "site.storm-site.nimbus.supervisor.timeout.secs": "60",
-    "site.storm-site.topology.disruptor.wait.strategy": "com.lmax.disruptor.BlockingWaitStrategy",
-    "site.storm-site.nimbus.inbox.jar.expiration.secs": "3600",
-    "site.storm-site.drpc.port": "0",
-    "site.storm-site.topology.kryo.factory": "backtype.storm.serialization.DefaultKryoFactory",
-    "site.storm-site.storm.zookeeper.retry.interval": "1000",
-    "site.storm-site.storm.messaging.netty.max_retries": "30",
-    "site.storm-site.topology.tick.tuple.freq.secs": "null",
-    "site.storm-site.supervisor.enable": "true",
-    "site.storm-site.nimbus.task.launch.secs": "120",
-    "site.storm-site.task.refresh.poll.secs": "10",
-    "site.storm-site.topology.message.timeout.secs": "30",
-    "site.storm-site.storm.messaging.netty.buffer_size": "5242880",
-    "site.storm-site.topology.state.synchronization.timeout.secs": "60",
-    "site.storm-site.supervisor.worker.timeout.secs": "30",
-    "site.storm-site.topology.trident.batch.emit.interval.millis": "500",
-    "site.storm-site.topology.builtin.metrics.bucket.size.secs": "60",
-    "site.storm-site.storm.thrift.transport": "backtype.storm.security.auth.SimpleTransportPlugin",
-    "site.storm-site.logviewer.port": "${SUPERVISOR.ALLOCATED_PORT}{DO_NOT_PROPAGATE}",
-    "site.storm-site.topology.debug": "false"
-  },
-  "components": {
-    "slider-appmaster": {
-      "jvm.heapsize": "256M"
-    },
-    "NIMBUS": {
-    },
-    "STORM_REST_API": {
-    },
-    "STORM_UI_SERVER": {
-    },
-    "DRPC_SERVER": {
-    },
-    "SUPERVISOR": {
-    }
-  }
-}
diff --git a/app-packages/storm/configuration/global.xml b/app-packages/storm/configuration/global.xml
deleted file mode 100644
index 5cc9170..0000000
--- a/app-packages/storm/configuration/global.xml
+++ /dev/null
@@ -1,39 +0,0 @@
-<?xml version="1.0"?>
-<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
-<!--
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
--->
-
-<configuration>
-  <property>
-    <name>storm_user</name>
-    <value>storm</value>
-    <description></description>
-  </property>
-  <property>
-    <name>storm_log_dir</name>
-    <value>/var/log/storm</value>
-    <description></description>
-  </property>
-  <property>
-    <name>storm_pid_dir</name>
-    <value>/var/run/storm</value>
-    <description></description>
-  </property>
-</configuration>
diff --git a/app-packages/storm/configuration/storm-env.xml b/app-packages/storm/configuration/storm-env.xml
new file mode 100644
index 0000000..091c08d
--- /dev/null
+++ b/app-packages/storm/configuration/storm-env.xml
@@ -0,0 +1,65 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<!--
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+
+<configuration>
+
+  <property>
+    <name>kerberos_domain</name>
+    <value></value>
+    <description>The kerberos domain to be used for this Storm cluster</description>
+  </property>
+  <property>
+    <name>storm_client_principal_name</name>
+    <value></value>
+    <description>The principal name for the Storm client to be used to communicate with Nimbus and Zookeeper</description>
+  </property>
+  <property>
+    <name>storm_server_principal_name</name>
+    <value></value>
+    <description>The principal name for the Storm server to be used by Nimbus</description>
+  </property>
+  <property>
+    <name>storm_client_keytab</name>
+    <value></value>
+    <description>The keytab file path for Storm client</description>
+  </property>
+  <property>
+    <name>storm_server_keytab</name>
+    <value></value>
+    <description>The keytab file path for Storm server</description>
+  </property>
+  <!-- storm-env.sh -->
+  <property>
+    <name>content</name>
+    <description>This is the jinja template for storm-env.sh file</description>
+    <value>
+#!/bin/bash
+
+# Set Storm specific environment variables here.
+
+# The java implementation to use.
+export JAVA_HOME={{java_home}}
+
+# export STORM_CONF_DIR=""
+    </value>
+  </property>
+</configuration>
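
The content property above is a Jinja template that the agent renders into storm-env.sh at install time. Assuming java_home resolves to /usr/jdk64/jdk1.7.0_67 (a made-up path), the rendered file would read:

    #!/bin/bash
    # Set Storm specific environment variables here.
    # The java implementation to use.
    export JAVA_HOME=/usr/jdk64/jdk1.7.0_67
    # export STORM_CONF_DIR=""
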
diff --git a/app-packages/storm/configuration/storm-site.xml b/app-packages/storm/configuration/storm-site.xml
index 6eca8f9..b3cce6a 100644
--- a/app-packages/storm/configuration/storm-site.xml
+++ b/app-packages/storm/configuration/storm-site.xml
@@ -118,7 +118,7 @@
   </property>
   <property>
     <name>nimbus.childopts</name>
-    <value>-Xmx1024m -Djava.security.auth.login.config=/etc/storm/storm_jaas.conf -javaagent:/usr/lib/storm/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar=host={0},port=8649,wireformat31x=true,mode=multicast,config=/usr/lib/storm/contrib/storm-jmxetric/conf/jmxetric-conf.xml,process=Nimbus_JVM</value>
+    <value>-Xmx1024m -Djava.security.auth.login.config=/etc/storm/conf/storm_jaas.conf -javaagent:/usr/lib/storm/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar=host=localhost,port=8649,wireformat31x=true,mode=multicast,config=/usr/lib/storm/contrib/storm-jmxetric/conf/jmxetric-conf.xml,process=Nimbus_JVM</value>
     <description>This parameter is used by the storm-deploy project to configure the jvm options for the nimbus daemon.</description>
   </property>
   <property>
@@ -188,7 +188,7 @@
   </property>
   <property>
     <name>ui.childopts</name>
-    <value>-Xmx768m -Djava.security.auth.login.config=/etc/storm/storm_jaas.conf</value>
+    <value>-Xmx768m -Djava.security.auth.login.config=/etc/storm/conf/storm_jaas.conf</value>
     <description>Childopts for Storm UI Java process.</description>
   </property>
   <property>
@@ -264,7 +264,7 @@
   </property>
   <property>
     <name>supervisor.childopts</name>
-    <value>-Xmx256m -Djava.security.auth.login.config=/etc/storm/storm_jaas.conf -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.port=56431 -javaagent:/usr/lib/storm/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar=host={0},port=8650,wireformat31x=true,mode=multicast,config=/usr/lib/storm/contrib/storm-jmxetric/conf/jmxetric-conf.xml,process=Supervisor_JVM</value>
+    <value>-Xmx256m -Djava.security.auth.login.config=/etc/storm/conf/storm_jaas.conf -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.port=56431 -javaagent:/usr/lib/storm/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar=host=localhost,port=8650,wireformat31x=true,mode=multicast,config=/usr/lib/storm/contrib/storm-jmxetric/conf/jmxetric-conf.xml,process=Supervisor_JVM</value>
     <description>This parameter is used by the storm-deploy project to configure the jvm options for the supervisor daemon.</description>
   </property>
   <property>
@@ -291,15 +291,8 @@
     <description>How often the supervisor sends a heartbeat to the master.</description>
   </property>
   <property>
-    <name>supervisor.enable</name>
-    <value>true</value>
-    <description>Whether or not the supervisor should launch workers assigned to it. Defaults
-       to true -- and you should probably never change this value. This configuration
-       is used in the Storm unit tests.</description>
-  </property>
-  <property>
     <name>worker.childopts</name>
-    <value>-Xmx768m -javaagent:/usr/lib/storm/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar=host={0},port=8650,wireformat31x=true,mode=multicast,config=/usr/lib/storm/contrib/storm-jmxetric/conf/jmxetric-conf.xml,process=Worker_%ID%_JVM</value>
+    <value>-Xmx768m -javaagent:/usr/lib/storm/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar=host=localhost,port=8650,wireformat31x=true,mode=multicast,config=/usr/lib/storm/contrib/storm-jmxetric/conf/jmxetric-conf.xml,process=Worker_%ID%_JVM</value>
     <description>The jvm opts provided to workers launched by this supervisor. All \"%ID%\" substrings are replaced with an identifier for this worker.</description>
   </property>
   <property>
diff --git a/app-packages/storm/jmx_metrics.json b/app-packages/storm/jmx_metrics.json
index f7d4e60..b0816b1 100644
--- a/app-packages/storm/jmx_metrics.json
+++ b/app-packages/storm/jmx_metrics.json
@@ -2,17 +2,17 @@
     "Component": {
         "NIMBUS": {
             "FreeSlots": {
-                "metric": "$['slots.free']",
+                "metric": "$['slotsFree']",
                 "pointInTime": true,
                 "temporal": false
             },
             "Tasks": {
-                "metric": "$['tasks.total']",
+                "metric": "$['tasksTotal']",
                 "pointInTime": true,
                 "temporal": false
             },
             "Executors": {
-                "metric": "$['executors.total']",
+                "metric": "$['executorsTotal']",
                 "pointInTime": true,
                 "temporal": false
             },
@@ -22,7 +22,7 @@
                 "temporal": false
             },
             "NimbusUptime": {
-                "metric": "$['nimbus.uptime']",
+                "metric": "$['nimbusUptime']",
                 "pointInTime": true,
                 "temporal": false
             }
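
The renamed metric keys are the camelCase field names returned by the Storm UI server's REST API, which this release uses in place of the retired STORM_REST_API component. A quick way to confirm the fields, sketched with an assumed UI host and port:

    # Illustrative check of the camelCase summary fields; the host and
    # port here are assumptions for the example.
    import json
    import urllib2

    url = "http://ui-host.example.com:8744/api/v1/cluster/summary"
    summary = json.load(urllib2.urlopen(url))
    for key in ("slotsFree", "tasksTotal", "executorsTotal", "nimbusUptime"):
        print key, summary.get(key)
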
diff --git a/app-packages/storm/metainfo.xml b/app-packages/storm/metainfo.xml
index dbe8549..28b0e9b 100644
--- a/app-packages/storm/metainfo.xml
+++ b/app-packages/storm/metainfo.xml
@@ -21,7 +21,7 @@
   <application>
     <name>STORM</name>
     <comment>Apache Hadoop Stream processing framework</comment>
-    <version>0.9.1.2.1</version>
+    <version>${pkg.version}</version>
     <exportedConfigs>storm-site</exportedConfigs>
 
     <exportGroups>
@@ -29,19 +29,19 @@
         <name>QuickLinks</name>
         <exports>
           <export>
-            <name>app.jmx</name>
-            <value>http://${STORM_REST_API_HOST}:${site.global.rest_api_port}/api/cluster/summary</value>
+            <name>org.apache.slider.jmx</name>
+            <value>http://${STORM_UI_SERVER_HOST}:${site.storm-site.ui.port}/api/v1/cluster/summary</value>
           </export>
           <export>
-            <name>app.monitor</name>
+            <name>org.apache.slider.monitor</name>
             <value>http://${STORM_UI_SERVER_HOST}:${site.storm-site.ui.port}</value>
           </export>
           <export>
-            <name>app.metrics</name>
+            <name>org.apache.slider.metrics</name>
             <value>http://${site.global.ganglia_server_host}/cgi-bin/rrd.py?c=${site.global.ganglia_server_id}</value>
           </export>
           <export>
-            <name>ganglia.ui</name>
+            <name>org.apache.slider.metrics.ui</name>
             <value>http://${site.global.ganglia_server_host}/ganglia?c=${site.global.ganglia_server_id}</value>
           </export>
           <export>
@@ -55,8 +55,7 @@
     <commandOrders>
       <commandOrder>
         <command>NIMBUS-START</command>
-        <requires>SUPERVISOR-INSTALLED,STORM_UI_SERVER-INSTALLED,DRPC_SERVER-INSTALLED,STORM_REST_API-INSTALLED
-        </requires>
+        <requires>SUPERVISOR-INSTALLED,STORM_UI_SERVER-INSTALLED,DRPC_SERVER-INSTALLED</requires>
       </commandOrder>
       <commandOrder>
         <command>SUPERVISOR-START</command>
@@ -67,10 +66,6 @@
         <requires>NIMBUS-STARTED</requires>
       </commandOrder>
       <commandOrder>
-        <command>STORM_REST_API-START</command>
-        <requires>NIMBUS-STARTED,DRPC_SERVER-STARTED,STORM_UI_SERVER-STARTED</requires>
-      </commandOrder>
-      <commandOrder>
         <command>STORM_UI_SERVER-START</command>
         <requires>NIMBUS-STARTED</requires>
       </commandOrder>
@@ -81,8 +76,10 @@
       <component>
         <name>NIMBUS</name>
         <category>MASTER</category>
+        <publishConfig>true</publishConfig>
         <autoStartOnFailure>true</autoStartOnFailure>
-        <appExports>QuickLinks-nimbus.host_port,QuickLinks-ganglia.ui,QuickLinks-app.metrics</appExports>
+        <appExports>QuickLinks-nimbus.host_port,QuickLinks-org.apache.slider.metrics.ui,QuickLinks-org.apache.slider.metrics</appExports>
+        <maxInstanceCount>1</maxInstanceCount>
         <commandScript>
           <script>scripts/nimbus.py</script>
           <scriptType>PYTHON</scriptType>
@@ -91,18 +88,6 @@
       </component>
 
       <component>
-        <name>STORM_REST_API</name>
-        <category>MASTER</category>
-        <autoStartOnFailure>true</autoStartOnFailure>
-        <appExports>QuickLinks-app.jmx</appExports>
-        <commandScript>
-          <script>scripts/rest_api.py</script>
-          <scriptType>PYTHON</scriptType>
-          <timeout>600</timeout>
-        </commandScript>
-      </component>
-
-      <component>
         <name>SUPERVISOR</name>
         <category>SLAVE</category>
         <autoStartOnFailure>true</autoStartOnFailure>
@@ -123,7 +108,8 @@
         <name>STORM_UI_SERVER</name>
         <category>MASTER</category>
         <publishConfig>true</publishConfig>
-        <appExports>QuickLinks-app.monitor</appExports>
+        <appExports>QuickLinks-org.apache.slider.monitor,QuickLinks-org.apache.slider.jmx</appExports>
+        <maxInstanceCount>1</maxInstanceCount>
         <autoStartOnFailure>true</autoStartOnFailure>
         <commandScript>
           <script>scripts/ui_server.py</script>
@@ -136,6 +122,7 @@
         <name>DRPC_SERVER</name>
         <category>MASTER</category>
         <autoStartOnFailure>true</autoStartOnFailure>
+        <maxInstanceCount>1</maxInstanceCount>
         <commandScript>
           <script>scripts/drpc_server.py</script>
           <scriptType>PYTHON</scriptType>
@@ -150,10 +137,24 @@
         <packages>
           <package>
             <type>tarball</type>
-            <name>files/apache-storm-0.9.1.2.1.1.0-237.tar.gz</name>
+            <name>files/apache-storm-${pkg.version}.tar.gz</name>
           </package>
         </packages>
       </osSpecific>
     </osSpecifics>
+
+    <configFiles>
+      <configFile>
+        <type>yaml</type>
+        <fileName>storm.yaml</fileName>
+        <dictionaryName>storm-site</dictionaryName>
+      </configFile>
+      <configFile>
+        <type>env</type>
+        <fileName>storm-env.sh</fileName>
+        <dictionaryName>storm-env</dictionaryName>
+      </configFile>
+    </configFiles>
+
   </application>
 </metainfo>
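
After substitution, the renamed exports publish under the org.apache.slider.* names that Slider's registry and client tooling look up. With assumed hosts and an assumed allocated UI port, the quick links would resolve along these lines:

    org.apache.slider.monitor    -> http://uihost.example.com:45123
    org.apache.slider.jmx        -> http://uihost.example.com:45123/api/v1/cluster/summary
    org.apache.slider.metrics.ui -> http://ganglia.example.com/ganglia?c=Application2
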
diff --git a/app-packages/storm/package/scripts/params.py b/app-packages/storm/package/scripts/params.py
index cf21b27..1ccba5e 100644
--- a/app-packages/storm/package/scripts/params.py
+++ b/app-packages/storm/package/scripts/params.py
@@ -34,12 +34,10 @@
 java64_home = config['hostLevelParams']['java_home']
 nimbus_host = config['configurations']['storm-site']['nimbus.host']
 nimbus_port = config['configurations']['storm-site']['nimbus.thrift.port']
-nimbus_host = config['configurations']['storm-site']['nimbus.host']
-rest_api_port = config['configurations']['global']['rest_api_port']
-rest_api_admin_port = config['configurations']['global']['rest_api_admin_port']
 rest_api_conf_file = format("{conf_dir}/config.yaml")
-rest_lib_dir = format("{app_root}/contrib/storm-rest")
-storm_bin = format("{app_root}/bin/storm")
+rest_lib_dir = format("{app_root}/external/storm-rest")
+storm_bin = format("{app_root}/bin/storm.py")
+storm_env_sh_template = config['configurations']['storm-env']['content']
 
 ganglia_installed = config['configurations']['global']['ganglia_enabled']
 if ganglia_installed:
@@ -47,12 +45,17 @@
   ganglia_server = config['configurations']['global']['ganglia_server_host']
   ganglia_port = config['configurations']['global']['ganglia_server_port']
 
-_authentication = config['configurations']['core-site']['hadoop.security.authentication']
-security_enabled = ( not is_empty(_authentication) and _authentication == 'kerberos')
+security_enabled = config['configurations']['global']['security_enabled']
 
 if security_enabled:
   _hostname_lowercase = config['hostname'].lower()
-  _kerberos_domain = config['configurations']['global']['kerberos_domain']
-  _storm_principal_name = config['configurations']['global']['storm_principal_name']
-  storm_jaas_principal = _storm_principal_name.replace('_HOST', _hostname_lowercase)
-  storm_keytab_path = config['configurations']['global']['storm_keytab']
+  _kerberos_domain = config['configurations']['storm-env']['kerberos_domain']
+  _storm_client_principal_name = config['configurations']['storm-env']['storm_client_principal_name']
+  _storm_server_principal_name = config['configurations']['storm-env']['storm_server_principal_name']
+
+  storm_jaas_client_principal = _storm_client_principal_name.replace('_HOST', _hostname_lowercase)
+  storm_client_keytab_path = config['configurations']['storm-env']['storm_client_keytab']
+  storm_jaas_server_principal = _storm_server_principal_name.replace('_HOST',nimbus_host.lower())
+  storm_jaas_stormclient_servicename = storm_jaas_server_principal.split("/")[0]
+  storm_server_keytab_path = config['configurations']['storm-env']['storm_server_keytab']
+  kinit_path_local = functions.get_kinit_path(["/usr/bin", "/usr/kerberos/bin", "/usr/sbin"])
diff --git a/app-packages/storm/package/scripts/service.py b/app-packages/storm/package/scripts/service.py
index 13fcef2..7a7dbdf 100644
--- a/app-packages/storm/package/scripts/service.py
+++ b/app-packages/storm/package/scripts/service.py
@@ -21,6 +21,8 @@
 
 from resource_management import *
 import time
+import os
+import sys
 
 """
 Slider package uses jps as pgrep does not list the whole process start command
@@ -31,6 +33,7 @@
   import params
   import status_params
 
+  python_binary = os.environ['PYTHON_EXE'] if 'PYTHON_EXE' in os.environ else sys.executable
   pid_file = status_params.pid_files[name]
   container_id = status_params.container_id
   no_op_test = format("ls {pid_file} >/dev/null 2>&1 && ps `cat {pid_file}` >/dev/null 2>&1")
@@ -50,18 +53,16 @@
     if name == "rest_api":
       cmd = format("{rest_process_cmd} {rest_api_conf_file} > {log_dir}/restapi.log")
     else:
-      cmd = format("env JAVA_HOME={java64_home} PATH=$PATH:{java64_home}/bin STORM_BASE_DIR={app_root} STORM_CONF_DIR={conf_dir} {storm_bin} {name} > {log_dir}/{name}.out 2>&1")
+      cmd = format("env JAVA_HOME={java64_home} PATH={java64_home}/bin:$PATH STORM_BASE_DIR={app_root} STORM_CONF_DIR={conf_dir} {python_binary} {storm_bin} {name} > {log_dir}/{name}.out 2>&1")
 
     Execute(cmd,
             not_if=no_op_test,
-            user=params.storm_user,
             logoutput=False,
             wait_for_finish=False
     )
 
     if name == "rest_api":
       Execute(crt_pid_cmd,
-              user=params.storm_user,
               logoutput=True,
               tries=6,
               try_sleep=10
@@ -70,14 +71,13 @@
       content = None
       for i in xrange(12):
         Execute(crt_pid_cmd,
-                user=params.storm_user,
                 logoutput=True
         )
         with open(pid_file) as f:
           content = f.readline().strip()
         if content.isdigit():
           break;
-        File(pid_file, action = "delete")
+        File(pid_file, action="delete")
         time.sleep(10)
         pass
 
diff --git a/app-packages/storm/package/scripts/status_params.py b/app-packages/storm/package/scripts/status_params.py
index 5907446..7dda158 100644
--- a/app-packages/storm/package/scripts/status_params.py
+++ b/app-packages/storm/package/scripts/status_params.py
@@ -33,5 +33,5 @@
              "ui": pid_ui,
              "nimbus": pid_nimbus,
              "supervisor": pid_supervisor,
-             "drpc": pid_drpc,
-             "rest_api": pid_rest_api}
\ No newline at end of file
+             "rest_api": pid_rest_api,
+             "drpc": pid_drpc}
\ No newline at end of file
diff --git a/app-packages/storm/package/scripts/storm.py b/app-packages/storm/package/scripts/storm.py
index bce272b..8ecb3a1 100644
--- a/app-packages/storm/package/scripts/storm.py
+++ b/app-packages/storm/package/scripts/storm.py
@@ -43,8 +43,16 @@
                owner = params.storm_user,
                group = params.user_group
   )
-  
+
+  File(format("{conf_dir}/storm-env.sh"),
+    owner=params.storm_user,
+    content=InlineTemplate(params.storm_env_sh_template)
+  )
+
   if params.security_enabled:
-    TemplateConfig( format("{conf_dir}/storm_jaas.conf"),
-      owner = params.storm_user
-    )
\ No newline at end of file
+    File(format("{conf_dir}/storm_jaas.conf"),
+              content=Template("storm_jaas.conf.j2"),
+              owner = params.storm_user,
+              group = params.user_group
+    )
+
diff --git a/app-packages/storm/package/scripts/yaml_config.py b/app-packages/storm/package/scripts/yaml_config.py
index 39261be..5f763cc 100644
--- a/app-packages/storm/package/scripts/yaml_config.py
+++ b/app-packages/storm/package/scripts/yaml_config.py
@@ -19,9 +19,13 @@
 """
 
 import re
+import socket
 from resource_management import *
 
 def escape_yaml_propetry(value):
+  # pre-process value for any "_HOST" tokens
+  value = value.replace('_HOST', socket.getfqdn())
+
   unquouted = False
   unquouted_values = ["null","Null","NULL","true","True","TRUE","false","False","FALSE","YES","Yes","yes","NO","No","no","ON","On","on","OFF","Off","off"]
   
@@ -31,7 +35,11 @@
   # if is list [a,b,c]
   if re.match('^\w*\[.+\]\w*$', value):
     unquouted = True
-    
+
+  # if is map {'a':'b'}
+  if re.match('^\w*\{.+\}\w*$', value):
+    unquouted = True
+
   try:
     int(value)
     unquouted = True
@@ -50,6 +58,10 @@
     
   return value
 
+def yaml_inline_template(configurations):
+  return source.InlineTemplate('''{% for key, value in configurations_dict.items() %}{{ key }}: {{ escape_yaml_propetry(value) }}
+{% endfor %}''', configurations_dict=configurations, extra_imports=[escape_yaml_propetry])
+
 def yaml_config(
   filename,
   configurations = None,
@@ -58,8 +70,7 @@
   owner = None,
   group = None
 ):
-    config_content = source.InlineTemplate('''{% for key, value in configurations_dict.items() %}{{ key }}: {{ escape_yaml_propetry(value) }}
-{% endfor %}''', configurations_dict=configurations, extra_imports=[escape_yaml_propetry])
+    config_content = yaml_inline_template(configurations)
 
     File (format("{conf_dir}/{filename}"),
       content = config_content,
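
With the two additions above, the quoting rules are: expand _HOST to the local FQDN, then leave YAML literals (null/true/false and friends), numbers, [..] lists and now {..} maps unquoted, and single-quote everything else. Expected behaviour, assuming the local FQDN is node1.example.com:

    # Assumed inputs -> outputs of escape_yaml_propetry (FQDN is an assumption)
    escape_yaml_propetry("null")           # null             (literal, unquoted)
    escape_yaml_propetry("8744")           # 8744             (number, unquoted)
    escape_yaml_propetry("['zk1','zk2']")  # ['zk1','zk2']    (list, unquoted)
    escape_yaml_propetry("{'a': 'b'}")     # {'a': 'b'}       (map, now unquoted)
    escape_yaml_propetry("HTTP/_HOST")     # 'HTTP/node1.example.com' (quoted)
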
diff --git a/app-packages/storm/package/templates/config.yaml.j2 b/app-packages/storm/package/templates/config.yaml.j2
index 32d2c99..aa4ec46 100644
--- a/app-packages/storm/package/templates/config.yaml.j2
+++ b/app-packages/storm/package/templates/config.yaml.j2
@@ -16,15 +16,6 @@
 nimbusHost: {{nimbus_host}}
 nimbusPort: {{nimbus_port}}
 
-# HTTP-specific options.
-http:
-
-  # The port on which the HTTP server listens for service requests.
-  port: {{rest_api_port}}
-
-  # The port on which the HTTP server listens for administrative requests.
-  adminPort: {{rest_api_admin_port}}
-
 {% if ganglia_installed %}
 enableGanglia: {{ganglia_installed}}
 
diff --git a/app-packages/storm/package/templates/storm_jaas.conf.j2 b/app-packages/storm/package/templates/storm_jaas.conf.j2
index 4031d22..a1ba6ea 100644
--- a/app-packages/storm/package/templates/storm_jaas.conf.j2
+++ b/app-packages/storm/package/templates/storm_jaas.conf.j2
@@ -15,12 +15,30 @@
  *  See the License for the specific language governing permissions and
  *  limitations under the License.
  */
+StormServer {
+   com.sun.security.auth.module.Krb5LoginModule required
+   useKeyTab=true
+   keyTab="{{storm_server_keytab_path}}"
+   storeKey=true
+   useTicketCache=false
+   principal="{{storm_jaas_server_principal}}";
+};
+StormClient {
+   com.sun.security.auth.module.Krb5LoginModule required
+   useKeyTab=true
+   keyTab="{{storm_client_keytab_path}}"
+   storeKey=true
+   useTicketCache=false
+   serviceName="{{storm_jaas_stormclient_servicename}}"
+   debug=true
+   principal="{{storm_jaas_client_principal}}";
+};
 Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
-   keyTab="{{storm_keytab_path}}"
+   keyTab="{{storm_client_keytab_path}}"
    storeKey=true
    useTicketCache=false
    serviceName="zookeeper"
-   principal="{{storm_jaas_principal}}";
+   principal="{{storm_jaas_client_principal}}";
 };
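
The template now renders distinct server and client login sections. Assuming storm_server_keytab_path is /etc/security/keytabs/nimbus.service.keytab and storm_jaas_server_principal is storm/nimbus1.example.com@EXAMPLE.COM (illustrative values), the StormServer block comes out as:

    StormServer {
       com.sun.security.auth.module.Krb5LoginModule required
       useKeyTab=true
       keyTab="/etc/security/keytabs/nimbus.service.keytab"
       storeKey=true
       useTicketCache=false
       principal="storm/nimbus1.example.com@EXAMPLE.COM";
    };

and storm_jaas_stormclient_servicename, the first component of that principal, renders as serviceName="storm" in the StormClient block.
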
diff --git a/app-packages/storm/pom.xml b/app-packages/storm/pom.xml
new file mode 100644
index 0000000..00ec044
--- /dev/null
+++ b/app-packages/storm/pom.xml
@@ -0,0 +1,90 @@
+<?xml version="1.0"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+  <!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+  <parent>
+    <groupId>org.apache.slider</groupId>
+    <artifactId>slider</artifactId>
+    <version>0.60.0-incubating</version>
+    <relativePath>../../pom.xml</relativePath>
+  </parent>
+  <modelVersion>4.0.0</modelVersion>
+  <artifactId>slider-storm-app-package</artifactId>
+  <packaging>pom</packaging>
+  <name>Slider Storm App Package</name>
+  <description>Slider Storm App Package</description>
+  <properties>
+    <work.dir>package-tmp</work.dir>
+  </properties>
+
+  <profiles>
+    <profile>
+      <id>storm-app-package</id>
+      <build>
+        <plugins>
+
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-antrun-plugin</artifactId>
+            <version>1.7</version>
+            <executions>
+              <execution>
+                <id>copy</id>
+                <phase>validate</phase>
+                <configuration>
+                  <target name="copy and rename file">
+                    <copy file="${pkg.src}/${pkg.name}" tofile="${project.build.directory}/${pkg.name}" />
+                  </target>
+                </configuration>
+                <goals>
+                  <goal>run</goal>
+                </goals>
+              </execution>
+            </executions>
+          </plugin>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-assembly-plugin</artifactId>
+            <configuration>
+              <tarLongFileMode>gnu</tarLongFileMode>
+              <descriptor>src/assembly/storm.xml</descriptor>
+              <appendAssemblyId>false</appendAssemblyId>
+            </configuration>
+            <executions>
+              <execution>
+                <id>build-tarball</id>
+                <phase>package</phase>
+                <goals>
+                  <goal>single</goal>
+                </goals>
+              </execution>
+            </executions>
+          </plugin>
+
+        </plugins>
+      </build>
+    </profile>
+  </profiles>
+
+  <build>
+  </build>
+
+  <dependencies>
+  </dependencies>
+
+</project>
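
The storm-app-package profile expects the Storm tarball to be supplied from outside the build (pkg.src, pkg.name) and stamps pkg.version through the filtered descriptors. An illustrative invocation, with placeholder version and paths:

    mvn clean package -Pstorm-app-package \
        -Dpkg.version=0.9.3 \
        -Dpkg.name=apache-storm-0.9.3.tar.gz \
        -Dpkg.src=/local/path/to/tarball/dir
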
diff --git a/app-packages/storm/resources-default.json b/app-packages/storm/resources-default.json
new file mode 100644
index 0000000..65e6579
--- /dev/null
+++ b/app-packages/storm/resources-default.json
@@ -0,0 +1,34 @@
+{
+  "schema" : "http://example.org/specification/v2.0.0",
+  "metadata" : {
+  },
+  "global" : {
+    "yarn.log.include.patterns": "",
+    "yarn.log.exclude.patterns": ""
+  },
+  "components": {
+    "slider-appmaster": {
+      "yarn.memory": "512"
+    },
+    "NIMBUS": {
+      "yarn.role.priority": "1",
+      "yarn.component.instances": "1",
+      "yarn.memory": "2048"
+    },
+    "STORM_UI_SERVER": {
+      "yarn.role.priority": "2",
+      "yarn.component.instances": "1",
+      "yarn.memory": "1278"
+    },
+    "DRPC_SERVER": {
+      "yarn.role.priority": "3",
+      "yarn.component.instances": "1",
+      "yarn.memory": "1278"
+    },
+    "SUPERVISOR": {
+      "yarn.role.priority": "4",
+      "yarn.component.instances": "1",
+      "yarn.memory": "3072"
+    }
+  }
+}
diff --git a/app-packages/storm/src/assembly/storm.xml b/app-packages/storm/src/assembly/storm.xml
new file mode 100644
index 0000000..f7dcf13
--- /dev/null
+++ b/app-packages/storm/src/assembly/storm.xml
@@ -0,0 +1,75 @@
+<!--
+  ~ Licensed to the Apache Software Foundation (ASF) under one
+  ~  or more contributor license agreements.  See the NOTICE file
+  ~  distributed with this work for additional information
+  ~  regarding copyright ownership.  The ASF licenses this file
+  ~  to you under the Apache License, Version 2.0 (the
+  ~  "License"); you may not use this file except in compliance
+  ~  with the License.  You may obtain a copy of the License at
+  ~
+  ~       http://www.apache.org/licenses/LICENSE-2.0
+  ~
+  ~  Unless required by applicable law or agreed to in writing, software
+  ~  distributed under the License is distributed on an "AS IS" BASIS,
+  ~  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  ~  See the License for the specific language governing permissions and
+  ~  limitations under the License.
+  -->
+
+
+<assembly
+  xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0 http://maven.apache.org/xsd/assembly-1.1.0.xsd">
+  <id>slider-storm-v${storm.version}</id>
+  <formats>
+    <format>zip</format>
+    <format>dir</format>
+  </formats>
+  <includeBaseDirectory>false</includeBaseDirectory>
+
+  <files>
+    <file>
+      <source>appConfig-default.json</source>
+      <outputDirectory>/</outputDirectory>
+      <filtered>true</filtered>
+      <fileMode>0755</fileMode>
+    </file>
+    <file>
+      <source>appConfig-secured-default.json</source>
+      <outputDirectory>/</outputDirectory>
+      <filtered>true</filtered>
+      <fileMode>0755</fileMode>
+    </file>
+    <file>
+      <source>metainfo.xml</source>
+      <outputDirectory>/</outputDirectory>
+      <filtered>true</filtered>
+      <fileMode>0755</fileMode>
+    </file>
+    <file>
+      <source>${pkg.src}/${pkg.name}</source>
+      <outputDirectory>package/files</outputDirectory>
+      <filtered>false</filtered>
+      <fileMode>0755</fileMode>
+    </file>
+  </files>
+
+  <fileSets>
+    <fileSet>
+      <directory>${project.basedir}</directory>
+      <outputDirectory>/</outputDirectory>
+      <excludes>
+        <exclude>pom.xml</exclude>
+        <exclude>src/**</exclude>
+        <exclude>target/**</exclude>
+        <exclude>appConfig-default.json</exclude>
+        <exclude>appConfig-secured-default.json</exclude>
+        <exclude>metainfo.xml</exclude>
+      </excludes>
+      <fileMode>0755</fileMode>
+      <directoryMode>0755</directoryMode>
+    </fileSet>
+
+  </fileSets>
+</assembly>
diff --git a/pom.xml b/pom.xml
index fb27aba..6cf66ab 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1,4 +1,4 @@
-<!--
+  <!--
    Licensed to the Apache Software Foundation (ASF) under one or more
    contributor license agreements.  See the NOTICE file distributed with
    this work for additional information regarding copyright ownership.
@@ -19,7 +19,7 @@
   <groupId>org.apache.slider</groupId>
   <artifactId>slider</artifactId>
   <name>Slider</name>
-  <version>0.50.2-incubating</version>
+  <version>0.60.0-incubating</version>
   <packaging>pom</packaging>
 
   <description>
@@ -38,7 +38,6 @@
     <module>app-packages/command-logger/slider-pkg</module>
     <module>slider-core</module>
     <module>slider-agent</module>
-    <module>app-packages/accumulo</module>
     <module>slider-assembly</module>
     <module>slider-funtest</module>
     <module>slider-providers/hbase/slider-hbase-provider</module>
@@ -72,6 +71,19 @@
       <url>http://slider.incubator.apache.org/</url>
     </site>
     <downloadUrl>http://git-wip-us.apache.org/repos/asf/incubator-slider.git</downloadUrl>
+    <repository>
+      <id>${distMgmtStagingId}</id>
+      <name>${distMgmtStagingName}</name>
+      <url>${distMgmtStagingUrl}</url>
+    </repository>
+    <snapshotRepository>
+      <id>${distMgmtSnapshotsId}</id>
+      <name>${distMgmtSnapshotsName}</name>
+      <url>${distMgmtSnapshotsUrl}</url>
+    </snapshotRepository>
+
+
+
   </distributionManagement>
   
   <mailingLists>
@@ -92,7 +104,12 @@
   </mailingLists>
 
   <properties>
-
+    <distMgmtSnapshotsId>apache.snapshots.https</distMgmtSnapshotsId>
+    <distMgmtSnapshotsName>Apache Development Snapshot Repository</distMgmtSnapshotsName>
+    <distMgmtSnapshotsUrl>https://repository.apache.org/content/repositories/snapshots</distMgmtSnapshotsUrl>
+    <distMgmtStagingId>apache.staging.https</distMgmtStagingId>
+    <distMgmtStagingName>Apache Release Distribution Repository</distMgmtStagingName>
+    <distMgmtStagingUrl>https://repository.apache.org/service/local/staging/deploy/maven2</distMgmtStagingUrl>
     <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
     <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
 
@@ -114,14 +131,16 @@
     <test.failIfNoTests>true</test.failIfNoTests>
     <test.funtests.failIfNoTests>false</test.funtests.failIfNoTests>
     <test.forkMode>always</test.forkMode>
+    <!-- path environment variable -->
+    <test.env.path>${env.PATH}</test.env.path>    
     
     <!--
     core artifacts
     -->
-    <hadoop.version>2.4.1</hadoop.version>
+    <hadoop.version>2.6.0-SNAPSHOT</hadoop.version>
 
     <hbase.version>0.98.4-hadoop2</hbase.version>
-    <accumulo.version>1.6.0</accumulo.version>
+    <accumulo.version>1.6.1</accumulo.version>
     
     <!--
      artifact versions
@@ -150,7 +169,7 @@
     <servlet-api.version>2.5</servlet-api.version>
     <jsr311-api.version>1.1.1</jsr311-api.version>
     <jaxb-api.version>2.2.7</jaxb-api.version>
-
+    <jsp.version>2.1</jsp.version>
     <junit.version>4.11</junit.version>
     <log4j.version>1.2.17</log4j.version>
     <metrics.version>3.0.1</metrics.version>
@@ -162,7 +181,7 @@
 
     <slf4j.version>1.7.5</slf4j.version>
     <stringtemplate.version>2.4.1</stringtemplate.version>
-    <zookeeper.version>3.4.5</zookeeper.version>
+    <zookeeper.version>3.4.6</zookeeper.version>
 
 
     <!--  Plugin versions    -->
@@ -175,6 +194,7 @@
     
     <maven.version.range>[3.0.0,)</maven.version.range>
     
+    <maven-antrun-plugin.version>1.7</maven-antrun-plugin.version>
     <maven-assembly-plugin.version>2.4</maven-assembly-plugin.version>
     <maven.cobertura.version>2.5.2</maven.cobertura.version>
     <maven-compiler-plugin.version>3.1</maven-compiler-plugin.version>
@@ -182,6 +202,7 @@
     <maven-deploy-plugin.version>2.7</maven-deploy-plugin.version>
     <maven-doxia-module-markdown.version>1.4</maven-doxia-module-markdown.version>
     <maven-enforcer-plugin.version>1.0</maven-enforcer-plugin.version>
+    <maven-exec-plugin.version>1.2.1</maven-exec-plugin.version>
     <maven-jar-plugin.version>2.3.1</maven-jar-plugin.version>
     <maven.javadoc.version>2.8</maven.javadoc.version>
     <maven.project.version>2.4</maven.project.version>
@@ -213,6 +234,16 @@
       <id>ASF Staging</id>
       <url>https://repository.apache.org/content/groups/staging/</url>
     </repository>
+    <repository>
+      <id>ASF Snapshots</id>
+      <url>https://repository.apache.org/content/repositories/snapshots/</url>
+      <snapshots>
+        <enabled>true</enabled>
+      </snapshots>
+      <releases>
+        <enabled>false</enabled>
+      </releases>
+    </repository>
   </repositories>
 
 
@@ -320,36 +351,7 @@
         </executions>
       </plugin>
 
-      <plugin>
-        <groupId>org.apache.rat</groupId>
-        <artifactId>apache-rat-plugin</artifactId>
-        <version>${apache-rat-plugin.version}</version>
-        <executions>
-          <execution>
-            <id>check-licenses</id>
-            <goals>
-              <goal>check</goal>
-            </goals>
-          </execution>
-        </executions>
-        <configuration>
-          <excludes>
-            <exclude>**/*.json</exclude>
-            <exclude>**/*.tar</exclude>
-            <exclude>**/build.properties</exclude>
-            <exclude>**/regionservers</exclude>
-            <exclude>**/slaves</exclude>
-            <exclude>**/httpfs-signature.secret</exclude>
-            <exclude>**/dfs.exclude</exclude>
-            <exclude>**/*.iml</exclude>
-            <exclude>**/rat.txt</exclude>
-            <exclude>DISCLAIMER</exclude>
-            <exclude>app-packages/hbase/target/**</exclude>
-            <exclude>target/*</exclude>
-          </excludes>
-        </configuration>
-      </plugin>
-  
+
   </plugins>
   </build>
 
@@ -524,6 +526,12 @@
 
       <dependency>
         <groupId>org.apache.hadoop</groupId>
+        <artifactId>hadoop-yarn-registry</artifactId>
+        <version>${hadoop.version}</version>
+      </dependency>
+      
+      <dependency>
+        <groupId>org.apache.hadoop</groupId>
         <artifactId>hadoop-yarn-server-web-proxy</artifactId>
         <version>${hadoop.version}</version>
       </dependency>
@@ -598,6 +606,12 @@
         <groupId>commons-httpclient</groupId>
         <artifactId>commons-httpclient</artifactId>
         <version>${httpclient.version}</version>
+        <exclusions>
+          <exclusion>
+            <groupId>commons-codec</groupId>
+            <artifactId>commons-codec</artifactId>
+          </exclusion>
+        </exclusions>
       </dependency>
       
       <!-- ======================================================== -->
@@ -931,6 +945,18 @@
             <groupId>org.junit</groupId>
             <artifactId>junit</artifactId>
           </exclusion>
+          <exclusion>
+            <groupId>com.sun.jdmk</groupId>
+            <artifactId>jmxtools</artifactId>
+          </exclusion>
+          <exclusion>
+            <groupId>com.sun.jmx</groupId>
+            <artifactId>jmxri</artifactId>
+          </exclusion>
+          <exclusion>
+            <groupId>org.jboss.netty</groupId>
+            <artifactId>netty</artifactId>
+          </exclusion>
         </exclusions>
       </dependency>
 
@@ -1127,6 +1153,12 @@
       </dependency>
 
       <dependency>
+        <groupId>javax.servlet.jsp</groupId>
+        <artifactId>jsp-api</artifactId>
+        <version>${jsp.version}</version>
+      </dependency>
+
+      <dependency>
         <groupId>com.sun.jersey</groupId>
         <artifactId>jersey-client</artifactId>
         <version>${jersey.version}</version>
@@ -1216,24 +1248,126 @@
         <groupId>org.mortbay.jetty</groupId>
         <artifactId>jetty</artifactId>
         <version>${jetty.version}</version>
+        <exclusions>
+          <!-- cut the jetty version of the servlet API; Hadoop ships with one -->
+          <exclusion>
+            <groupId>org.mortbay.jetty</groupId>
+            <artifactId>servlet-api</artifactId>
+          </exclusion>
+        </exclusions>
       </dependency>
+
       <dependency>
         <groupId>org.mortbay.jetty</groupId>
         <artifactId>jetty-util</artifactId>
         <version>${jetty.version}</version>
       </dependency>
+
       <dependency>
         <groupId>org.mortbay.jetty</groupId>
         <artifactId>jetty-sslengine</artifactId>
         <version>${jetty.version}</version>
       </dependency>
 
+      <dependency>
+        <groupId>org.codehaus.jettison</groupId>
+        <artifactId>jettison</artifactId>
+        <version>1.1</version>
+      </dependency>
+
+      <dependency>
+        <groupId>org.powermock</groupId>
+        <artifactId>powermock-core</artifactId>
+        <version>1.5</version>
+      </dependency>
+
+      <dependency>
+        <groupId>org.powermock</groupId>
+        <artifactId>powermock-reflect</artifactId>
+        <version>1.5</version>
+      </dependency>
+
+      <dependency>
+        <groupId>org.powermock</groupId>
+        <artifactId>powermock-api-easymock</artifactId>
+        <version>1.5</version>
+      </dependency>
+
+      <dependency>
+        <groupId>org.powermock</groupId>
+        <artifactId>powermock-module-junit4</artifactId>
+        <version>1.5</version>
+      </dependency>
+
     </dependencies>
+    
   </dependencyManagement>
 
   <profiles>
 
     <profile>
+      <id>Non-Windows</id>
+      <activation>
+        <os>
+          <family>!windows</family>
+        </os>
+      </activation>
+      <modules>
+        <module>app-packages/accumulo</module>
+        <module>app-packages/hbase</module>
+        <module>app-packages/storm</module>
+      </modules>
+    </profile>
+    
+    <profile>
+      <id>Windows</id>
+      <activation>
+        <os>
+          <family>windows</family>
+        </os>
+      </activation>
+      <modules>
+        <module>app-packages/hbase-win</module>
+        <module>app-packages/storm-win</module>
+      </modules>
+    </profile>
+    <profile>
+      <id>rat</id>
+      <build>
+        <plugins>
+          <plugin>
+            <groupId>org.apache.rat</groupId>
+            <artifactId>apache-rat-plugin</artifactId>
+            <version>${apache-rat-plugin.version}</version>
+            <executions>
+              <execution>
+                <id>check-licenses</id>
+                <goals>
+                  <goal>check</goal>
+                </goals>
+              </execution>
+            </executions>
+            <configuration>
+              <excludes>
+                <exclude>**/*.json</exclude>
+                <exclude>**/*.tar</exclude>
+                <exclude>**/build.properties</exclude>
+                <exclude>**/regionservers</exclude>
+                <exclude>**/slaves</exclude>
+                <exclude>**/httpfs-signature.secret</exclude>
+                <exclude>**/dfs.exclude</exclude>
+                <exclude>**/*.iml</exclude>
+                <exclude>**/rat.txt</exclude>
+                <exclude>DISCLAIMER</exclude>
+                <exclude>app-packages/hbase/target/**</exclude>
+                <exclude>target/*</exclude>
+              </excludes>
+            </configuration>
+          </plugin>
+        </plugins>
+      </build>
+    </profile>
+    <profile>
       <id>apache-release</id>
       <build>
         <plugins>
@@ -1271,39 +1405,54 @@
     </profile>
 
     <profile>
-      <!-- local builds of everything -->
-      <id>local</id>
+      <!-- 2.6 snapshots -->
+      <id>branch-2.6</id>
       <properties>
-        <hadoop.version>2.4.1-SNAPSHOT</hadoop.version>
-        <hbase.version>0.98.4-SNAPSHOT</hbase.version>
-        <accumulo.version>1.6.0-SNAPSHOT</accumulo.version>
+        <hadoop.version>2.6.0-SNAPSHOT</hadoop.version>
+      </properties>
+    </profile>
+
+    <profile>
+      <!-- 2.6 release -->
+      <id>release-2.6</id>
+      <properties>
+        <hadoop.version>2.6.0</hadoop.version>
       </properties>
     </profile>
 
     <profile>
       <!-- hadoop branch-2 builds  -->
-      <id>hadoop-2.4.1</id>
-      <properties>
-        <hadoop.version>2.4.1</hadoop.version>
-      </properties>
-    </profile>
-    <profile>
-      <!-- hadoop branch-2 builds  -->
       <id>branch-2</id>
       <properties>
-        <hadoop.version>2.6.0-SNAPSHOT</hadoop.version>
+        <hadoop.version>2.7.0-SNAPSHOT</hadoop.version>
       </properties>
     </profile>
     
     <profile>
-      <!-- hadoop branch-2 builds  -->
-      <id>hadoop-trunk</id>
+      <!-- hadoop trunk builds  -->
+      <id>trunk</id>
       <properties>
         <hadoop.version>3.0.0-SNAPSHOT</hadoop.version>
       </properties>
     </profile>
     
     <profile>
+      <!-- Java 7 build -->
+      <id>java7</id>
+      <properties>
+       <project.java.src.version>7</project.java.src.version>
+      </properties>
+    </profile>
+
+    <profile>
+      <!-- Java 8 build -->
+      <id>java8</id>
+      <properties>
+       <project.java.src.version>8</project.java.src.version>
+      </properties>
+    </profile>
+    
+    <profile>
       <!-- anything for a jenkins build -->
       <id>jenkins</id>
       <properties>
@@ -1387,7 +1536,49 @@
 
     </build>
     </profile>
+    <profile>
+      <id>sign</id>
+      <build>
+        <plugins>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-gpg-plugin</artifactId>
+            <executions>
+              <execution>
+                <id>sign-artifacts</id>
+                <phase>verify</phase>
+                <goals>
+                  <goal>sign</goal>
+                </goals>
+              </execution>
+            </executions>
+          </plugin>
+        </plugins>
+      </build>
+    </profile>
 
+    <profile>
+      <id>private-repo</id>
+      <!-- this profile is for pulling custom app versions from a private maven repo -->
+      <activation>
+        <property>
+          <name>private.repo.url</name>
+        </property>
+      </activation>
+      <repositories>
+        <repository>
+          <releases>
+            <enabled>true</enabled>
+          </releases>
+          <snapshots>
+            <enabled>false</enabled>
+          </snapshots>
+          <id>private-repo</id>
+          <name>Private Repo</name>
+          <url>${private.repo.url}</url>
+        </repository>
+      </repositories>
+    </profile>
   </profiles>
 
 
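
Two build-workflow consequences of the pom changes above: the RAT license checks moved from the default build into an opt-in profile, and app-package modules are now selected by OS family. Illustrative invocations (profile ids are from the diff, values are placeholders):

    # run the license audit, which is no longer part of every build
    mvn verify -Prat

    # build against the Hadoop 2.6.0 release line
    mvn clean install -Prelease-2.6

    # supplying private.repo.url activates the private-repo profile
    mvn clean install -Dprivate.repo.url=http://repo.example.com/maven2
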
diff --git a/slider-agent/conf/agent.ini b/slider-agent/conf/agent.ini
index 7b9d57d..48113e3 100644
--- a/slider-agent/conf/agent.ini
+++ b/slider-agent/conf/agent.ini
@@ -43,6 +43,7 @@
 [command]
 max_retries=2
 sleep_between_retries=1
+auto_restart=5,5
 
 [security]
 
diff --git a/slider-agent/pom.xml b/slider-agent/pom.xml
index 09f2dae..332d5d1 100644
--- a/slider-agent/pom.xml
+++ b/slider-agent/pom.xml
@@ -19,7 +19,7 @@
   <parent>
     <groupId>org.apache.slider</groupId>
     <artifactId>slider</artifactId>
-    <version>0.50.2-incubating</version>
+    <version>0.60.0-incubating</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>slider-agent</artifactId>
@@ -32,8 +32,9 @@
     <package.release>1</package.release>
     <skipTests>false</skipTests>
     <python.ver>python &gt;= 2.6</python.ver>
+    <executable.python>${project.basedir}/../slider-agent/src/test/python/python-wrap</executable.python>
+    <python.path.l>${project.basedir}/src/main/python/jinja2:${project.basedir}/src/test/python:${project.basedir}/src/main/python:${project.basedir}/src/main/python/agent:${project.basedir}/src/main/python/resource_management:${project.basedir}/src/test/python/agent:${project.basedir}/src/test/python/resource_management:${project.basedir}/src/main/python/kazoo</python.path.l>
   </properties>
-
   <build>
     <plugins>
       
@@ -59,17 +60,17 @@
       <plugin>
         <groupId>org.codehaus.mojo</groupId>
         <artifactId>exec-maven-plugin</artifactId>
-        <version>1.2</version>
+        <version>${maven-exec-plugin.version}</version>
         <executions>
           <execution>
             <configuration>
-              <executable>${project.basedir}/src/test/python/python-wrap</executable>
+              <executable>${executable.python}</executable>
               <workingDirectory>src/test/python</workingDirectory>
               <arguments>
                 <argument>unitTests.py</argument>
               </arguments>
               <environmentVariables>
-                <PYTHONPATH>${project.basedir}/src/main/python/jinja2:${project.basedir}/src/test/python:${project.basedir}/src/main/python:${project.basedir}/src/main/python/agent:${project.basedir}/src/main/python/resource_management:${project.basedir}/src/test/python/agent:${project.basedir}/src/test/python/resource_management:${project.basedir}/src/main/python/kazoo</PYTHONPATH>
+                <PYTHONPATH>${python.path.l}</PYTHONPATH>
               </environmentVariables>
               <skip>${skipTests}</skip>
             </configuration>
@@ -82,32 +83,9 @@
         </executions>
       </plugin>
       
-      <plugin>
-        <groupId>org.apache.rat</groupId>
-        <artifactId>apache-rat-plugin</artifactId>
-        <version>${apache-rat-plugin.version}</version>
-        <executions>
-          <execution>
-            <id>check-licenses</id>
-            <goals>
-              <goal>check</goal>
-            </goals>
-          </execution>
-        </executions>
-        <configuration>
-          <excludes>
-            <exclude>src/test/python/agent/dummy_output_error.txt</exclude>
-            <exclude>src/test/python/agent/dummy_output_good.txt</exclude>
-            <!-- jinja2 files (BSD license) -->
-            <exclude>src/main/python/jinja2/**</exclude>
-            <!-- mock files (BSD license) -->
-            <exclude>src/test/python/mock/**</exclude>
-            <!-- kazoo files (Apache License, Version 2.0) -->
-            <exclude>src/main/python/kazoo/**</exclude>
-          </excludes>
-        </configuration>
-      </plugin>
+  
     </plugins>
+    
     <extensions>
       <extension>
         <groupId>org.apache.maven.wagon</groupId>
@@ -115,4 +93,60 @@
       </extension>
     </extensions>
   </build>
+  
+  <profiles>
+   <profile>
+      <id>Windows</id>
+      <activation>
+        <os><family>windows</family></os>
+      </activation>
+      <properties>
+        <executable.python>python</executable.python>
+        <python.path.l>${project.basedir}\src\main\python\jinja2;${project.basedir}\src\test\python;${project.basedir}\src\main\python;${project.basedir}\src\main\python\agent;${project.basedir}\src\main\python\resource_management;${project.basedir}\src\test\python\agent;${project.basedir}\src\test\python\resource_management;${project.basedir}\src\main\python\kazoo</python.path.l>
+      </properties>
+    </profile>
+
+    <profile>
+      <id>Linux</id>
+      <activation>
+        <os><family>!windows</family></os>
+      </activation>
+      <properties>
+        <executable.python>${project.basedir}/../slider-agent/src/test/python/python-wrap</executable.python>
+        <python.path.l>${project.basedir}/src/main/python/jinja2:${project.basedir}/src/test/python:${project.basedir}/src/main/python:${project.basedir}/src/main/python/agent:${project.basedir}/src/main/python/resource_management:${project.basedir}/src/test/python/agent:${project.basedir}/src/test/python/resource_management:${project.basedir}/src/main/python/kazoo</python.path.l>
+      </properties>
+    </profile>
+    <profile>
+      <id>rat</id>
+      <build>
+        <plugins>
+          <plugin>
+            <groupId>org.apache.rat</groupId>
+            <artifactId>apache-rat-plugin</artifactId>
+            <version>${apache-rat-plugin.version}</version>
+            <executions>
+              <execution>
+                <id>check-licenses</id>
+                <goals>
+                  <goal>check</goal>
+                </goals>
+              </execution>
+            </executions>
+            <configuration>
+              <excludes>
+                <exclude>src/test/python/agent/dummy_output_error.txt</exclude>
+                <exclude>src/test/python/agent/dummy_output_good.txt</exclude>
+                <!-- jinja2 files (BSD license) -->
+                <exclude>src/main/python/jinja2/**</exclude>
+                <!-- mock files (BSD license) -->
+                <exclude>src/test/python/mock/**</exclude>
+                <!-- kazoo files (Apache License, Version 2.0) -->
+                <exclude>src/main/python/kazoo/**</exclude>
+              </excludes>
+            </configuration>
+          </plugin>
+        </plugins>
+      </build>
+    </profile>
+  </profiles>
 </project>
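
The test-runner configuration above is now factored into two properties,
`executable.python` and `python.path.l`, which the new Windows and Linux
profiles override with the platform's executable and path separator, and the
RAT license checks move into an opt-in `rat` profile (activated with
`mvn verify -Prat`, assuming standard Maven profile activation). A minimal
sketch for sanity-checking the resolved path entries; the checkout location
and the entry list are illustrative, not taken from the POM verbatim:

```python
import os

# First few Linux entries of python.path.l, with ${project.basedir} replaced
# by a hypothetical checkout path.
basedir = "slider-agent"
entries = [
    basedir + "/src/main/python/jinja2",
    basedir + "/src/test/python",
    basedir + "/src/main/python",
    basedir + "/src/main/python/agent",
]
for entry in entries:
    print("%-40s %s" % (entry, "ok" if os.path.isdir(entry) else "missing"))
```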
diff --git a/slider-agent/src/main/python/agent/ActionQueue.py b/slider-agent/src/main/python/agent/ActionQueue.py
index 4c45a76..4cb5de7 100644
--- a/slider-agent/src/main/python/agent/ActionQueue.py
+++ b/slider-agent/src/main/python/agent/ActionQueue.py
@@ -27,6 +27,7 @@
 import time
 
 from AgentConfig import AgentConfig
+from AgentToggleLogger import AgentToggleLogger
 from CommandStatusDict import CommandStatusDict
 from CustomServiceOrchestrator import CustomServiceOrchestrator
 import Constants
@@ -51,8 +52,10 @@
   STORE_APPLIED_CONFIG = 'record_config'
   AUTO_RESTART = 'auto_restart'
 
-  def __init__(self, config, controller):
+  def __init__(self, config, controller, agentToggleLogger):
     super(ActionQueue, self).__init__()
+    self.queueOutAgentToggleLogger = agentToggleLogger
+    self.queueInAgentToggleLogger = AgentToggleLogger("info")
     self.commandQueue = Queue.Queue()
     self.commandStatuses = CommandStatusDict(callback_action=
     self.status_update_callback)
@@ -61,7 +64,8 @@
     self._stop = threading.Event()
     self.tmpdir = config.getResolvedPath(AgentConfig.APP_TASK_DIR)
     self.customServiceOrchestrator = CustomServiceOrchestrator(config,
-                                                               controller)
+                                                               controller,
+                                                               self.queueOutAgentToggleLogger)
 
 
   def stop(self):
@@ -72,9 +76,12 @@
 
   def put(self, commands):
     for command in commands:
-      logger.info("Adding " + command['commandType'] + " for service " + \
-                  command['serviceName'] + " of cluster " + \
-                  command['clusterName'] + " to the queue.")
+      self.queueInAgentToggleLogger.adjustLogLevelAtStart(command['commandType'])
+      message = "Adding " + command['commandType'] + " for service " + \
+                command['serviceName'] + " of cluster " + \
+                command['clusterName'] + " to the queue."
+      self.queueInAgentToggleLogger.log(message)
+      self.queueInAgentToggleLogger.adjustLogLevelAtEnd(command['commandType'])
       logger.debug(pprint.pformat(command))
       self.commandQueue.put(command)
 
@@ -86,7 +93,9 @@
     while not self.stopped():
       time.sleep(2)
       command = self.commandQueue.get() # Will block if queue is empty
+      self.queueOutAgentToggleLogger.adjustLogLevelAtStart(command['commandType'])
       self.process_command(command)
+      self.queueOutAgentToggleLogger.adjustLogLevelAtEnd(command['commandType'])
     logger.info("ActionQueue stopped.")
 
 
@@ -142,9 +151,10 @@
       store_config = 'true' == command['commandParams'][ActionQueue.STORE_APPLIED_CONFIG]
     store_command = False
     if 'roleParams' in command and ActionQueue.AUTO_RESTART in command['roleParams']:
-      logger.info("Component has indicated auto-restart. Saving details from START command.")
       store_command = 'true' == command['roleParams'][ActionQueue.AUTO_RESTART]
 
+    if store_command:
+      logger.info("Component has indicated auto-restart. Saving details from START command.")
 
     # running command
     commandresult = self.customServiceOrchestrator.runCommand(command,
@@ -154,6 +164,12 @@
                                                                 'tmperr'],
                                                               True,
                                                               store_config or store_command)
+    # If command is STOP then set flag to indicate stop has been triggered.
+    # In future we might check status of STOP command and take other measures
+    # if graceful STOP fails (like force kill the processes)
+    if command['roleCommand'] == 'STOP':
+      self.controller.appGracefulStopTriggered = True
+
     # dumping results
     status = self.COMPLETED_STATUS
     if commandresult[Constants.EXIT_CODE] != 0:
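
The `appGracefulStopTriggered` flag set above is one half of a small handshake:
`Controller.stopApp()` queues the stored STOP command and sets
`appGracefulStopQueued`, and the heartbeat loop then waits until the queue
worker has actually run the STOP. A minimal sketch of that handshake, with the
controller reduced to its two flags and the queue worker to a timer:

```python
import threading
import time

class MiniController(object):
    # Stand-ins for the two flags this patch adds to Controller.
    def __init__(self):
        self.appGracefulStopQueued = False
        self.appGracefulStopTriggered = False

def run_stop_command(controller):
    # What ActionQueue.process_command now does after executing a STOP.
    controller.appGracefulStopTriggered = True

controller = MiniController()
controller.appGracefulStopQueued = True  # Controller.stopApp() queued the STOP
threading.Timer(0.2, run_stop_command, [controller]).start()

# The heartbeat loop's wait, reduced to a plain poll.
while controller.appGracefulStopQueued and not controller.appGracefulStopTriggered:
    time.sleep(0.05)
print("graceful stop completed")
```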
diff --git a/slider-agent/src/main/python/agent/AgentConfig.py b/slider-agent/src/main/python/agent/AgentConfig.py
index e45ba23..86925b1 100644
--- a/slider-agent/src/main/python/agent/AgentConfig.py
+++ b/slider-agent/src/main/python/agent/AgentConfig.py
@@ -61,6 +61,7 @@
 [command]
 max_retries=2
 sleep_between_retries=1
+auto_restart=5,5
 
 [security]
 keysdir=security/keys
@@ -109,6 +110,8 @@
   # agent version file
   VERSION_FILE = "version_file"
 
+  AUTO_RESTART = "auto_restart"
+
   FOLDER_MAPPING = {
     APP_PACKAGE_DIR: "WORK",
     APP_INSTALL_DIR: "WORK",
@@ -164,6 +167,17 @@
       return ""
     return command
 
+  # Returns (max, window): at most 'max' failures are tolerated within 'window' minutes
+  def getErrorWindow(self):
+    window = config.get(AgentConfig.COMMAND_SECTION, AgentConfig.AUTO_RESTART)
+    if window != None:
+      parts = window.split(',')
+      if len(parts) == 2:
+        if parts[0].isdigit() and parts[1].isdigit():
+          return (int(parts[0]), int(parts[1]))
+      pass
+    return (0, 0)
+
   def set(self, category, name, value):
     global config
     return config.set(category, name, value)
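
`getErrorWindow()` turns the `auto_restart=max,window` setting into an integer
pair, falling back to `(0, 0)` (windowing disabled) for anything malformed. A
standalone rendering of the same parsing, with the config lookup stubbed out:

```python
def parse_error_window(value):
    # Mirrors AgentConfig.getErrorWindow(): "max,window" -> (int, int);
    # any malformed value yields (0, 0), which disables the window check.
    if value is not None:
        parts = value.split(',')
        if len(parts) == 2 and parts[0].isdigit() and parts[1].isdigit():
            return (int(parts[0]), int(parts[1]))
    return (0, 0)

assert parse_error_window("5,5") == (5, 5)
assert parse_error_window("nonsense") == (0, 0)
```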
diff --git a/slider-agent/src/main/python/agent/AgentToggleLogger.py b/slider-agent/src/main/python/agent/AgentToggleLogger.py
new file mode 100644
index 0000000..9a0ae3f
--- /dev/null
+++ b/slider-agent/src/main/python/agent/AgentToggleLogger.py
@@ -0,0 +1,69 @@
+#!/usr/bin/env python
+
+'''
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+'''
+
+import logging
+
+logger = logging.getLogger()
+
+'''
+Create a single instance of this class for every loop that either
+writes to or reads from the action queue, takes action based on the
+commandType and dumps logs along the way. Its goal is to keep agent
+log verbosity to a minimum during uninteresting heartbeats like
+STATUS_COMMAND, and to ensure that logging resumes at info level the
+moment a non-STATUS_COMMAND shows up on its way into or out of the
+action queue.
+'''
+class AgentToggleLogger:
+  def __init__(self, logLevel="info"):
+    self.logLevel = logLevel
+
+  def log(self, message, *args, **kwargs):
+    if self.logLevel == "info":
+      logger.info(message, *args, **kwargs)
+    else:
+      logger.debug(message, *args, **kwargs)
+
+  '''
+  The methods adjustLogLevelAtStart and adjustLogLevelAtEnd work hand
+  in hand to do the following:
+  - STATUS related info should be logged at least once before the agent
+    enters into the STATUS loop
+  - If a non STATUS command shows up in the queue the logger switches
+    to info level
+  - STATUS will be logged at least once every time the log level toggles
+    back to info level when a non STATUS command shows up
+  '''
+
+  # Call this method at the start of the loop over action queue,
+  # right after reading from or writing to the queue
+  def adjustLogLevelAtStart(self, commandType):
+    from ActionQueue import ActionQueue
+    if self.logLevel != "info" and commandType != ActionQueue.STATUS_COMMAND:
+      self.logLevel = "info"
+
+  # Call this method as the last statement in the loop over action queue
+  def adjustLogLevelAtEnd(self, commandType):
+    from ActionQueue import ActionQueue
+    if commandType == ActionQueue.STATUS_COMMAND:
+      self.logLevel = "debug"
+    else:
+      self.logLevel = "info"
+
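
In combination, the two adjust methods demote repeated STATUS_COMMAND chatter
to debug after its first pass and snap back to info as soon as any other
command flows through the queue. A condensed, self-contained illustration
(the STATUS_COMMAND constant is inlined rather than imported):

```python
import logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger()

STATUS_COMMAND = "STATUS_COMMAND"  # ActionQueue.STATUS_COMMAND

class ToggleLogger(object):
    def __init__(self, level="info"):
        self.level = level
    def log(self, msg):
        (logger.info if self.level == "info" else logger.debug)(msg)
    def at_start(self, command_type):
        if self.level != "info" and command_type != STATUS_COMMAND:
            self.level = "info"
    def at_end(self, command_type):
        self.level = "debug" if command_type == STATUS_COMMAND else "info"

t = ToggleLogger()
for cmd in [STATUS_COMMAND, STATUS_COMMAND, "EXECUTION_COMMAND", STATUS_COMMAND]:
    t.at_start(cmd)
    t.log("processing " + cmd)  # 2nd STATUS goes to debug; after the
    t.at_end(cmd)               # EXECUTION command, STATUS logs once at info
```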
diff --git a/slider-agent/src/main/python/agent/Constants.py b/slider-agent/src/main/python/agent/Constants.py
index 2975266..f120b94 100644
--- a/slider-agent/src/main/python/agent/Constants.py
+++ b/slider-agent/src/main/python/agent/Constants.py
@@ -33,3 +33,4 @@
 ZK_QUORUM="zk_quorum"
 ZK_REG_PATH="zk_reg_path"
 AUTO_GENERATED="auto_generated"
+MAX_AM_CONNECT_RETRIES = 10
diff --git a/slider-agent/src/main/python/agent/Controller.py b/slider-agent/src/main/python/agent/Controller.py
index 1e27efa..387bc7e 100644
--- a/slider-agent/src/main/python/agent/Controller.py
+++ b/slider-agent/src/main/python/agent/Controller.py
@@ -27,9 +27,11 @@
 import threading
 import urllib2
 import pprint
+import math
 from random import randint
 
 from AgentConfig import AgentConfig
+from AgentToggleLogger import AgentToggleLogger
 from Heartbeat import Heartbeat
 from Register import Register
 from ActionQueue import ActionQueue
@@ -86,7 +88,13 @@
     self.statusCommand = None
     self.failureCount = 0
     self.heartBeatRetryCount = 0
-    self.autoRestart = False
+    self.autoRestartFailures = 0
+    self.autoRestartTrackingSince = 0
+    self.terminateAgent = False
+    self.stopCommand = None
+    self.appGracefulStopQueued = False
+    self.appGracefulStopTriggered = False
+    self.tags = ""
 
 
   def __del__(self):
@@ -122,34 +130,42 @@
           self.componentActualState,
           self.componentExpectedState,
           self.actionQueue.customServiceOrchestrator.allocated_ports,
+          self.actionQueue.customServiceOrchestrator.log_folders,
+          self.tags,
           id))
         logger.info("Registering with the server at " + self.registerUrl +
                     " with data " + pprint.pformat(data))
         response = self.sendRequest(self.registerUrl, data)
-        ret = json.loads(response)
+        regResp = json.loads(response)
         exitstatus = 0
-        # exitstatus is a code of error which was rised on server side.
+        # exitstatus is an error code raised on the server side.
         # exitstatus = 0 (OK - Default)
         # exitstatus = 1 (Registration failed because
         #                different version of agent and server)
-        if 'exitstatus' in ret.keys():
-          exitstatus = int(ret['exitstatus'])
-          # log - message, which will be printed to agents  log
-        if 'log' in ret.keys():
-          log = ret['log']
+        if 'exitstatus' in regResp.keys():
+          exitstatus = int(regResp['exitstatus'])
+
+        # log - message which will be printed to the agent's log
+        if 'log' in regResp.keys():
+          log = regResp['log']
+
+        # container may be associated with tags
+        if 'tags' in regResp.keys():
+          self.tags = regResp['tags']
+
         if exitstatus == 1:
           logger.error(log)
           self.isRegistered = False
           self.repeatRegistration = False
-          return ret
-        logger.info("Registered with the server with " + pprint.pformat(ret))
+          return regResp
+        logger.info("Registered with the server with " + pprint.pformat(regResp))
         print("Registered with the server")
-        self.responseId = int(ret['responseId'])
+        self.responseId = int(regResp['responseId'])
         self.isRegistered = True
-        if 'statusCommands' in ret.keys():
+        if 'statusCommands' in regResp.keys():
           logger.info("Got status commands on registration " + pprint.pformat(
-            ret['statusCommands']))
-          self.addToQueue(ret['statusCommands'])
+            regResp['statusCommands']))
+          self.addToQueue(regResp['statusCommands'])
           pass
         else:
           self.hasMappedComponents = False
@@ -166,7 +182,7 @@
         time.sleep(delay)
         pass
       pass
-    return ret
+    return regResp
 
 
   def addToQueue(self, commands):
@@ -187,15 +203,46 @@
 
   def shouldStopAgent(self):
     '''
-    If component has failed after start then stop the agent
+    Stop the agent if:
+      - Component has failed after start
+      - AM sent terminate agent command
     '''
+    shouldStopAgent = False
     if (self.componentActualState == State.FAILED) \
       and (self.componentExpectedState == State.STARTED) \
       and (self.failureCount >= Controller.MAX_FAILURE_COUNT_TO_STOP):
-      return True
-    else:
-      return False
-    pass
+      logger.info("Component instance has stopped, stopping the agent ...")
+      shouldStopAgent = True
+    if self.terminateAgent:
+      logger.info("Terminate agent command received from AM, stopping the agent ...")
+      shouldStopAgent = True
+    return shouldStopAgent
+
+  def isAppGracefullyStopped(self):
+    '''
+    If a graceful stop command was queued for the app, it is considered stopped once:
+      - the app stop was triggered
+
+    Note: We should enhance this method by checking if the app is stopped
+          successfully and if not, then take alternate measures (like kill
+          processes). For now if stop is triggered it is considered stopped.
+    '''
+    isAppStopped = False
+    if self.appGracefulStopTriggered:
+      isAppStopped = True
+    return isAppStopped
+
+  def stopApp(self):
+    '''
+    Stop the app if:
+      - the app is currently in STARTED state and
+        a valid stop command is provided
+    '''
+    if (self.componentActualState == State.STARTED) and (not self.stopCommand == None):
+      # Try to do graceful stop
+      self.addToQueue([self.stopCommand])
+      self.appGracefulStopQueued = True
+      logger.info("Attempting to gracefully stop the application ...")
 
   def heartbeatWithServer(self):
     self.DEBUG_HEARTBEAT_RETRIES = 0
@@ -207,12 +254,14 @@
 
     while not self.DEBUG_STOP_HEARTBEATING:
 
-      if self.shouldStopAgent():
-        logger.info("Component instance has stopped, stopping the agent ...")
-        ProcessHelper.stopAgent()
-
       commandResult = {}
       try:
+        if self.appGracefulStopQueued and not self.isAppGracefullyStopped():
+          # Continue to wait until app is stopped
+          continue
+        if self.shouldStopAgent():
+          ProcessHelper.stopAgent()
+
         if not retry:
           data = json.dumps(
             self.heartbeat.build(commandResult,
@@ -229,11 +278,24 @@
 
         serverId = int(response['responseId'])
 
+        if 'restartAgent' in response.keys():
+          restartAgent = response['restartAgent']
+          if restartAgent:
+            logger.error("Got restartAgent command")
+            self.restartAgent()
+        if 'terminateAgent' in response.keys():
+          self.terminateAgent = response['terminateAgent']
+          if self.terminateAgent:
+            logger.error("Got terminateAgent command")
+            self.stopApp()
+            # Continue will add some wait time
+            continue
+
         restartEnabled = False
         if 'restartEnabled' in response:
           restartEnabled = response['restartEnabled']
           if restartEnabled:
-            logger.info("Component auto-restart is enabled.")
+            logger.debug("Component auto-restart is enabled.")
 
         if 'hasMappedComponents' in response.keys():
           self.hasMappedComponents = response['hasMappedComponents'] != False
@@ -254,17 +316,18 @@
         else:
           self.responseId = serverId
 
+        commandSentFromAM = False
         if 'executionCommands' in response.keys():
           self.updateStateBasedOnCommand(response['executionCommands'])
           self.addToQueue(response['executionCommands'])
+          commandSentFromAM = True
           pass
         if 'statusCommands' in response.keys() and len(response['statusCommands']) > 0:
           self.addToQueue(response['statusCommands'])
+          commandSentFromAM = True
           pass
-        if "true" == response['restartAgent']:
-          logger.error("Got restartAgent command")
-          self.restartAgent()
-        else:
+
+        if not commandSentFromAM:
           logger.info("No commands sent from the Server.")
           pass
 
@@ -274,7 +337,7 @@
           stored_command = self.actionQueue.customServiceOrchestrator.stored_command
           if len(stored_command) > 0:
             auto_start_command = self.create_start_command(stored_command)
-            if auto_start_command:
+            if auto_start_command and self.shouldAutoRestart():
               logger.info("Automatically adding a start command.")
               logger.debug("Auto start command: " + pprint.pformat(auto_start_command))
               self.updateStateBasedOnCommand([auto_start_command], False)
@@ -330,8 +393,7 @@
           zk_quorum = self.config.get(AgentConfig.SERVER_SECTION, Constants.ZK_QUORUM)
           zk_reg_path = self.config.get(AgentConfig.SERVER_SECTION, Constants.ZK_REG_PATH)
           registry = Registry(zk_quorum, zk_reg_path)
-          amHost, amSecuredPort = registry.readAMHostPort()
-          logger.info("Read from ZK registry: AM host = %s, AM secured port = %s" % (amHost, amSecuredPort))
+          amHost, amUnsecuredPort, amSecuredPort = registry.readAMHostPort()
           self.hostname = amHost
           self.secured_port = amSecuredPort
           self.config.set(AgentConfig.SERVER_SECTION, "hostname", self.hostname)
@@ -342,13 +404,14 @@
           return
         self.cachedconnect = None # Previous connection is broken now
         retry = True
-      # Sleep for some time
-      timeout = self.netutil.HEARTBEAT_IDDLE_INTERVAL_SEC \
-                - self.netutil.MINIMUM_INTERVAL_BETWEEN_HEARTBEATS
-      self.heartbeat_wait_event.wait(timeout=timeout)
-      # Sleep a bit more to allow STATUS_COMMAND results to be collected
-      # and sent in one heartbeat. Also avoid server overload with heartbeats
-      time.sleep(self.netutil.MINIMUM_INTERVAL_BETWEEN_HEARTBEATS)
+      finally:
+        # Sleep for some time
+        timeout = self.netutil.HEARTBEAT_IDDLE_INTERVAL_SEC \
+                  - self.netutil.MINIMUM_INTERVAL_BETWEEN_HEARTBEATS
+        self.heartbeat_wait_event.wait(timeout=timeout)
+        # Sleep a bit more to allow STATUS_COMMAND results to be collected
+        # and sent in one heartbeat. Also avoid server overload with heartbeats
+        time.sleep(self.netutil.MINIMUM_INTERVAL_BETWEEN_HEARTBEATS)
     pass
     logger.info("Controller stopped heart-beating.")
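
Moving the post-heartbeat sleep into a `finally` block means the loop stays
paced even when a heartbeat attempt raises or the body hits `continue`, so a
flapping AM connection no longer becomes a busy retry loop. The pattern,
reduced to its core with illustrative interval values:

```python
import time

HEARTBEAT_IDLE_INTERVAL_SEC = 0.2           # illustrative, not the real values
MINIMUM_INTERVAL_BETWEEN_HEARTBEATS = 0.05

def heartbeat_once():
    raise IOError("AM unreachable")         # simulate a failed heartbeat

for attempt in range(3):
    try:
        heartbeat_once()
    except IOError:
        pass                                # mark connection broken, then retry
    finally:
        # Runs on success, on failure, and on `continue`: the pacing holds.
        time.sleep(HEARTBEAT_IDLE_INTERVAL_SEC
                   - MINIMUM_INTERVAL_BETWEEN_HEARTBEATS)
        time.sleep(MINIMUM_INTERVAL_BETWEEN_HEARTBEATS)
```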
 
@@ -364,6 +427,17 @@
 
 
   def updateStateBasedOnCommand(self, commands, createStatus=True):
+    # A STOP command is paired with the START command to provide agents the
+    # capability to gracefully stop the app if possible. The STOP command needs
+    # to be stored because the AM may be unable to provide it later if it has
+    # lost the container state for whatever reason. The STOP command has
+    # no other role to play in the Agent state transition so it is removed from
+    # the commands list.
+    index = 0
+    deleteIndex = 0
+    delete = False
+    # break only if an INSTALL command is found, since we might get a STOP
+    # command for a START command
     for command in commands:
       if command["roleCommand"] == "START":
         self.componentExpectedState = State.STARTED
@@ -372,12 +446,22 @@
         if createStatus:
           self.statusCommand = self.createStatusCommand(command)
 
+      # Remember the index of the STOP command so it can be removed below
+      if command["roleCommand"] == "STOP":
+        self.stopCommand = command
+        delete = True
+        deleteIndex = index
+
       if command["roleCommand"] == "INSTALL":
         self.componentExpectedState = State.INSTALLED
         self.componentActualState = State.INSTALLING
         self.failureCount = 0
-      break;
+        break
+      index += 1
 
+    # Delete the STOP command
+    if delete:
+      del commands[deleteIndex]
 
   def updateStateBasedOnResult(self, commandResult):
     if len(commandResult) > 0:
@@ -432,10 +516,11 @@
 
 
   def run(self):
-    self.actionQueue = ActionQueue(self.config, controller=self)
+    self.agentToggleLogger = AgentToggleLogger("info")
+    self.actionQueue = ActionQueue(self.config, controller=self, agentToggleLogger=self.agentToggleLogger)
     self.actionQueue.start()
     self.register = Register(self.config)
-    self.heartbeat = Heartbeat(self.actionQueue, self.config)
+    self.heartbeat = Heartbeat(self.actionQueue, self.config, self.agentToggleLogger)
 
     opener = urllib2.build_opener()
     urllib2.install_opener(opener)
@@ -486,6 +571,35 @@
             return {'exitstatus': 1, 'log': err_msg}
 
 
+  # Basic window that only counts failures till the window duration expires
+  def shouldAutoRestart(self):
+    max, window = self.config.getErrorWindow()
+    if max <= 0 or window <= 0:
+      return True
+
+    seconds_now = time.time()
+    if self.autoRestartTrackingSince == 0:
+      self.autoRestartTrackingSince = seconds_now
+      self.autoRestartFailures = 1
+      return True
+
+    self.autoRestartFailures += 1
+    minutes = math.floor((seconds_now - self.autoRestartTrackingSince) / 60)
+    if self.autoRestartFailures > max:
+      logger.info("Auto restart not allowed due to " + str(self.autoRestartFailures) + " failures in " + str(minutes) +
+                  " minutes. Max restarts allowed is " + str(max) + " in " + str(window) + " minutes.")
+      return False
+
+    if minutes > window:
+      logger.info("Resetting window as number of minutes passed is " + str(minutes))
+      self.autoRestartTrackingSince = seconds_now
+      self.autoRestartFailures = 1
+      return True
+    return True
+
+    pass
+
+
 def main(argv=None):
   # Allow Ctrl-C
   signal.signal(signal.SIGINT, signal.SIG_DFL)
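
`shouldAutoRestart()` is where the `auto_restart=max,window` policy bites: the
first failure opens a tracking window, later failures are counted, and a
restart is denied once the count exceeds `max`; the window only resets while
the count is still within bounds. The same bookkeeping, compressed into a
self-contained class (logging omitted, clock injectable for testing):

```python
import math
import time

class RestartWindow(object):
    # Same bookkeeping as Controller.shouldAutoRestart(), minus the logging.
    def __init__(self, max_failures, window_minutes):
        self.max_failures = max_failures
        self.window_minutes = window_minutes
        self.failures = 0
        self.since = 0

    def allow(self, now=None):
        if self.max_failures <= 0 or self.window_minutes <= 0:
            return True                         # windowing disabled
        now = time.time() if now is None else now
        if self.since == 0:
            self.since, self.failures = now, 1  # first failure opens the window
            return True
        self.failures += 1
        minutes = math.floor((now - self.since) / 60)
        if self.failures > self.max_failures:
            return False                        # too many failures, too fast
        if minutes > self.window_minutes:
            self.since, self.failures = now, 1  # window expired: start over
        return True

w = RestartWindow(5, 5)
assert all(w.allow(now=100) for _ in range(5))  # first five restarts allowed
assert not w.allow(now=160)                     # sixth inside the window: denied
```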
diff --git a/slider-agent/src/main/python/agent/CustomServiceOrchestrator.py b/slider-agent/src/main/python/agent/CustomServiceOrchestrator.py
index 15f1664..119c926 100644
--- a/slider-agent/src/main/python/agent/CustomServiceOrchestrator.py
+++ b/slider-agent/src/main/python/agent/CustomServiceOrchestrator.py
@@ -22,16 +22,19 @@
 import os
 import json
 import pprint
+import random
 import sys
 import socket
 import posixpath
 import platform
+import copy
 from AgentConfig import AgentConfig
 from AgentException import AgentException
 from PythonExecutor import PythonExecutor
 import hostname
 import Constants
 
+MAX_ATTEMPTS = 5
 
 logger = logging.getLogger()
 
@@ -47,10 +50,10 @@
   LIVE_STATUS = "STARTED"
   DEAD_STATUS = "INSTALLED"
 
-  def __init__(self, config, controller):
+  def __init__(self, config, controller, agentToggleLogger):
     self.config = config
     self.tmp_dir = config.getResolvedPath(AgentConfig.APP_TASK_DIR)
-    self.python_executor = PythonExecutor(self.tmp_dir, config)
+    self.python_executor = PythonExecutor(self.tmp_dir, config, agentToggleLogger)
     self.status_commands_stdout = os.path.realpath(posixpath.join(self.tmp_dir,
                                                                   'status_command_stdout.txt'))
     self.status_commands_stderr = os.path.realpath(posixpath.join(self.tmp_dir,
@@ -58,6 +61,7 @@
     self.public_fqdn = hostname.public_hostname()
     self.stored_command = {}
     self.allocated_ports = {}
+    self.log_folders = {}
     # Clean up old status command files if any
     try:
       os.unlink(self.status_commands_stdout)
@@ -133,15 +137,17 @@
       }
 
     if Constants.EXIT_CODE in ret and ret[Constants.EXIT_CODE] == 0:
-      ret[Constants.ALLOCATED_PORTS] = allocated_ports
-      self.allocated_ports = allocated_ports
+      ret[Constants.ALLOCATED_PORTS] = copy.deepcopy(allocated_ports)
+      ## Generally all ports are allocated at once but just in case
+      self.allocated_ports.update(allocated_ports)
 
     # Irrespective of the outcome report the folder paths
     if command_name == 'INSTALL':
-      ret[Constants.FOLDERS] = {
+      self.log_folders = {
         Constants.AGENT_LOG_ROOT: self.config.getLogPath(),
         Constants.AGENT_WORK_ROOT: self.config.getWorkRootPath()
       }
+      ret[Constants.FOLDERS] = copy.deepcopy(self.log_folders)
     return ret
 
 
@@ -212,8 +218,11 @@
     """
     # Perform few modifications to stay compatible with the way in which
     # site.pp files are generated by manifestGenerator.py
-    public_fqdn = self.public_fqdn
-    command['public_hostname'] = public_fqdn
+    command['public_hostname'] = self.public_fqdn
+    if 'hostname' in command:
+      command['appmaster_hostname'] = command['hostname']
+    command['hostname'] = self.public_fqdn
+
     # Now, dump the json file
     command_type = command['commandType']
     from ActionQueue import ActionQueue  # To avoid cyclic dependency
@@ -226,13 +235,13 @@
       task_id = command['taskId']
       file_path = os.path.realpath(posixpath.join(self.tmp_dir, "command-{0}.json".format(task_id)))
       # Json may contain passwords, that's why we need proper permissions
-    if os.path.isfile(file_path):
+    if os.path.isfile(file_path) and os.path.exists(file_path):
       os.unlink(file_path)
 
     self.finalize_command(command, store_command, allocated_ports)
 
     with os.fdopen(os.open(file_path, os.O_WRONLY | os.O_CREAT,
-                           0600), 'w') as f:
+                           0644), 'w') as f:
       content = json.dumps(command, sort_keys=False, indent=4)
       f.write(content)
     return file_path
@@ -242,16 +251,16 @@
   ${AGENT_WORK_ROOT} -> AgentConfig.getWorkRootPath()
   ${AGENT_LOG_ROOT} -> AgentConfig.getLogPath()
   ALLOCATED_PORT is a hint to allocate port. It works as follows:
-  Its of the form {component_name.ALLOCATED_PORT}[{DEFAULT_default_port}][{DO_NOT_PROPAGATE}]
+  It's of the form {component_name.ALLOCATED_PORT}[{DEFAULT_default_port}][{PER_CONTAINER}]
   Either a port gets allocated or if not then just set the value to "0"
   """
-
   def finalize_command(self, command, store_command, allocated_ports):
     component = command['componentName']
     allocated_for_this_component_format = "${{{0}.ALLOCATED_PORT}}"
     allocated_for_any = ".ALLOCATED_PORT}"
 
     port_allocation_req = allocated_for_this_component_format.format(component)
+    allowed_ports = self.get_allowed_ports(command)
     if 'configurations' in command:
       for key in command['configurations']:
         if len(command['configurations'][key]) > 0:
@@ -262,7 +271,7 @@
               value = value.replace("${AGENT_LOG_ROOT}",
                                     self.config.getLogPath())
               if port_allocation_req in value:
-                value = self.allocate_ports(value, port_allocation_req)
+                value = self.allocate_ports(value, port_allocation_req, allowed_ports)
                 allocated_ports[key + "." + k] = value
               elif allocated_for_any in value:
                 ## All unallocated ports should be set to 0
@@ -285,7 +294,7 @@
   All unallocated ports should be set to 0
   Look for "${SOME_COMPONENT_NAME.ALLOCATED_PORT}"
         or "${component.ALLOCATED_PORT}{DEFAULT_port}"
-        or "${component.ALLOCATED_PORT}{DEFAULT_port}{DO_NOT_PROPAGATE}"
+        or "${component.ALLOCATED_PORT}{DEFAULT_port}{PER_CONTAINER}"
   """
 
   def set_all_unallocated_ports(self, value):
@@ -314,11 +323,11 @@
   Port allocation can ask for multiple dynamic ports
   port_req_pattern is of type ${component_name.ALLOCATED_PORT}
     append {DEFAULT_ and find the default value
-    append {DO_NOT_PROPAGATE} if it exists
+    append {PER_CONTAINER} if it exists
   """
-  def allocate_ports(self, value, port_req_pattern):
+  def allocate_ports(self, value, port_req_pattern, allowed_ports=None):
     default_port_pattern = "{DEFAULT_"
-    do_not_propagate_pattern = "{DO_NOT_PROPAGATE}"
+    do_not_propagate_pattern = "{PER_CONTAINER}"
     index = value.find(port_req_pattern)
     while index != -1:
       replaced_pattern = port_req_pattern
@@ -338,7 +347,7 @@
       if index == value.find(replaced_pattern + do_not_propagate_pattern):
         replaced_pattern = replaced_pattern + do_not_propagate_pattern
         pass
-      port = self.allocate_port(def_port)
+      port = self.allocate_port(def_port, allowed_ports)
       value = value.replace(replaced_pattern, str(port), 1)
       logger.info("Allocated port " + str(port) + " for " + replaced_pattern)
       index = value.find(port_req_pattern)
@@ -347,24 +356,28 @@
     pass
 
 
-  def allocate_port(self, default_port=None):
+  def allocate_port(self, default_port=None, allowed_ports=None):
     if default_port != None:
       if self.is_port_available(default_port):
         return default_port
 
-    MAX_ATTEMPT = 5
-    iter = 0
+    port_list = [0] * MAX_ATTEMPTS
+    if allowed_ports != None:
+      port_list = allowed_ports
+
+    i = 0
     port = -1
-    while iter < MAX_ATTEMPT:
-      iter = iter + 1
+    itor = iter(port_list)
+    while i < min(len(port_list), MAX_ATTEMPTS):
       try:
         sock = socket.socket()
-        sock.bind(('', 0))
+        sock.bind(('', itor.next()))
         port = sock.getsockname()[1]
       except Exception, err:
-        logger.info("Encountered error while trying to opening socket - " + str(err))
+        logger.info("Encountered error while trying to open socket - " + str(err))
       finally:
         sock.close()
+      i = i + 1
       pass
     logger.info("Allocated dynamic port: " + str(port))
     return port
@@ -380,3 +393,43 @@
     return False
 
 
+  def get_allowed_ports(self, command):
+    allowed_ports = None
+    global_config = command['configurations'].get('global')
+    if global_config != None:
+      allowed_ports_value = global_config.get("slider.allowed.ports")
+      if allowed_ports_value:
+        allowed_ports = self.get_allowed_port_list(allowed_ports_value)
+
+    return allowed_ports
+
+
+  def get_allowed_port_list(self, allowedPortsOptionValue,
+                            num_values=MAX_ATTEMPTS):
+    selection = set()
+    invalid = set()
+    # tokens are comma seperated values
+    tokens = [x.strip() for x in allowedPortsOptionValue.split(',')]
+    for i in tokens:
+      try:
+        selection.add(int(i))
+      except ValueError:
+        # should be a range
+        try:
+          token = [int(k.strip()) for k in i.split('-')]
+          if len(token) > 1:
+            token.sort()
+            first = token[0]
+            last = token[len(token)-1]
+            for x in range(first, last+1):
+              selection.add(x)
+        except ValueError:
+          # not an int and not a range...
+          invalid.add(i)
+    selection = random.sample(selection, min(len(selection), num_values))
+    # Report invalid tokens before returning valid selection
+    logger.info("Allowed port values: " + str(selection))
+    logger.warning("Invalid port range values: " + str(invalid))
+    return selection
+
+
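
`get_allowed_port_list()` accepts the `slider.allowed.ports` global as a
comma-separated mix of single ports and `low-high` ranges, and then randomly
samples up to `MAX_ATTEMPTS` candidates for `allocate_port()` to try binding
in turn; sampling spreads contention when many containers share one range. A
trimmed version of the parser (it returns invalid tokens instead of logging
them, and treats anything that is neither an int nor a two-ended range as
invalid):

```python
import random

MAX_ATTEMPTS = 5  # same cap the orchestrator uses for bind attempts

def parse_allowed_ports(value, num_values=MAX_ATTEMPTS):
    # "48000,49000-49005,oops" -> (sampled ports, invalid tokens)
    selection, invalid = set(), set()
    for token in (t.strip() for t in value.split(',')):
        try:
            selection.add(int(token))
        except ValueError:
            try:
                low, high = sorted(int(k.strip()) for k in token.split('-'))
                selection.update(range(low, high + 1))
            except ValueError:
                invalid.add(token)  # neither a port nor a low-high range
    ports = random.sample(sorted(selection), min(len(selection), num_values))
    return ports, invalid

ports, bad = parse_allowed_ports("48000,49000-49005,oops")
assert bad == set(["oops"])
assert all(p == 48000 or 49000 <= p <= 49005 for p in ports)
```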
diff --git a/slider-agent/src/main/python/agent/Heartbeat.py b/slider-agent/src/main/python/agent/Heartbeat.py
index b107d92..e157ce3 100644
--- a/slider-agent/src/main/python/agent/Heartbeat.py
+++ b/slider-agent/src/main/python/agent/Heartbeat.py
@@ -31,16 +31,17 @@
 logger = logging.getLogger()
 
 class Heartbeat:
-  def __init__(self, actionQueue, config=None):
+  def __init__(self, actionQueue, config=None, agentToggleLogger=None):
     self.actionQueue = actionQueue
     self.config = config
     self.reports = []
+    self.agentToggleLogger = agentToggleLogger
 
   def build(self, commandResult, id='-1',
             componentsMapped=False):
     timestamp = int(time.time() * 1000)
     queueResult = self.actionQueue.result()
-    logger.info("Queue result: " + pformat(queueResult))
+    self.agentToggleLogger.log("Queue result: " + pformat(queueResult))
 
     nodeStatus = {"status": "HEALTHY",
                   "cause": "NONE"}
@@ -93,10 +94,11 @@
       if len(componentStatuses) > 0:
         heartbeat['componentStatus'] = componentStatuses
 
-    logger.info("Sending heartbeat with response id: " + str(id) + " and "
-                                                                   "timestamp: " + str(timestamp) +
-                ". Command(s) in progress: " + repr(commandsInProgress) +
-                ". Components mapped: " + repr(componentsMapped))
+    self.agentToggleLogger.log(
+                 "Sending heartbeat with response id: " + str(id) + " and "
+                 "timestamp: " + str(timestamp) +
+                 ". Command(s) in progress: " + repr(commandsInProgress) +
+                 ". Components mapped: " + repr(componentsMapped))
     logger.debug("Heartbeat : " + pformat(heartbeat))
 
     return heartbeat
diff --git a/slider-agent/src/main/python/agent/ProcessHelper.py b/slider-agent/src/main/python/agent/ProcessHelper.py
index 467c4d8..81737f4 100644
--- a/slider-agent/src/main/python/agent/ProcessHelper.py
+++ b/slider-agent/src/main/python/agent/ProcessHelper.py
@@ -24,11 +24,12 @@
 import sys
 import posixpath
 from shell import getTempFiles
+import Constants
 
 logger = logging.getLogger()
 
-if 'AGENT_WORK_ROOT' in os.environ:
-  pidfile = os.path.realpath(posixpath.join(os.environ['AGENT_WORK_ROOT'], "infra", "run", "agent.pid"))
+if Constants.AGENT_WORK_ROOT in os.environ:
+  pidfile = os.path.realpath(posixpath.join(os.environ[Constants.AGENT_WORK_ROOT], "infra", "run", "agent.pid"))
 else:
   pidfile = None
 
diff --git a/slider-agent/src/main/python/agent/PythonExecutor.py b/slider-agent/src/main/python/agent/PythonExecutor.py
index 54ce247..985d75f 100644
--- a/slider-agent/src/main/python/agent/PythonExecutor.py
+++ b/slider-agent/src/main/python/agent/PythonExecutor.py
@@ -30,6 +30,7 @@
 import sys
 import platform
 import Constants
+from AgentToggleLogger import AgentToggleLogger
 
 
 logger = logging.getLogger()
@@ -47,9 +48,10 @@
   event = threading.Event()
   python_process_has_been_killed = False
 
-  def __init__(self, tmpDir, config):
+  def __init__(self, tmpDir, config, agentToggleLogger):
     self.tmpDir = tmpDir
     self.config = config
+    self.agentToggleLogger = agentToggleLogger
     pass
 
   def run_file(self, script, script_params, tmpoutfile, tmperrfile, timeout,
@@ -81,7 +83,7 @@
 
     script_params += [tmpstructedoutfile, logger_level]
     pythonCommand = self.python_command(script, script_params)
-    logger.info("Running command " + pprint.pformat(pythonCommand))
+    self.agentToggleLogger.log("Running command " + pprint.pformat(pythonCommand))
     process = self.launch_python_subprocess(pythonCommand, tmpout, tmperr,
                                             environment_vars)
     logger.debug("Launching watchdog thread")
@@ -99,24 +101,23 @@
     out = open(tmpoutfile, 'r').read()
     error = open(tmperrfile, 'r').read()
 
+    structured_out = {}
     try:
       with open(tmpstructedoutfile, 'r') as fp:
         structured_out = json.load(fp)
-    except Exception:
+    except Exception as e:
       if os.path.exists(tmpstructedoutfile):
-        errMsg = 'Unable to read structured output from ' + tmpstructedoutfile
+        errMsg = 'Unable to read structured output from ' + tmpstructedoutfile + ' ' + str(e)
         structured_out = {
           'msg': errMsg
         }
         logger.warn(structured_out)
-      else:
-        structured_out = {}
 
     if self.python_process_has_been_killed:
       error = str(error) + "\n Python script has been killed due to timeout"
       returncode = 999
     result = self.condenseOutput(out, error, returncode, structured_out)
-    logger.info("Result: %s" % result)
+    self.agentToggleLogger.log("Result: %s" % result)
     return result
 
 
@@ -130,7 +131,7 @@
     env = os.environ.copy()
     if environment_vars:
       for k, v in environment_vars:
-        logger.info("Setting env: %s to %s", k, v)
+        self.agentToggleLogger.log("Setting env: %s to %s", k, v)
         env[k] = v
     return subprocess.Popen(command,
                             stdout=tmpout,
diff --git a/slider-agent/src/main/python/agent/Register.py b/slider-agent/src/main/python/agent/Register.py
index b59154f..c5197fd 100644
--- a/slider-agent/src/main/python/agent/Register.py
+++ b/slider-agent/src/main/python/agent/Register.py
@@ -29,19 +29,21 @@
   def __init__(self, config):
     self.config = config
 
-  def build(self, actualState, expectedState, allocated_ports, id='-1'):
+  def build(self, actualState, expectedState, allocated_ports, log_folders, tags="", id='-1'):
     timestamp = int(time.time() * 1000)
 
     version = self.read_agent_version()
 
     register = {'responseId': int(id),
                 'timestamp': timestamp,
-                'hostname': self.config.getLabel(),
+                'label': self.config.getLabel(),
                 'publicHostname': hostname.public_hostname(),
                 'agentVersion': version,
                 'actualState': actualState,
                 'expectedState': expectedState,
-                'allocatedPorts': allocated_ports
+                'allocatedPorts': allocated_ports,
+                'logFolders': log_folders,
+                'tags': tags
     }
     return register
 
diff --git a/slider-agent/src/main/python/agent/Registry.py b/slider-agent/src/main/python/agent/Registry.py
index 37736fe..e0bc5da 100644
--- a/slider-agent/src/main/python/agent/Registry.py
+++ b/slider-agent/src/main/python/agent/Registry.py
@@ -24,14 +24,16 @@
 
 logger = logging.getLogger()
 
-class Registry:
+class Registry(object):
   def __init__(self, zk_quorum, zk_reg_path):
     self.zk_quorum = zk_quorum
     self.zk_reg_path = zk_reg_path
 
   def readAMHostPort(self):
+    logger.debug("Trying to connect to ZK...")
     amHost = ""
     amSecuredPort = ""
+    amUnsecuredPort = ""
     zk = None
     try:
       zk = KazooClient(hosts=self.zk_quorum, read_only=True)
@@ -39,19 +41,30 @@
       data, stat = zk.get(self.zk_reg_path)
       logger.debug("Registry Data: %s" % (data.decode("utf-8")))
       sliderRegistry = json.loads(data)
-      amUrl = sliderRegistry["payload"]["internalView"]["endpoints"]["org.apache.slider.agents"]["address"]
-      amHost = amUrl.split("/")[2].split(":")[0]
-      amSecuredPort = amUrl.split(":")[2].split("/")[0]
-      # the port needs to be utf-8 encoded 
+      internalAttr = sliderRegistry["internal"]
+      for internal in internalAttr:
+        if internal["api"] == "classpath:org.apache.slider.agents.secure":
+          address0 = internal["addresses"][0]
+          amUrl = address0["uri"]
+          amHost = amUrl.split("/")[2].split(":")[0]
+          amSecuredPort = amUrl.split(":")[2].split("/")[0]
+        if internal["api"] == "classpath:org.apache.slider.agents.oneway":
+          address0 = internal["addresses"][0]
+          amUnsecureUrl = address0["uri"]
+          amHost = amUnsecureUrl.split("/")[2].split(":")[0]
+          amUnsecuredPort = amUnsecureUrl.split(":")[2].split("/")[0]
+
+      # the ports need to be utf-8 encoded
       amSecuredPort = amSecuredPort.encode('utf8', 'ignore')
-    except Exception:
+      amUnsecuredPort = amUnsecuredPort.encode('utf8', 'ignore')
+    except Exception, e:
       # log and let empty strings be returned
-      logger.error("Could not connect to zk registry at %s in quorum %s" % 
-                   (self.zk_reg_path, self.zk_quorum))
+      logger.error("Could not connect to zk registry at %s in quorum %s. Error: %s" %
+                   (self.zk_reg_path, self.zk_quorum, str(e)))
       pass
     finally:
-      if not zk == None:
+      if zk is not None:
         zk.stop()
         zk.close()
-    logger.info("AM Host = %s, AM Secured Port = %s" % (amHost, amSecuredPort))
-    return amHost, amSecuredPort
+    logger.info("AM Host = %s, AM Secured Port = %s, ping port = %s" % (amHost, amSecuredPort, amUnsecuredPort))
+    return amHost, amUnsecuredPort, amSecuredPort
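
The registry record now exposes an `internal` list of endpoint entries, and
the agent selects the secure and one-way agent APIs by their `api`
identifiers, extracting host and port from each endpoint's first URI. Reduced
to just the JSON handling, with an invented sample record:

```python
import json

sample = json.loads("""{
  "internal": [
    {"api": "classpath:org.apache.slider.agents.secure",
     "addresses": [{"uri": "https://host1:40891/ws/v1/slider/agents"}]},
    {"api": "classpath:org.apache.slider.agents.oneway",
     "addresses": [{"uri": "https://host1:40890/ws/v1/slider/agents"}]}
  ]
}""")

def host_and_port(uri):
    # "scheme://host:port/path" -> (host, port), same splitting as the agent
    return uri.split("/")[2].split(":")[0], uri.split(":")[2].split("/")[0]

secured = unsecured = ("", "")
for endpoint in sample["internal"]:
    if endpoint["api"] == "classpath:org.apache.slider.agents.secure":
        secured = host_and_port(endpoint["addresses"][0]["uri"])
    if endpoint["api"] == "classpath:org.apache.slider.agents.oneway":
        unsecured = host_and_port(endpoint["addresses"][0]["uri"])

assert secured == ("host1", "40891") and unsecured == ("host1", "40890")
```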
diff --git a/slider-agent/src/main/python/agent/main.py b/slider-agent/src/main/python/agent/main.py
index f68db04..3a75cb1 100644
--- a/slider-agent/src/main/python/agent/main.py
+++ b/slider-agent/src/main/python/agent/main.py
@@ -43,7 +43,7 @@
 agentPid = os.getpid()
 
 configFileRelPath = "infra/conf/agent.ini"
-logFileName = "agent.log"
+logFileName = "slider-agent.log"
 
 SERVER_STATUS_URL="https://{0}:{1}{2}"
 
@@ -172,7 +172,8 @@
     if pid == -1:
       print ("Agent process is not running")
     else:
-      os.kill(pid, signal.SIGKILL)
+      if not IS_WINDOWS:
+        os.kill(pid, signal.SIGKILL)
     os._exit(1)
 
 
@@ -185,9 +186,14 @@
   parser.add_option("--debug", dest="debug", help="Agent debug hint", default="")
   (options, args) = parser.parse_args()
 
-  if not 'AGENT_WORK_ROOT' in os.environ:
-    parser.error("AGENT_WORK_ROOT environment variable must be set.")
-  options.root_folder = os.environ['AGENT_WORK_ROOT']
+  if not Constants.AGENT_WORK_ROOT in os.environ and not 'PWD' in os.environ:
+    parser.error("AGENT_WORK_ROOT environment variable or PWD must be set.")
+  if Constants.AGENT_WORK_ROOT in os.environ:
+    options.root_folder = os.environ[Constants.AGENT_WORK_ROOT]
+  else:
+    # some launch environments do not end up setting all environment variables
+    options.root_folder = os.environ['PWD']
+
   if not 'AGENT_LOG_ROOT' in os.environ:
     parser.error("AGENT_LOG_ROOT environment variable must be set.")
   options.log_folder = os.environ['AGENT_LOG_ROOT']
@@ -217,15 +223,6 @@
   if options.debug:
     agentConfig.set(AgentConfig.AGENT_SECTION, AgentConfig.APP_DBG_CMD, options.debug)
 
-  # Extract the AM hostname and secured port from ZK registry
-  registry = Registry(options.zk_quorum, options.zk_reg_path)
-  amHost, amSecuredPort = registry.readAMHostPort()
-  if amHost:
-      agentConfig.set(AgentConfig.SERVER_SECTION, "hostname", amHost)
-
-  if amSecuredPort:
-      agentConfig.set(AgentConfig.SERVER_SECTION, "secured_port", amSecuredPort)
-
   # set the security directory to a subdirectory of the run dir
   secDir = posixpath.join(agentConfig.getResolvedPath(AgentConfig.RUN_DIR), "security")
   logger.info("Security/Keys directory: " + secDir)
@@ -248,16 +245,44 @@
   if len(all_log_folders) > 1:
     logger.info("Selected log folder from available: " + ",".join(all_log_folders))
 
-  server_url = SERVER_STATUS_URL.format(
-    agentConfig.get(AgentConfig.SERVER_SECTION, 'hostname'),
-    agentConfig.get(AgentConfig.SERVER_SECTION, 'secured_port'),
-    agentConfig.get(AgentConfig.SERVER_SECTION, 'check_path'))
-  print("Connecting to the server at " + server_url + "...")
-  logger.info('Connecting to the server at: ' + server_url)
+  # Extract the AM hostname and secured port from ZK registry
+  zk_lookup_tries = 0
+  while zk_lookup_tries < Constants.MAX_AM_CONNECT_RETRIES:
+    registry = Registry(options.zk_quorum, options.zk_reg_path)
+    amHost, amUnsecuredPort, amSecuredPort = registry.readAMHostPort()
 
-  # Wait until server is reachable
-  netutil = NetUtil()
-  netutil.try_to_connect(server_url, -1, logger)
+    tryConnect = True
+    if not amHost or not amSecuredPort or not amUnsecuredPort:
+      logger.info("Unable to extract AM host details from ZK, retrying ...")
+      tryConnect = False
+      time.sleep(NetUtil.CONNECT_SERVER_RETRY_INTERVAL_SEC)
+
+    if tryConnect:
+      if amHost:
+        agentConfig.set(AgentConfig.SERVER_SECTION, "hostname", amHost)
+
+      if amSecuredPort:
+        agentConfig.set(AgentConfig.SERVER_SECTION, "secured_port", amSecuredPort)
+
+      if amUnsecuredPort:
+        agentConfig.set(AgentConfig.SERVER_SECTION, "port", amUnsecuredPort)
+
+      server_url = SERVER_STATUS_URL.format(
+        agentConfig.get(AgentConfig.SERVER_SECTION, 'hostname'),
+        agentConfig.get(AgentConfig.SERVER_SECTION, 'port'),
+        agentConfig.get(AgentConfig.SERVER_SECTION, 'check_path'))
+      print("Connecting to the server at " + server_url + "...")
+      logger.info('Connecting to the server at: ' + server_url)
+
+      # Probe the server briefly; if it stays unreachable, re-query ZK
+      netutil = NetUtil()
+      retries = netutil.try_to_connect(server_url, 3, logger)
+      if retries < 3:
+        break
+      pass
+    pass
+    zk_lookup_tries += 1
+  pass
 
   # Launch Controller communication
   controller = Controller(agentConfig)
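
Rather than reading the AM endpoint from ZooKeeper once and blocking on the
status URL forever, main() now loops: each of up to MAX_AM_CONNECT_RETRIES
passes re-reads the registry and makes a short three-attempt connection probe,
so an AM that restarted and re-registered at a new host and port is picked up.
The control flow in outline, with the registry read and the probe stubbed out
and an illustrative URL path:

```python
import time

MAX_AM_CONNECT_RETRIES = 10  # Constants.MAX_AM_CONNECT_RETRIES
RETRY_INTERVAL_SEC = 0.1     # stands in for NetUtil.CONNECT_SERVER_RETRY_INTERVAL_SEC

def read_am_host_port():
    # Stand-in for Registry.readAMHostPort(): (host, unsecured, secured)
    return "am-host.example.com", "40890", "40891"

def try_to_connect(url, max_retries):
    # Stand-in for NetUtil.try_to_connect(); returns the attempts it used
    return 1

tries = 0
while tries < MAX_AM_CONNECT_RETRIES:
    host, unsecured, secured = read_am_host_port()
    if host and unsecured and secured:
        url = "https://%s:%s/some/check/path" % (host, unsecured)
        if try_to_connect(url, 3) < 3:
            break  # AM reachable: fall through to registration and heartbeats
    else:
        time.sleep(RETRY_INTERVAL_SEC)  # nothing usable in ZK yet; re-read
    tries += 1
```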
diff --git a/slider-agent/src/main/python/jinja2/ext/Vim/htmljinja.vim b/slider-agent/src/main/python/jinja2/ext/Vim/htmljinja.vim
deleted file mode 100644
index 3f9cba4..0000000
--- a/slider-agent/src/main/python/jinja2/ext/Vim/htmljinja.vim
+++ /dev/null
@@ -1,27 +0,0 @@
-" Vim syntax file
-" Language:	Jinja HTML template
-" Maintainer:	Armin Ronacher <armin.ronacher@active-4.com>
-" Last Change:	2007 Apr 8
-
-" For version 5.x: Clear all syntax items
-" For version 6.x: Quit when a syntax file was already loaded
-if version < 600
-  syntax clear
-elseif exists("b:current_syntax")
-  finish
-endif
-
-if !exists("main_syntax")
-  let main_syntax = 'html'
-endif
-
-if version < 600
-  so <sfile>:p:h/jinja.vim
-  so <sfile>:p:h/html.vim
-else
-  runtime! syntax/jinja.vim
-  runtime! syntax/html.vim
-  unlet b:current_syntax
-endif
-
-let b:current_syntax = "htmljinja"
diff --git a/slider-agent/src/main/python/jinja2/ext/Vim/jinja.vim b/slider-agent/src/main/python/jinja2/ext/Vim/jinja.vim
deleted file mode 100644
index 919954b..0000000
--- a/slider-agent/src/main/python/jinja2/ext/Vim/jinja.vim
+++ /dev/null
@@ -1,113 +0,0 @@
-" Vim syntax file
-" Language:	Jinja template
-" Maintainer:	Armin Ronacher <armin.ronacher@active-4.com>
-" Last Change:	2008 May 9
-" Version:      1.1
-"
-" Known Bugs:
-"   because of odd limitations dicts and the modulo operator
-"   appear wrong in the template.
-"
-" Changes:
-"
-"     2008 May 9:     Added support for Jinja2 changes (new keyword rules)
-
-" For version 5.x: Clear all syntax items
-" For version 6.x: Quit when a syntax file was already loaded
-if version < 600
-  syntax clear
-elseif exists("b:current_syntax")
-  finish
-endif
-
-syntax case match
-
-" Jinja template built-in tags and parameters (without filter, macro, is and raw, they
-" have special threatment)
-syn keyword jinjaStatement containedin=jinjaVarBlock,jinjaTagBlock,jinjaNested contained and if else in not or recursive as import
-
-syn keyword jinjaStatement containedin=jinjaVarBlock,jinjaTagBlock,jinjaNested contained is filter skipwhite nextgroup=jinjaFilter
-syn keyword jinjaStatement containedin=jinjaTagBlock contained macro skipwhite nextgroup=jinjaFunction
-syn keyword jinjaStatement containedin=jinjaTagBlock contained block skipwhite nextgroup=jinjaBlockName
-
-" Variable Names
-syn match jinjaVariable containedin=jinjaVarBlock,jinjaTagBlock,jinjaNested contained skipwhite /[a-zA-Z_][a-zA-Z0-9_]*/
-syn keyword jinjaSpecial containedin=jinjaVarBlock,jinjaTagBlock,jinjaNested contained false true none False True None loop super caller varargs kwargs
-
-" Filters
-syn match jinjaOperator "|" containedin=jinjaVarBlock,jinjaTagBlock,jinjaNested contained nextgroup=jinjaFilter
-syn match jinjaFilter contained skipwhite /[a-zA-Z_][a-zA-Z0-9_]*/
-syn match jinjaFunction contained skipwhite /[a-zA-Z_][a-zA-Z0-9_]*/
-syn match jinjaBlockName contained skipwhite /[a-zA-Z_][a-zA-Z0-9_]*/
-
-" Jinja template constants
-syn region jinjaString containedin=jinjaVarBlock,jinjaTagBlock,jinjaNested contained start=/"/ skip=/\\"/ end=/"/
-syn region jinjaString containedin=jinjaVarBlock,jinjaTagBlock,jinjaNested contained start=/'/ skip=/\\'/ end=/'/
-syn match jinjaNumber containedin=jinjaVarBlock,jinjaTagBlock,jinjaNested contained /[0-9]\+\(\.[0-9]\+\)\?/
-
-" Operators
-syn match jinjaOperator containedin=jinjaVarBlock,jinjaTagBlock,jinjaNested contained /[+\-*\/<>=!,:]/
-syn match jinjaPunctuation containedin=jinjaVarBlock,jinjaTagBlock,jinjaNested contained /[()\[\]]/
-syn match jinjaOperator containedin=jinjaVarBlock,jinjaTagBlock,jinjaNested contained /\./ nextgroup=jinjaAttribute
-syn match jinjaAttribute contained /[a-zA-Z_][a-zA-Z0-9_]*/
-
-" Jinja template tag and variable blocks
-syn region jinjaNested matchgroup=jinjaOperator start="(" end=")" transparent display containedin=jinjaVarBlock,jinjaTagBlock,jinjaNested contained
-syn region jinjaNested matchgroup=jinjaOperator start="\[" end="\]" transparent display containedin=jinjaVarBlock,jinjaTagBlock,jinjaNested contained
-syn region jinjaNested matchgroup=jinjaOperator start="{" end="}" transparent display containedin=jinjaVarBlock,jinjaTagBlock,jinjaNested contained
-syn region jinjaTagBlock matchgroup=jinjaTagDelim start=/{%-\?/ end=/-\?%}/ skipwhite containedin=ALLBUT,jinjaTagBlock,jinjaVarBlock,jinjaRaw,jinjaString,jinjaNested,jinjaComment
-
-syn region jinjaVarBlock matchgroup=jinjaVarDelim start=/{{-\?/ end=/-\?}}/ containedin=ALLBUT,jinjaTagBlock,jinjaVarBlock,jinjaRaw,jinjaString,jinjaNested,jinjaComment
-
-" Jinja template 'raw' tag
-syn region jinjaRaw matchgroup=jinjaRawDelim start="{%\s*raw\s*%}" end="{%\s*endraw\s*%}" containedin=ALLBUT,jinjaTagBlock,jinjaVarBlock,jinjaString,jinjaComment
-
-" Jinja comments
-syn region jinjaComment matchgroup=jinjaCommentDelim start="{#" end="#}" containedin=ALLBUT,jinjaTagBlock,jinjaVarBlock,jinjaString
-
-" Block start keywords.  A bit tricker.  We only highlight at the start of a
-" tag block and only if the name is not followed by a comma or equals sign
-" which usually means that we have to deal with an assignment.
-syn match jinjaStatement containedin=jinjaTagBlock contained skipwhite /\({%-\?\s*\)\@<=\<[a-zA-Z_][a-zA-Z0-9_]*\>\(\s*[,=]\)\@!/
-
-" and context modifiers
-syn match jinjaStatement containedin=jinjaTagBlock contained /\<with\(out\)\?\s\+context\>/ skipwhite
-
-
-" Define the default highlighting.
-" For version 5.7 and earlier: only when not done already
-" For version 5.8 and later: only when an item doesn't have highlighting yet
-if version >= 508 || !exists("did_jinja_syn_inits")
-  if version < 508
-    let did_jinja_syn_inits = 1
-    command -nargs=+ HiLink hi link <args>
-  else
-    command -nargs=+ HiLink hi def link <args>
-  endif
-
-  HiLink jinjaPunctuation jinjaOperator
-  HiLink jinjaAttribute jinjaVariable
-  HiLink jinjaFunction jinjaFilter
-
-  HiLink jinjaTagDelim jinjaTagBlock
-  HiLink jinjaVarDelim jinjaVarBlock
-  HiLink jinjaCommentDelim jinjaComment
-  HiLink jinjaRawDelim jinja
-
-  HiLink jinjaSpecial Special
-  HiLink jinjaOperator Normal
-  HiLink jinjaRaw Normal
-  HiLink jinjaTagBlock PreProc
-  HiLink jinjaVarBlock PreProc
-  HiLink jinjaStatement Statement
-  HiLink jinjaFilter Function
-  HiLink jinjaBlockName Function
-  HiLink jinjaVariable Identifier
-  HiLink jinjaString Constant
-  HiLink jinjaNumber Constant
-  HiLink jinjaComment Comment
-
-  delcommand HiLink
-endif
-
-let b:current_syntax = "jinja"
diff --git a/slider-agent/src/main/python/jinja2/ext/django2jinja/django2jinja.py b/slider-agent/src/main/python/jinja2/ext/django2jinja/django2jinja.py
deleted file mode 100644
index 6d9e76c..0000000
--- a/slider-agent/src/main/python/jinja2/ext/django2jinja/django2jinja.py
+++ /dev/null
@@ -1,768 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-    Django to Jinja
-    ~~~~~~~~~~~~~~~
-
-    Helper module that can convert django templates into Jinja2 templates.
-
-    This file is not intended to be used as stand alone application but to
-    be used as library.  To convert templates you basically create your own
-    writer, add extra conversion logic for your custom template tags,
-    configure your django environment and run the `convert_templates`
-    function.
-
-    Here a simple example::
-
-        # configure django (or use settings.configure)
-        import os
-        os.environ['DJANGO_SETTINGS_MODULE'] = 'yourapplication.settings'
-        from yourapplication.foo.templatetags.bar import MyNode
-
-        from django2jinja import Writer, convert_templates
-
-        def write_my_node(writer, node):
-            writer.start_variable()
-            writer.write('myfunc(')
-            for idx, arg in enumerate(node.args):
-                if idx:
-                    writer.write(', ')
-                writer.node(arg)
-            writer.write(')')
-            writer.end_variable()
-
-        writer = Writer()
-        writer.node_handlers[MyNode] = write_my_node
-        convert_templates('/path/to/output/folder', writer=writer)
-    
-    Here is an example hos to automatically translate your django
-    variables to jinja2::
-        
-        import re
-        # List of tuple (Match pattern, Replace pattern, Exclusion pattern)
-        
-        var_re  = ((re.compile(r"(u|user)\.is_authenticated"), r"\1.is_authenticated()", None),
-                  (re.compile(r"\.non_field_errors"), r".non_field_errors()", None),
-                  (re.compile(r"\.label_tag"), r".label_tag()", None),
-                  (re.compile(r"\.as_dl"), r".as_dl()", None),
-                  (re.compile(r"\.as_table"), r".as_table()", None),
-                  (re.compile(r"\.as_widget"), r".as_widget()", None),
-                  (re.compile(r"\.as_hidden"), r".as_hidden()", None),
-                  
-                  (re.compile(r"\.get_([0-9_\w]+)_url"), r".get_\1_url()", None),
-                  (re.compile(r"\.url"), r".url()", re.compile(r"(form|calendar).url")),
-                  (re.compile(r"\.get_([0-9_\w]+)_display"), r".get_\1_display()", None),
-                  (re.compile(r"loop\.counter"), r"loop.index", None),
-                  (re.compile(r"loop\.revcounter"), r"loop.revindex", None),
-                  (re.compile(r"request\.GET\.([0-9_\w]+)"), r"request.GET.get('\1', '')", None),
-                  (re.compile(r"request\.get_host"), r"request.get_host()", None),
-                  
-                  (re.compile(r"\.all(?!_)"), r".all()", None),
-                  (re.compile(r"\.all\.0"), r".all()[0]", None),
-                  (re.compile(r"\.([0-9])($|\s+)"), r"[\1]\2", None),
-                  (re.compile(r"\.items"), r".items()", None),
-        )
-        writer = Writer(var_re=var_re)
-        
-    For details about the writing process have a look at the module code.
-
-    :copyright: (c) 2009 by the Jinja Team.
-    :license: BSD.
-"""
-import re
-import os
-import sys
-from jinja2.defaults import *
-from django.conf import settings
-from django.template import defaulttags as core_tags, loader, TextNode, \
-     FilterExpression, libraries, Variable, loader_tags, TOKEN_TEXT, \
-     TOKEN_VAR
-from django.template.debug import DebugVariableNode as VariableNode
-from django.templatetags import i18n as i18n_tags
-from StringIO import StringIO
-
-
-_node_handlers = {}
-_resolved_filters = {}
-_newline_re = re.compile(r'(?:\r\n|\r|\n)')
-
-
-# Django stores an itertools object on the cycle node.  Not only is this
-# thread unsafe but also a problem for the converter which needs the raw
-# string values passed to the constructor to create a jinja loop.cycle()
-# call from it.
-_old_cycle_init = core_tags.CycleNode.__init__
-def _fixed_cycle_init(self, cyclevars, variable_name=None):
-    self.raw_cycle_vars = map(Variable, cyclevars)
-    _old_cycle_init(self, cyclevars, variable_name)
-core_tags.CycleNode.__init__ = _fixed_cycle_init
-
-
-def node(cls):
-    def proxy(f):
-        _node_handlers[cls] = f
-        return f
-    return proxy
-
-
-def convert_templates(output_dir, extensions=('.html', '.txt'), writer=None,
-                      callback=None):
-    """Iterates over all templates in the template dirs configured and
-    translates them and writes the new templates into the output directory.
-    """
-    if writer is None:
-        writer = Writer()
-
-    def filter_templates(files):
-        for filename in files:
-            ifilename = filename.lower()
-            for extension in extensions:
-                if ifilename.endswith(extension):
-                    yield filename
-
-    def translate(f, loadname):
-        template = loader.get_template(loadname)
-        original = writer.stream
-        writer.stream = f
-        writer.body(template.nodelist)
-        writer.stream = original
-
-    if callback is None:
-        def callback(template):
-            print template
-
-    for directory in settings.TEMPLATE_DIRS:
-        for dirname, _, files in os.walk(directory):
-            dirname = dirname[len(directory) + 1:]
-            for filename in filter_templates(files):
-                source = os.path.normpath(os.path.join(dirname, filename))
-                target = os.path.join(output_dir, dirname, filename)
-                basetarget = os.path.dirname(target)
-                if not os.path.exists(basetarget):
-                    os.makedirs(basetarget)
-                callback(source)
-                f = file(target, 'w')
-                try:
-                    translate(f, source)
-                finally:
-                    f.close()
-
-
-class Writer(object):
-    """The core writer class."""
-
-    def __init__(self, stream=None, error_stream=None,
-                 block_start_string=BLOCK_START_STRING,
-                 block_end_string=BLOCK_END_STRING,
-                 variable_start_string=VARIABLE_START_STRING,
-                 variable_end_string=VARIABLE_END_STRING,
-                 comment_start_string=COMMENT_START_STRING,
-                 comment_end_string=COMMENT_END_STRING,
-                 initial_autoescape=True,
-                 use_jinja_autoescape=False,
-                 custom_node_handlers=None,
-                 var_re=[],
-                 env=None):
-        if stream is None:
-            stream = sys.stdout
-        if error_stream is None:
-            error_stream = sys.stderr
-        self.stream = stream
-        self.error_stream = error_stream
-        self.block_start_string = block_start_string
-        self.block_end_string = block_end_string
-        self.variable_start_string = variable_start_string
-        self.variable_end_string = variable_end_string
-        self.comment_start_string = comment_start_string
-        self.comment_end_string = comment_end_string
-        self.autoescape = initial_autoescape
-        self.spaceless = False
-        self.use_jinja_autoescape = use_jinja_autoescape
-        self.node_handlers = dict(_node_handlers,
-                                  **(custom_node_handlers or {}))
-        self._loop_depth = 0
-        self._filters_warned = set()
-        self.var_re = var_re
-        self.env = env
-
-    def enter_loop(self):
-        """Increments the loop depth so that write functions know if they
-        are in a loop.
-        """
-        self._loop_depth += 1
-
-    def leave_loop(self):
-        """Reverse of enter_loop."""
-        self._loop_depth -= 1
-
-    @property
-    def in_loop(self):
-        """True if we are in a loop."""
-        return self._loop_depth > 0
-
-    def write(self, s):
-        """Writes stuff to the stream."""
-        self.stream.write(s.encode(settings.FILE_CHARSET))
-
-    def print_expr(self, expr):
-        """Open a variable tag, write to the string to the stream and close."""
-        self.start_variable()
-        self.write(expr)
-        self.end_variable()
-
-    def _post_open(self):
-        if self.spaceless:
-            self.write('- ')
-        else:
-            self.write(' ')
-
-    def _pre_close(self):
-        if self.spaceless:
-            self.write(' -')
-        else:
-            self.write(' ')
-
-    def start_variable(self):
-        """Start a variable."""
-        self.write(self.variable_start_string)
-        self._post_open()
-
-    def end_variable(self, always_safe=False):
-        """End a variable."""
-        if not always_safe and self.autoescape and \
-           not self.use_jinja_autoescape:
-            self.write('|e')
-        self._pre_close()
-        self.write(self.variable_end_string)
-
-    def start_block(self):
-        """Starts a block."""
-        self.write(self.block_start_string)
-        self._post_open()
-
-    def end_block(self):
-        """Ends a block."""
-        self._pre_close()
-        self.write(self.block_end_string)
-
-    def tag(self, name):
-        """Like `print_expr` just for blocks."""
-        self.start_block()
-        self.write(name)
-        self.end_block()
-
-    def variable(self, name):
-        """Prints a variable.  This performs variable name transformation."""
-        self.write(self.translate_variable_name(name))
-
-    def literal(self, value):
-        """Writes a value as literal."""
-        value = repr(value)
-        if value[:2] in ('u"', "u'"):
-            value = value[1:]
-        self.write(value)
-
-    def filters(self, filters, is_block=False):
-        """Dumps a list of filters."""
-        want_pipe = not is_block
-        for filter, args in filters:
-            name = self.get_filter_name(filter)
-            if name is None:
-                self.warn('Could not find filter %r' % filter)
-                continue
-            if name not in DEFAULT_FILTERS and \
-               name not in self._filters_warned:
-                self._filters_warned.add(name)
-                self.warn('Filter %s probably doesn\'t exist in Jinja' %
-                            name)
-            if not want_pipe:
-                want_pipe = True
-            else:
-                self.write('|')
-            self.write(name)
-            if args:
-                self.write('(')
-                for idx, (is_var, value) in enumerate(args):
-                    if idx:
-                        self.write(', ')
-                    if is_var:
-                        self.node(value)
-                    else:
-                        self.literal(value)
-                self.write(')')
-
-    def get_location(self, origin, position):
-        """Returns the location for an origin and position tuple as name
-        and lineno.
-        """
-        if hasattr(origin, 'source'):
-            source = origin.source
-            name = '<unknown source>'
-        else:
-            source = origin.loader(origin.loadname, origin.dirs)[0]
-            name = origin.loadname
-        lineno = len(_newline_re.findall(source[:position[0]])) + 1
-        return name, lineno
-
-    def warn(self, message, node=None):
-        """Prints a warning to the error stream."""
-        if node is not None and hasattr(node, 'source'):
-            filename, lineno = self.get_location(*node.source)
-            message = '[%s:%d] %s' % (filename, lineno, message)
-        print >> self.error_stream, message
-
-    def translate_variable_name(self, var):
-        """Performs variable name translation."""
-        if self.in_loop and (var == 'forloop' or var.startswith('forloop.')):
-            var = var[3:]
-        
-        for reg, rep, unless in self.var_re:
-            no_unless = not (unless and unless.search(var))
-            if reg.search(var) and no_unless:
-                var = reg.sub(rep, var)
-                break
-        return var
-
-    def get_filter_name(self, filter):
-        """Returns the filter name for a filter function or `None` if there
-        is no such filter.
-        """
-        if filter not in _resolved_filters:
-            for library in libraries.values():
-                for key, value in library.filters.iteritems():
-                    _resolved_filters[value] = key
-        return _resolved_filters.get(filter, None)
-
-    def node(self, node):
-        """Invokes the node handler for a node."""
-        for cls, handler in self.node_handlers.iteritems():
-            if type(node) is cls or type(node).__name__ == cls:
-                handler(self, node)
-                break
-        else:
-            self.warn('Untranslatable node %s.%s found' % (
-                node.__module__,
-                node.__class__.__name__
-            ), node)
-
-    def body(self, nodes):
-        """Calls node() for every node in the iterable passed."""
-        for node in nodes:
-            self.node(node)
-
-
-@node(TextNode)
-def text_node(writer, node):
-    writer.write(node.s)
-
-
-@node(Variable)
-def variable(writer, node):
-    if node.translate:
-        writer.warn('i18n system used, make sure to install translations', node)
-        writer.write('_(')
-    if node.literal is not None:
-        writer.literal(node.literal)
-    else:
-        writer.variable(node.var)
-    if node.translate:
-        writer.write(')')
-
-
-@node(VariableNode)
-def variable_node(writer, node):
-    writer.start_variable()
-    if node.filter_expression.var.var == 'block.super' \
-       and not node.filter_expression.filters:
-        writer.write('super()')
-    else:
-        writer.node(node.filter_expression)
-    writer.end_variable()
-
-
-@node(FilterExpression)
-def filter_expression(writer, node):
-    writer.node(node.var)
-    writer.filters(node.filters)
-
-
-@node(core_tags.CommentNode)
-def comment_tag(writer, node):
-    pass
-
-
-@node(core_tags.DebugNode)
-def debug_tag(writer, node):
-    writer.warn('Debug tag detected.  Make sure to add a global function '
-                'called debug to the namespace.', node=node)
-    writer.print_expr('debug()')
-
-
-@node(core_tags.ForNode)
-def for_loop(writer, node):
-    writer.start_block()
-    writer.write('for ')
-    for idx, var in enumerate(node.loopvars):
-        if idx:
-            writer.write(', ')
-        writer.variable(var)
-    writer.write(' in ')
-    if node.is_reversed:
-        writer.write('(')
-    writer.node(node.sequence)
-    if node.is_reversed:
-        writer.write(')|reverse')
-    writer.end_block()
-    writer.enter_loop()
-    writer.body(node.nodelist_loop)
-    writer.leave_loop()
-    writer.tag('endfor')
-
-
-@node(core_tags.IfNode)
-def if_condition(writer, node):
-    writer.start_block()
-    writer.write('if ')
-    join_with = 'and'
-    if node.link_type == core_tags.IfNode.LinkTypes.or_:
-        join_with = 'or'
-    
-    for idx, (ifnot, expr) in enumerate(node.bool_exprs):
-        if idx:
-            writer.write(' %s ' % join_with)
-        if ifnot:
-            writer.write('not ')
-        writer.node(expr)
-    writer.end_block()
-    writer.body(node.nodelist_true)
-    if node.nodelist_false:
-        writer.tag('else')
-        writer.body(node.nodelist_false)
-    writer.tag('endif')
-
-
-@node(core_tags.IfEqualNode)
-def if_equal(writer, node):
-    writer.start_block()
-    writer.write('if ')
-    writer.node(node.var1)
-    if node.negate:
-        writer.write(' != ')
-    else:
-        writer.write(' == ')
-    writer.node(node.var2)
-    writer.end_block()
-    writer.body(node.nodelist_true)
-    if node.nodelist_false:
-        writer.tag('else')
-        writer.body(node.nodelist_false)
-    writer.tag('endif')
-
-
-@node(loader_tags.BlockNode)
-def block(writer, node):
-    writer.tag('block ' + node.name.replace('-', '_').rstrip('_'))
-    while node.parent is not None:
-        node = node.parent
-    writer.body(node.nodelist)
-    writer.tag('endblock')
-
-
-@node(loader_tags.ExtendsNode)
-def extends(writer, node):
-    writer.start_block()
-    writer.write('extends ')
-    if node.parent_name_expr:
-        writer.node(node.parent_name_expr)
-    else:
-        writer.literal(node.parent_name)
-    writer.end_block()
-    writer.body(node.nodelist)
-
-
-@node(loader_tags.ConstantIncludeNode)
-@node(loader_tags.IncludeNode)
-def include(writer, node):
-    writer.start_block()
-    writer.write('include ')
-    if hasattr(node, 'template'):
-        writer.literal(node.template.name)
-    else:
-        writer.node(node.template_name)
-    writer.end_block()
-
-
-@node(core_tags.CycleNode)
-def cycle(writer, node):
-    if not writer.in_loop:
-        writer.warn('Untranslatable free cycle (cycle outside loop)', node=node)
-        return
-    if node.variable_name is not None:
-        writer.start_block()
-        writer.write('set %s = ' % node.variable_name)
-    else:
-        writer.start_variable()
-    writer.write('loop.cycle(')
-    for idx, var in enumerate(node.raw_cycle_vars):
-        if idx:
-            writer.write(', ')
-        writer.node(var)
-    writer.write(')')
-    if node.variable_name is not None:
-        writer.end_block()
-    else:
-        writer.end_variable()
-
-
-@node(core_tags.FilterNode)
-def filter(writer, node):
-    writer.start_block()
-    writer.write('filter ')
-    writer.filters(node.filter_expr.filters, True)
-    writer.end_block()
-    writer.body(node.nodelist)
-    writer.tag('endfilter')
-
-
-@node(core_tags.AutoEscapeControlNode)
-def autoescape_control(writer, node):
-    original = writer.autoescape
-    writer.autoescape = node.setting
-    writer.body(node.nodelist)
-    writer.autoescape = original
-
-
-@node(core_tags.SpacelessNode)
-def spaceless(writer, node):
-    original = writer.spaceless
-    writer.spaceless = True
-    writer.warn('entering spaceless mode with different semantics', node)
-    # do the initial stripping
-    nodelist = list(node.nodelist)
-    if nodelist:
-        if isinstance(nodelist[0], TextNode):
-            nodelist[0] = TextNode(nodelist[0].s.lstrip())
-        if isinstance(nodelist[-1], TextNode):
-            nodelist[-1] = TextNode(nodelist[-1].s.rstrip())
-    writer.body(nodelist)
-    writer.spaceless = original
-
-
-@node(core_tags.TemplateTagNode)
-def template_tag(writer, node):
-    tag = {
-        'openblock':            writer.block_start_string,
-        'closeblock':           writer.block_end_string,
-        'openvariable':         writer.variable_start_string,
-        'closevariable':        writer.variable_end_string,
-        'opencomment':          writer.comment_start_string,
-        'closecomment':         writer.comment_end_string,
-        'openbrace':            '{',
-        'closebrace':           '}'
-    }.get(node.tagtype)
-    if tag:
-        writer.start_variable()
-        writer.literal(tag)
-        writer.end_variable()
-
-
-@node(core_tags.URLNode)
-def url_tag(writer, node):
-    writer.warn('url node used.  make sure to provide a proper url() '
-                'function', node)
-    if node.asvar:
-        writer.start_block()
-        writer.write('set %s = ' % node.asvar)
-    else:
-        writer.start_variable()
-    autoescape = writer.autoescape
-    writer.write('url(')
-    writer.literal(node.view_name)
-    for arg in node.args:
-        writer.write(', ')
-        writer.node(arg)
-    for key, arg in node.kwargs.items():
-        writer.write(', %s=' % key)
-        writer.node(arg)
-    writer.write(')')
-    if node.asvar:
-        writer.end_block()
-    else:
-        writer.end_variable()
-
-
-@node(core_tags.WidthRatioNode)
-def width_ratio(writer, node):
-    writer.warn('widthratio expanded into formula.  You may want to provide '
-                'a helper function for this calculation', node)
-    writer.start_variable()
-    writer.write('(')
-    writer.node(node.val_expr)
-    writer.write(' / ')
-    writer.node(node.max_expr)
-    writer.write(' * ')
-    writer.write(str(int(node.max_width)))
-    writer.write(')|round|int')
-    writer.end_variable(always_safe=True)
-
-
-@node(core_tags.WithNode)
-def with_block(writer, node):
-    writer.warn('with block expanded into set statement.  This could cause '
-                'variables following that block to be overriden.', node)
-    writer.start_block()
-    writer.write('set %s = ' % node.name)
-    writer.node(node.var)
-    writer.end_block()
-    writer.body(node.nodelist)
-
-
-@node(core_tags.RegroupNode)
-def regroup(writer, node):
-    if node.expression.var.literal:
-        writer.warn('literal in groupby filter used.   Behavior in that '
-                    'situation is undefined and translation is skipped.', node)
-        return
-    elif node.expression.filters:
-        writer.warn('filters in groupby filter used.   Behavior in that '
-                    'situation is undefined which is most likely a bug '
-                    'in your code.  Filters were ignored.', node)
-    writer.start_block()
-    writer.write('set %s = ' % node.var_name)
-    writer.node(node.target)
-    writer.write('|groupby(')
-    writer.literal(node.expression.var.var)
-    writer.write(')')
-    writer.end_block()
-
-
-@node(core_tags.LoadNode)
-def warn_load(writer, node):
-    writer.warn('load statement used which was ignored on conversion', node)
-
-
-@node(i18n_tags.GetAvailableLanguagesNode)
-def get_available_languages(writer, node):
-    writer.warn('make sure to provide a get_available_languages function', node)
-    writer.tag('set %s = get_available_languages()' %
-               writer.translate_variable_name(node.variable))
-
-
-@node(i18n_tags.GetCurrentLanguageNode)
-def get_current_language(writer, node):
-    writer.warn('make sure to provide a get_current_language function', node)
-    writer.tag('set %s = get_current_language()' %
-               writer.translate_variable_name(node.variable))
-
-
-@node(i18n_tags.GetCurrentLanguageBidiNode)
-def get_current_language_bidi(writer, node):
-    writer.warn('make sure to provide a get_current_language_bidi function', node)
-    writer.tag('set %s = get_current_language_bidi()' %
-               writer.translate_variable_name(node.variable))
-
-
-@node(i18n_tags.TranslateNode)
-def simple_gettext(writer, node):
-    writer.warn('i18n system used, make sure to install translations', node)
-    writer.start_variable()
-    writer.write('_(')
-    writer.node(node.value)
-    writer.write(')')
-    writer.end_variable()
-
-
-@node(i18n_tags.BlockTranslateNode)
-def translate_block(writer, node):
-    first_var = []
-    variables = set()
-
-    def touch_var(name):
-        variables.add(name)
-        if not first_var:
-            first_var.append(name)
-
-    def dump_token_list(tokens):
-        for token in tokens:
-            if token.token_type == TOKEN_TEXT:
-                writer.write(token.contents)
-            elif token.token_type == TOKEN_VAR:
-                writer.print_expr(token.contents)
-                touch_var(token.contents)
-
-    writer.warn('i18n system used, make sure to install translations', node)
-    writer.start_block()
-    writer.write('trans')
-    idx = -1
-    for idx, (key, var) in enumerate(node.extra_context.items()):
-        if idx:
-            writer.write(',')
-        writer.write(' %s=' % key)
-        touch_var(key)
-        writer.node(var.filter_expression)
-
-    have_plural = False
-    plural_var = None
-    if node.plural and node.countervar and node.counter:
-        have_plural = True
-        plural_var = node.countervar
-        if plural_var not in variables:
-            if idx > -1:
-                writer.write(',')
-            touch_var(plural_var)
-            writer.write(' %s=' % plural_var)
-            writer.node(node.counter)
-
-    writer.end_block()
-    dump_token_list(node.singular)
-    if node.plural and node.countervar and node.counter:
-        writer.start_block()
-        writer.write('pluralize')
-        if node.countervar != first_var[0]:
-            writer.write(' ' + node.countervar)
-        writer.end_block()
-        dump_token_list(node.plural)
-    writer.tag('endtrans')
-
-@node("SimpleNode")
-def simple_tag(writer, node):
-    """Check if the simple tag exist as a filter in """
-    name = node.tag_name
-    if writer.env and \
-       name not in writer.env.filters and \
-       name not in writer._filters_warned:
-        writer._filters_warned.add(name)
-        writer.warn('Filter %s probably doesn\'t exist in Jinja' %
-                    name)
-        
-    if not node.vars_to_resolve:
-        # No argument, pass the request
-        writer.start_variable()
-        writer.write('request|')
-        writer.write(name)
-        writer.end_variable()
-        return 
-    
-    first_var =  node.vars_to_resolve[0]
-    args = node.vars_to_resolve[1:]
-    writer.start_variable()
-    
-    # Copied from Writer.filters()
-    writer.node(first_var)
-    
-    writer.write('|')
-    writer.write(name)
-    if args:
-        writer.write('(')
-        for idx, var in enumerate(args):
-            if idx:
-                writer.write(', ')
-            if var.var:
-                writer.node(var)
-            else:
-                writer.literal(var.literal)
-        writer.write(')')
-    writer.end_variable()   
-
-# get rid of node now, it shouldn't be used normally
-del node
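
The `node()` decorator above implements a small class-to-handler registry that `Writer.node()` consults when dispatching template nodes. A standalone sketch of the same pattern (all names here are illustrative, not the removed module's API):

```python
# Minimal sketch of the registry/dispatch pattern used by django2jinja.
_handlers = {}

def node(cls):
    """Register the decorated function as the handler for `cls`."""
    def proxy(f):
        _handlers[cls] = f
        return f
    return proxy

class TextNode(object):
    def __init__(self, s):
        self.s = s

@node(TextNode)
def write_text(out, n):
    out.append(n.s)

def dispatch(out, n):
    # Exact-type lookup, mirroring the `type(node) is cls` check above.
    handler = _handlers.get(type(n))
    if handler is None:
        raise TypeError('untranslatable node %r' % n)
    handler(out, n)

out = []
dispatch(out, TextNode('hello'))
print(out)  # ['hello']
```
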
diff --git a/slider-agent/src/main/python/jinja2/ext/django2jinja/example.py b/slider-agent/src/main/python/jinja2/ext/django2jinja/example.py
deleted file mode 100644
index 2d4ab9a..0000000
--- a/slider-agent/src/main/python/jinja2/ext/django2jinja/example.py
+++ /dev/null
@@ -1,7 +0,0 @@
-from django.conf import settings
-settings.configure(TEMPLATE_DIRS=['templates'], TEMPLATE_DEBUG=True)
-
-from django2jinja import convert_templates, Writer
-
-writer = Writer(use_jinja_autoescape=True)
-convert_templates('converted', writer=writer)
diff --git a/slider-agent/src/main/python/jinja2/ext/django2jinja/templates/index.html b/slider-agent/src/main/python/jinja2/ext/django2jinja/templates/index.html
deleted file mode 100644
index d0fbe38..0000000
--- a/slider-agent/src/main/python/jinja2/ext/django2jinja/templates/index.html
+++ /dev/null
@@ -1,58 +0,0 @@
-{% extends "layout.html" %}
-{% load i18n %}
-{% block title %}Foo{% endblock %}
-{% block page-body %}
-  {{ block.super }}
-  Hello {{ name|cut:"d" }}!
-
-  {% for item in seq reversed %}
-    {% if forloop.index|divisibleby:2 %}
-      <li class="{% cycle 'a' 'b' %}">{{ item }}</li>
-    {% endif %}
-  {% endfor %}
-  {% ifequal foo bar %}
-    haha
-  {% else %}
-    hmm
-  {% endifequal %}
-  {% filter upper %}
-    {% include "subtemplate.html" %}
-    {% include foo %}
-  {% endfilter %}
-  {% spaceless %}
-    Hello World
-      {{ foo }}
-    Hmm
-  {% endspaceless %}
-  {% templatetag opencomment %}...{% templatetag closecomment %}
-  {% url foo a, b, c=d %}
-  {% url foo a, b, c=d as hmm %}
-
-  {% with object.value as value %}
-    <img src='bar.gif' height='10' width='{% widthratio value 200 100 %}'>
-  {% endwith %}
-
-  <pre>{% debug %}</pre>
-
-  {% blocktrans with book|title as book_t and author|title as author_t %}
-  This is {{ book_t }} by {{ author_t }}
-  {% endblocktrans %}
-
-  {% blocktrans count list|length as counter %}
-  There is only one {{ name }} object.
-  {% plural %}
-  There are {{ counter }} {{ name }} objects.
-  {% endblocktrans %}
-
-  {% blocktrans with name|escape as name count list|length as counter %}
-  There is only one {{ name }} object.
-  {% plural %}
-  There are {{ counter }} {{ name }} objects.
-  {% endblocktrans %}
-
-  {% blocktrans %}This string will have {{ value }} inside.{% endblocktrans %}
-
-  <p>{% trans "This is the title." %}</p>
-
-  {% regroup people by gender as grouped %}
-{% endblock %}
diff --git a/slider-agent/src/main/python/jinja2/ext/django2jinja/templates/layout.html b/slider-agent/src/main/python/jinja2/ext/django2jinja/templates/layout.html
deleted file mode 100644
index 3f21a12..0000000
--- a/slider-agent/src/main/python/jinja2/ext/django2jinja/templates/layout.html
+++ /dev/null
@@ -1,4 +0,0 @@
-<title>{% block title %}{% endblock %}</title>
-<div class="body">
-  {% block page-body %}{% endblock %}
-</div>
diff --git a/slider-agent/src/main/python/jinja2/ext/django2jinja/templates/subtemplate.html b/slider-agent/src/main/python/jinja2/ext/django2jinja/templates/subtemplate.html
deleted file mode 100644
index 980a0d5..0000000
--- a/slider-agent/src/main/python/jinja2/ext/django2jinja/templates/subtemplate.html
+++ /dev/null
@@ -1 +0,0 @@
-Hello World!
diff --git a/slider-agent/src/main/python/jinja2/ext/djangojinja2.py b/slider-agent/src/main/python/jinja2/ext/djangojinja2.py
deleted file mode 100644
index d24d164..0000000
--- a/slider-agent/src/main/python/jinja2/ext/djangojinja2.py
+++ /dev/null
@@ -1,86 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-    djangojinja2
-    ~~~~~~~~~~~~
-
-    Adds support for Jinja2 to Django.
-
-    Configuration variables:
-
-    ======================= =============================================
-    Key                     Description
-    ======================= =============================================
-    `JINJA2_TEMPLATE_DIRS`  List of template folders
-    `JINJA2_EXTENSIONS`     List of Jinja2 extensions to use
-    `JINJA2_CACHE_SIZE`     The size of the Jinja2 template cache.
-    ======================= =============================================
-
-    :copyright: (c) 2009 by the Jinja Team.
-    :license: BSD.
-"""
-from itertools import chain
-from django.conf import settings
-from django.http import HttpResponse
-from django.core.exceptions import ImproperlyConfigured
-from django.template.context import get_standard_processors
-from django.template import TemplateDoesNotExist
-from jinja2 import Environment, FileSystemLoader, TemplateNotFound
-from jinja2.defaults import DEFAULT_NAMESPACE
-
-
-# the environment is unconfigured until the first template is loaded.
-_jinja_env = None
-
-
-def get_env():
-    """Get the Jinja2 env and initialize it if necessary."""
-    global _jinja_env
-    if _jinja_env is None:
-        _jinja_env = create_env()
-    return _jinja_env
-
-
-def create_env():
-    """Create a new Jinja2 environment."""
-    searchpath = list(settings.JINJA2_TEMPLATE_DIRS)
-    return Environment(loader=FileSystemLoader(searchpath),
-                       auto_reload=settings.TEMPLATE_DEBUG,
-                       cache_size=getattr(settings, 'JINJA2_CACHE_SIZE', 50),
-                       extensions=getattr(settings, 'JINJA2_EXTENSIONS', ()))
-
-
-def get_template(template_name, globals=None):
-    """Load a template."""
-    try:
-        return get_env().get_template(template_name, globals=globals)
-    except TemplateNotFound, e:
-        raise TemplateDoesNotExist(str(e))
-
-
-def select_template(templates, globals=None):
-    """Try to load one of the given templates."""
-    env = get_env()
-    for template in templates:
-        try:
-            return env.get_template(template, globals=globals)
-        except TemplateNotFound:
-            continue
-    raise TemplateDoesNotExist(', '.join(templates))
-
-
-def render_to_string(template_name, context=None, request=None,
-                     processors=None):
-    """Render a template into a string."""
-    context = dict(context or {})
-    if request is not None:
-        context['request'] = request
-        for processor in chain(get_standard_processors(), processors or ()):
-            context.update(processor(request))
-    return get_template(template_name).render(context)
-
-
-def render_to_response(template_name, context=None, request=None,
-                       processors=None, mimetype=None):
-    """Render a template into a response object."""
-    return HttpResponse(render_to_string(template_name, context, request,
-                                         processors), mimetype=mimetype)
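
For reference, the removed bridge was driven roughly as below; a minimal sketch assuming Django and Jinja2 versions contemporary with this code, a local `templates/` directory, and a hypothetical `hello.html` template:

```python
# Hypothetical usage of the removed djangojinja2 bridge.
from django.conf import settings
settings.configure(JINJA2_TEMPLATE_DIRS=['templates'],
                   TEMPLATE_DEBUG=False)

from djangojinja2 import render_to_string

# Renders templates/hello.html with a Jinja2 environment that is
# created lazily on first use (see get_env/create_env above).
html = render_to_string('hello.html', {'name': 'world'})
```
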
diff --git a/slider-agent/src/main/python/jinja2/ext/inlinegettext.py b/slider-agent/src/main/python/jinja2/ext/inlinegettext.py
deleted file mode 100644
index cf4ed5e..0000000
--- a/slider-agent/src/main/python/jinja2/ext/inlinegettext.py
+++ /dev/null
@@ -1,78 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-    Inline Gettext
-    ~~~~~~~~~~~~~~
-
-    An example extension for Jinja2 that supports inline gettext calls.
-    Requires the i18n extension to be loaded.
-
-    :copyright: (c) 2009 by the Jinja Team.
-    :license: BSD.
-"""
-import re
-from jinja2.ext import Extension
-from jinja2.lexer import Token, count_newlines
-from jinja2.exceptions import TemplateSyntaxError
-
-
-_outside_re = re.compile(r'\\?(gettext|_)\(')
-_inside_re = re.compile(r'\\?[()]')
-
-
-class InlineGettext(Extension):
-    """This extension implements support for inline gettext blocks::
-
-        <h1>_(Welcome)</h1>
-        <p>_(This is a paragraph)</p>
-
-    Requires the i18n extension to be loaded and configured.
-    """
-
-    def filter_stream(self, stream):
-        paren_stack = 0
-
-        for token in stream:
-            if token.type != 'data':
-                yield token
-                continue
-
-            pos = 0
-            lineno = token.lineno
-
-            while 1:
-                if not paren_stack:
-                    match = _outside_re.search(token.value, pos)
-                else:
-                    match = _inside_re.search(token.value, pos)
-                if match is None:
-                    break
-                new_pos = match.start()
-                if new_pos > pos:
-                    preval = token.value[pos:new_pos]
-                    yield Token(lineno, 'data', preval)
-                    lineno += count_newlines(preval)
-                gtok = match.group()
-                if gtok[0] == '\\':
-                    yield Token(lineno, 'data', gtok[1:])
-                elif not paren_stack:
-                    yield Token(lineno, 'block_begin', None)
-                    yield Token(lineno, 'name', 'trans')
-                    yield Token(lineno, 'block_end', None)
-                    paren_stack = 1
-                else:
-                    if gtok == '(' or paren_stack > 1:
-                        yield Token(lineno, 'data', gtok)
-                    paren_stack += gtok == ')' and -1 or 1
-                    if not paren_stack:
-                        yield Token(lineno, 'block_begin', None)
-                        yield Token(lineno, 'name', 'endtrans')
-                        yield Token(lineno, 'block_end', None)
-                pos = match.end()
-
-            if pos < len(token.value):
-                yield Token(lineno, 'data', token.value[pos:])
-
-        if paren_stack:
-            raise TemplateSyntaxError('unclosed gettext expression',
-                                      token.lineno, stream.name,
-                                      stream.filename)
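
A sketch of how the removed extension was meant to be wired up, assuming it is importable as `inlinegettext` (a path assumption) and that Jinja2's bundled i18n extension is loaded alongside it:

```python
from jinja2 import Environment
from inlinegettext import InlineGettext  # import path is an assumption

env = Environment(extensions=['jinja2.ext.i18n', InlineGettext])
env.install_null_translations()  # provided by the i18n extension

# _(...) in plain template data is rewritten into {% trans %} blocks.
print(env.from_string('<h1>_(Welcome)</h1>').render())  # <h1>Welcome</h1>
```
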
diff --git a/slider-agent/src/main/python/jinja2/ext/jinja.el b/slider-agent/src/main/python/jinja2/ext/jinja.el
deleted file mode 100644
index 401ba29..0000000
--- a/slider-agent/src/main/python/jinja2/ext/jinja.el
+++ /dev/null
@@ -1,213 +0,0 @@
-;;; jinja.el --- Jinja mode highlighting
-;;
-;; Author: Georg Brandl
-;; Copyright: (c) 2009 by the Jinja Team
-;; Last modified: 2008-05-22 23:04 by gbr
-;;
-;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
-;;
-;;; Commentary:
-;;
-;; Mostly ripped off django-mode by Lennart Borgman.
-;;
-;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
-;;
-;; This program is free software; you can redistribute it and/or
-;; modify it under the terms of the GNU General Public License as
-;; published by the Free Software Foundation; either version 2, or
-;; (at your option) any later version.
-;;
-;; This program is distributed in the hope that it will be useful,
-;; but WITHOUT ANY WARRANTY; without even the implied warranty of
-;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-;; General Public License for more details.
-;;
-;; You should have received a copy of the GNU General Public License
-;; along with this program; see the file COPYING.  If not, write to
-;; the Free Software Foundation, Inc., 51 Franklin Street, Fifth
-;; Floor, Boston, MA 02110-1301, USA.
-;;
-;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
-;;
-;;; Code:
-
-(defconst jinja-font-lock-keywords
-  (list
-;   (cons (rx "{% comment %}" (submatch (0+ anything))
-;             "{% endcomment %}") (list 1 font-lock-comment-face))
-   '("{# ?\\(.*?\\) ?#}" . (1 font-lock-comment-face))
-   '("{%-?\\|-?%}\\|{{\\|}}" . font-lock-preprocessor-face)
-   '("{#\\|#}" . font-lock-comment-delimiter-face)
-   ;; first word in a block is a command
-   '("{%-?[ \t\n]*\\([a-zA-Z_]+\\)" . (1 font-lock-keyword-face))
-   ;; variables
-   '("\\({{ ?\\)\\([^|]*?\\)\\(|.*?\\)? ?}}" . (1 font-lock-variable-name-face))
-   ;; keywords and builtins
-   (cons (rx word-start
-             (or "in" "as" "recursive" "not" "and" "or" "if" "else"
-                 "import" "with" "without" "context")
-             word-end)
-         font-lock-keyword-face)
-   (cons (rx word-start
-             (or "true" "false" "none" "loop" "self" "super")
-             word-end)
-         font-lock-builtin-face)
-   ;; tests
-   '("\\(is\\)[ \t]*\\(not\\)[ \t]*\\([a-zA-Z_]+\\)"
-     (1 font-lock-keyword-face) (2 font-lock-keyword-face)
-     (3 font-lock-function-name-face))
-   ;; builtin filters
-   (cons (rx
-          "|" (* space)
-          (submatch
-           (or "abs" "batch" "capitalize" "capture" "center" "count" "default"
-               "dformat" "dictsort" "e" "escape" "filesizeformat" "first"
-               "float" "format" "getattribute" "getitem" "groupby" "indent"
-               "int" "join" "jsonencode" "last" "length" "lower" "markdown"
-               "pprint" "random" "replace" "reverse" "round" "rst" "slice"
-               "sort" "string" "striptags" "sum" "textile" "title" "trim"
-               "truncate" "upper" "urlencode" "urlize" "wordcount" "wordwrap"
-               "xmlattr")))
-         (list 1 font-lock-builtin-face))
-   )
-   "Minimal highlighting expressions for Jinja mode")
-
-(define-derived-mode jinja-mode nil "Jinja"
-  "Simple Jinja mode for use with `mumamo-mode'.
-This mode only provides syntax highlighting."
-  ;;(set (make-local-variable 'comment-start) "{#")
-  ;;(set (make-local-variable 'comment-end)   "#}")
-  (setq font-lock-defaults '(jinja-font-lock-keywords)))
-
-;; mumamo stuff
-
-(when (require 'mumamo nil t)
-
-  (defun mumamo-chunk-jinja3(pos min max)
-    "Find {# ... #}.  Return range and `jinja-mode'.
-See `mumamo-find-possible-chunk' for POS, MIN and MAX."
-    (mumamo-find-possible-chunk pos min max
-                                'mumamo-search-bw-exc-start-jinja3
-                                'mumamo-search-bw-exc-end-jinja3
-                                'mumamo-search-fw-exc-start-jinja3
-                                'mumamo-search-fw-exc-end-jinja3))
-
-  (defun mumamo-chunk-jinja2(pos min max)
-    "Find {{ ... }}.  Return range and `jinja-mode'.
-See `mumamo-find-possible-chunk' for POS, MIN and MAX."
-    (mumamo-find-possible-chunk pos min max
-                                'mumamo-search-bw-exc-start-jinja2
-                                'mumamo-search-bw-exc-end-jinja2
-                                'mumamo-search-fw-exc-start-jinja2
-                                'mumamo-search-fw-exc-end-jinja2))
-
-  (defun mumamo-chunk-jinja (pos min max)
-    "Find {% ... %}.  Return range and `jinja-mode'.
-See `mumamo-find-possible-chunk' for POS, MIN and MAX."
-    (mumamo-find-possible-chunk pos min max
-                                'mumamo-search-bw-exc-start-jinja
-                                'mumamo-search-bw-exc-end-jinja
-                                'mumamo-search-fw-exc-start-jinja
-                                'mumamo-search-fw-exc-end-jinja))
-
-  (defun mumamo-search-bw-exc-start-jinja (pos min)
-    "Helper for `mumamo-chunk-jinja'.
-POS is where to start search and MIN is where to stop."
-    (let ((exc-start (mumamo-chunk-start-bw-str-inc pos min "{%")))
-      (and exc-start
-           (<= exc-start pos)
-           (cons exc-start 'jinja-mode))))
-
-  (defun mumamo-search-bw-exc-start-jinja2(pos min)
-    "Helper for `mumamo-chunk-jinja2'.
-POS is where to start search and MIN is where to stop."
-    (let ((exc-start (mumamo-chunk-start-bw-str-inc pos min "{{")))
-      (and exc-start
-           (<= exc-start pos)
-           (cons exc-start 'jinja-mode))))
-
-  (defun mumamo-search-bw-exc-start-jinja3(pos min)
-    "Helper for `mumamo-chunk-jinja3'.
-POS is where to start search and MIN is where to stop."
-    (let ((exc-start (mumamo-chunk-start-bw-str-inc pos min "{#")))
-      (and exc-start
-           (<= exc-start pos)
-           (cons exc-start 'jinja-mode))))
-
-  (defun mumamo-search-bw-exc-end-jinja (pos min)
-    "Helper for `mumamo-chunk-jinja'.
-POS is where to start search and MIN is where to stop."
-    (mumamo-chunk-end-bw-str-inc pos min "%}"))
-
-  (defun mumamo-search-bw-exc-end-jinja2(pos min)
-    "Helper for `mumamo-chunk-jinja2'.
-POS is where to start search and MIN is where to stop."
-    (mumamo-chunk-end-bw-str-inc pos min "}}"))
-
-  (defun mumamo-search-bw-exc-end-jinja3(pos min)
-    "Helper for `mumamo-chunk-jinja3'.
-POS is where to start search and MIN is where to stop."
-    (mumamo-chunk-end-bw-str-inc pos min "#}"))
-
-  (defun mumamo-search-fw-exc-start-jinja (pos max)
-    "Helper for `mumamo-chunk-jinja'.
-POS is where to start search and MAX is where to stop."
-    (mumamo-chunk-start-fw-str-inc pos max "{%"))
-
-  (defun mumamo-search-fw-exc-start-jinja2(pos max)
-    "Helper for `mumamo-chunk-jinja2'.
-POS is where to start search and MAX is where to stop."
-    (mumamo-chunk-start-fw-str-inc pos max "{{"))
-
-  (defun mumamo-search-fw-exc-start-jinja3(pos max)
-    "Helper for `mumamo-chunk-jinja3'.
-POS is where to start search and MAX is where to stop."
-    (mumamo-chunk-start-fw-str-inc pos max "{#"))
-
-  (defun mumamo-search-fw-exc-end-jinja (pos max)
-    "Helper for `mumamo-chunk-jinja'.
-POS is where to start search and MAX is where to stop."
-    (mumamo-chunk-end-fw-str-inc pos max "%}"))
-
-  (defun mumamo-search-fw-exc-end-jinja2(pos max)
-    "Helper for `mumamo-chunk-jinja2'.
-POS is where to start search and MAX is where to stop."
-    (mumamo-chunk-end-fw-str-inc pos max "}}"))
-
-  (defun mumamo-search-fw-exc-end-jinja3(pos max)
-    "Helper for `mumamo-chunk-jinja3'.
-POS is where to start search and MAX is where to stop."
-    (mumamo-chunk-end-fw-str-inc pos max "#}"))
-
-;;;###autoload
-  (define-mumamo-multi-major-mode jinja-html-mumamo
-    "Turn on multiple major modes for Jinja with main mode `html-mode'.
-This also covers inlined style and javascript."
-    ("Jinja HTML Family" html-mode
-     (mumamo-chunk-jinja
-      mumamo-chunk-jinja2
-      mumamo-chunk-jinja3
-      mumamo-chunk-inlined-style
-      mumamo-chunk-inlined-script
-      mumamo-chunk-style=
-      mumamo-chunk-onjs=
-      )))
-
-;;;###autoload
-  (define-mumamo-multi-major-mode jinja-nxhtml-mumamo
-    "Turn on multiple major modes for Jinja with main mode `nxhtml-mode'.
-This also covers inlined style and javascript."
-    ("Jinja nXhtml Family" nxhtml-mode
-     (mumamo-chunk-jinja
-      mumamo-chunk-jinja2
-      mumamo-chunk-jinja3
-      mumamo-chunk-inlined-style
-      mumamo-chunk-inlined-script
-      mumamo-chunk-style=
-      mumamo-chunk-onjs=
-      )))
-  )
-
-(provide 'jinja)
-;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
-;;; jinja.el ends here
diff --git a/slider-agent/src/main/python/kazoo/client.py b/slider-agent/src/main/python/kazoo/client.py
index a315489..47545ee 100644
--- a/slider-agent/src/main/python/kazoo/client.py
+++ b/slider-agent/src/main/python/kazoo/client.py
@@ -220,7 +220,6 @@
         elif type(command_retry) is KazooRetry:
             self.retry = command_retry
 
-
         if type(self._conn_retry) is KazooRetry:
             if self.handler.sleep_func != self._conn_retry.sleep_func:
                 raise ConfigurationError("Retry handler and event handler "
@@ -228,19 +227,21 @@
 
         if type(self.retry) is KazooRetry:
             if self.handler.sleep_func != self.retry.sleep_func:
-                raise ConfigurationError("Command retry handler and event "
-                                         "handler must use the same sleep func")
+                raise ConfigurationError(
+                    "Command retry handler and event handler "
+                    "must use the same sleep func")
 
         if self.retry is None or self._conn_retry is None:
             old_retry_keys = dict(_RETRY_COMPAT_DEFAULTS)
             for key in old_retry_keys:
                 try:
                     old_retry_keys[key] = kwargs.pop(key)
-                    warnings.warn('Passing retry configuration param %s to the'
-                            ' client directly is deprecated, please pass a'
-                            ' configured retry object (using param %s)' % (
-                                key, _RETRY_COMPAT_MAPPING[key]),
-                            DeprecationWarning, stacklevel=2)
+                    warnings.warn(
+                        'Passing retry configuration param %s to the '
+                        'client directly is deprecated, please pass a '
+                        'configured retry object (using param %s)' % (
+                            key, _RETRY_COMPAT_MAPPING[key]),
+                        DeprecationWarning, stacklevel=2)
                 except KeyError:
                     pass
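
The loop above is a common deprecation shim: legacy kwargs are popped, a `DeprecationWarning` points at the replacement, and the values are folded into the new-style retry configuration. A generic sketch (the mapping entries are illustrative, not the exact `_RETRY_COMPAT_MAPPING` table):

```python
import warnings

_COMPAT = {'retry_delay': 'delay', 'retry_max_delay': 'max_delay'}

def pop_legacy(kwargs):
    new_style = {}
    for old, new in _COMPAT.items():
        if old in kwargs:
            warnings.warn('%s is deprecated, pass a configured retry '
                          'object (%s)' % (old, new),
                          DeprecationWarning, stacklevel=2)
            new_style[new] = kwargs.pop(old)
    return new_style

print(pop_legacy({'retry_delay': 0.2, 'hosts': 'h:2181'}))  # {'delay': 0.2}
```
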
 
@@ -258,12 +259,13 @@
                     **retry_keys)
 
         self._conn_retry.interrupt = lambda: self._stopped.is_set()
-        self._connection = ConnectionHandler(self, self._conn_retry.copy(),
-            logger=self.logger)
+        self._connection = ConnectionHandler(
+            self, self._conn_retry.copy(), logger=self.logger)
 
         # Every retry call should have its own copy of the retry helper
         # to avoid shared retry counts
         self._retry = self.retry
+
         def _retry(*args, **kwargs):
             return self._retry.copy()(*args, **kwargs)
         self.retry = _retry
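
The wrapper above exists so that every `client.retry(...)` call runs on a fresh copy of the retry helper; otherwise concurrent calls would share one mutable attempt counter. A standalone sketch of the idea with a hypothetical `Retry` class:

```python
class Retry(object):
    """Toy retry helper with per-instance attempt state."""
    def __init__(self, max_tries=3):
        self.max_tries = max_tries
        self.tries = 0  # mutable state that must not be shared

    def copy(self):
        return Retry(self.max_tries)

    def __call__(self, func, *args, **kwargs):
        while True:
            try:
                return func(*args, **kwargs)
            except Exception:
                self.tries += 1
                if self.tries >= self.max_tries:
                    raise

base = Retry()

def retry(func, *args, **kwargs):
    # Fresh copy per invocation, exactly like the _retry shim above.
    return base.copy()(func, *args, **kwargs)
```
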
@@ -282,7 +284,7 @@
         self.Semaphore = partial(Semaphore, self)
         self.ShallowParty = partial(ShallowParty, self)
 
-         # If we got any unhandled keywords, complain like python would
+        # If we got any unhandled keywords, complain like Python would
         if kwargs:
             raise TypeError('__init__() got unexpected keyword arguments: %s'
                             % (kwargs.keys(),))
@@ -433,7 +435,8 @@
             return
 
         if state in (KeeperState.CONNECTED, KeeperState.CONNECTED_RO):
-            self.logger.info("Zookeeper connection established, state: %s", state)
+            self.logger.info("Zookeeper connection established, "
+                             "state: %s", state)
             self._live.set()
             self._make_state_change(KazooState.CONNECTED)
         elif state in LOST_STATES:
@@ -510,12 +513,12 @@
         self._queue.append((request, async_object))
 
         # wake the connection, guarding against a race with close()
-        write_pipe = self._connection._write_pipe
-        if write_pipe is None:
+        write_sock = self._connection._write_sock
+        if write_sock is None:
             async_object.set_exception(ConnectionClosedError(
                 "Connection has been closed"))
         try:
-            os.write(write_pipe, b'\0')
+            write_sock.send(b'\0')
         except:
             async_object.set_exception(ConnectionClosedError(
                 "Connection has been closed"))
@@ -585,7 +588,7 @@
 
         self._stopped.set()
         self._queue.append((CloseInstance, None))
-        os.write(self._connection._write_pipe, b'\0')
+        self._connection._write_sock.send(b'\0')
         self._safe_close()
 
     def restart(self):
@@ -622,7 +625,7 @@
         if not self._live.is_set():
             raise ConnectionLoss("No connection to server")
 
-        peer = self._connection._socket.getpeername()
+        peer = self._connection._socket.getpeername()[:2]
         sock = self.handler.create_connection(
             peer, timeout=self._session_timeout / 1000.0)
         sock.sendall(cmd)
@@ -786,7 +789,7 @@
         """
         acl = acl or self.default_acl
         return self.create_async(path, value, acl=acl, ephemeral=ephemeral,
-            sequence=sequence, makepath=makepath).get()
+                                 sequence=sequence, makepath=makepath).get()
 
     def create_async(self, path, value=b"", acl=None, ephemeral=False,
                      sequence=False, makepath=False):
@@ -828,7 +831,8 @@
 
         @capture_exceptions(async_result)
         def do_create():
-            result = self._create_async_inner(path, value, acl, flags, trailing=sequence)
+            result = self._create_async_inner(
+                path, value, acl, flags, trailing=sequence)
             result.rawlink(create_completion)
 
         @capture_exceptions(async_result)
@@ -867,10 +871,13 @@
         return async_result
 
     def ensure_path(self, path, acl=None):
-        """Recursively create a path if it doesn't exist.
+        """Recursively create a path if it doesn't exist. Also return value indicates
+        if path already existed or had to be created.
 
         :param path: Path of node.
         :param acl: Permissions for node.
+        :returns: `True` if path existed, `False` otherwise.
+        :rtype: bool
 
         """
         return self.ensure_path_async(path, acl).get()
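
With that return value, callers can tell first-time creation apart from a pre-existing path. A minimal sketch, assuming a reachable ZooKeeper server on 127.0.0.1:2181 and the documented return semantics above:

```python
from kazoo.client import KazooClient

zk = KazooClient(hosts='127.0.0.1:2181')
zk.start()
existed = zk.ensure_path('/slider/demo')  # True if it was already there
if not existed:
    print('created /slider/demo')
zk.stop()
```
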
@@ -1291,6 +1298,13 @@
     Transactions are not thread-safe and should not be accessed from
     multiple threads at once.
 
+    .. note::
+
+        The ``committed`` attribute only indicates whether this
+        transaction has been sent to Zookeeper and is used to prevent
+        duplicate commits of the same transaction. The result should be
+        checked to determine if the transaction executed as desired.
+
     .. versionadded:: 0.6
         Requires Zookeeper 3.4+
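
In other words, inspect the per-operation results returned by `commit()` rather than relying on `committed`. A sketch, assuming a started client `zk` as in the example above and a hypothetical `/slider/txn-demo` node:

```python
t = zk.transaction()
t.create('/slider/txn-demo', b'v1')
t.set_data('/slider/txn-demo', b'v2')
results = t.commit()  # one result (or exception instance) per operation
ok = not any(isinstance(r, Exception) for r in results)
```
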
 
diff --git a/slider-agent/src/main/python/kazoo/handlers/threading.py b/slider-agent/src/main/python/kazoo/handlers/threading.py
index 3ca9a8f..684a6b0 100644
--- a/slider-agent/src/main/python/kazoo/handlers/threading.py
+++ b/slider-agent/src/main/python/kazoo/handlers/threading.py
@@ -35,7 +35,7 @@
 log = logging.getLogger(__name__)
 
 
-class TimeoutError(Exception):
+class KazooTimeoutError(Exception):
     pass
 
 
@@ -104,7 +104,7 @@
                     raise self._exception
 
             # if we get to this point we timeout
-            raise TimeoutError()
+            raise KazooTimeoutError()
 
     def get_nowait(self):
         """Return the value or raise the exception without blocking.
@@ -174,7 +174,7 @@
 
     """
     name = "sequential_threading_handler"
-    timeout_exception = TimeoutError
+    timeout_exception = KazooTimeoutError
     sleep_func = staticmethod(time.sleep)
     queue_impl = Queue.Queue
     queue_empty = Queue.Empty
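
The rename matters because Python 3 ships a builtin `TimeoutError`, so importing kazoo's exception no longer shadows it. A sketch, assuming this kazoo tree is on the path:

```python
from kazoo.handlers.threading import KazooTimeoutError

try:
    raise KazooTimeoutError('no response within deadline')
except KazooTimeoutError as exc:  # the builtin TimeoutError stays untouched
    print('kazoo timeout handled: %s' % exc)
```
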
diff --git a/slider-agent/src/main/python/kazoo/handlers/utils.py b/slider-agent/src/main/python/kazoo/handlers/utils.py
index 60d6404..93cfdb5 100644
--- a/slider-agent/src/main/python/kazoo/handlers/utils.py
+++ b/slider-agent/src/main/python/kazoo/handlers/utils.py
@@ -8,7 +8,9 @@
     HAS_FNCTL = False
 import functools
 import os
-
+import sys
+import socket
+import errno
 
 def _set_fd_cloexec(fd):
     flags = fcntl.fcntl(fd, fcntl.F_GETFD)
@@ -21,18 +23,43 @@
         _set_fd_cloexec(sock)
     return sock
 
+def create_socket_pair(port=0):
+    """Create socket pair.
 
-def create_pipe():
-    """Create a non-blocking read/write pipe.
+    If socket.socketpair isn't available, we emulate it.
     """
-    r, w = os.pipe()
-    if HAS_FNCTL:
-        fcntl.fcntl(r, fcntl.F_SETFL, os.O_NONBLOCK)
-        fcntl.fcntl(w, fcntl.F_SETFL, os.O_NONBLOCK)
-        _set_fd_cloexec(r)
-        _set_fd_cloexec(w)
-    return r, w
+    # See if socketpair() is available.
+    have_socketpair = hasattr(socket, 'socketpair')
+    if have_socketpair:
+        client_sock, srv_sock = socket.socketpair()
+        return client_sock, srv_sock
 
+    # Create a non-blocking temporary server socket
+    temp_srv_sock = socket.socket()
+    temp_srv_sock.setblocking(False)
+    temp_srv_sock.bind(('', port))
+    port = temp_srv_sock.getsockname()[1]
+    temp_srv_sock.listen(1)
+
+    # Create non-blocking client socket
+    client_sock = socket.socket()
+    client_sock.setblocking(False)
+    try:
+        client_sock.connect(('localhost', port))
+    except socket.error as err:
+        # EWOULDBLOCK is not an error, as the socket is non-blocking
+        if err.errno != errno.EWOULDBLOCK:
+            raise
+
+    # Use select to wait for connect() to succeed.
+    import select
+    timeout = 1
+    readable = select.select([temp_srv_sock], [], [], timeout)[0]
+    if temp_srv_sock not in readable:
+        raise Exception('Client socket not connected in {} second(s)'.format(timeout))
+    srv_sock, _ = temp_srv_sock.accept()
+
+    return client_sock, srv_sock
 
 def create_tcp_socket(module):
     """Create a TCP socket with the CLOEXEC flag set.
diff --git a/slider-agent/src/main/python/kazoo/protocol/connection.py b/slider-agent/src/main/python/kazoo/protocol/connection.py
index 3cbb87f..6b89c18 100644
--- a/slider-agent/src/main/python/kazoo/protocol/connection.py
+++ b/slider-agent/src/main/python/kazoo/protocol/connection.py
@@ -17,7 +17,7 @@
     SessionExpiredError,
     NoNodeError
 )
-from kazoo.handlers.utils import create_pipe
+from kazoo.handlers.utils import create_socket_pair
 from kazoo.loggingsupport import BLATHER
 from kazoo.protocol.serialization import (
     Auth,
@@ -146,8 +146,8 @@
         self.connection_stopped.set()
         self.ping_outstanding = client.handler.event_object()
 
-        self._read_pipe = None
-        self._write_pipe = None
+        self._read_sock = None
+        self._write_sock = None
 
         self._socket = None
         self._xid = None
@@ -169,7 +169,7 @@
     def start(self):
         """Start the connection up"""
         if self.connection_closed.is_set():
-            self._read_pipe, self._write_pipe = create_pipe()
+            self._read_sock, self._write_sock = create_socket_pair()
             self.connection_closed.clear()
         if self._connection_routine:
             raise Exception("Unable to start, connection routine already "
@@ -192,12 +192,12 @@
         if not self.connection_stopped.is_set():
             raise Exception("Cannot close connection until it is stopped")
         self.connection_closed.set()
-        wp, rp = self._write_pipe, self._read_pipe
-        self._write_pipe = self._read_pipe = None
-        if wp is not None:
-            os.close(wp)
-        if rp is not None:
-            os.close(rp)
+        ws, rs = self._write_sock, self._read_sock
+        self._write_sock = self._read_sock = None
+        if ws is not None:
+            ws.close()
+        if rs is not None:
+            rs.close()
 
     def _server_pinger(self):
         """Returns a server pinger iterable, that will ping the next
@@ -238,8 +238,8 @@
         if xid:
             header, buffer, offset = self._read_header(timeout)
             if header.xid != xid:
-                raise RuntimeError('xids do not match, expected %r received %r',
-                                   xid, header.xid)
+                raise RuntimeError('xids do not match, expected %r '
+                                   'received %r', xid, header.xid)
             if header.zxid > 0:
                 zxid = header.zxid
             if header.err:
@@ -257,8 +257,9 @@
             try:
                 obj, _ = request.deserialize(msg, 0)
             except Exception:
-                self.logger.exception("Exception raised during deserialization"
-                                      " of request: %s", request)
+                self.logger.exception(
+                    "Exception raised during deserialization "
+                    "of request: %s", request)
 
                 # raise ConnectionDropped so connect loop will retry
                 raise ConnectionDropped('invalid server response')
@@ -276,8 +277,9 @@
         if request.type:
             b.extend(int_struct.pack(request.type))
         b += request.serialize()
-        self.logger.log((BLATHER if isinstance(request, Ping) else logging.DEBUG),
-                        "Sending request(xid=%s): %s", xid, request)
+        self.logger.log(
+            (BLATHER if isinstance(request, Ping) else logging.DEBUG),
+            "Sending request(xid=%s): %s", xid, request)
         self._write(int_struct.pack(len(b)) + b, timeout)
 
     def _write(self, msg, timeout):
@@ -358,8 +360,9 @@
                 try:
                     response = request.deserialize(buffer, offset)
                 except Exception as exc:
-                    self.logger.exception("Exception raised during deserialization"
-                                          " of request: %s", request)
+                    self.logger.exception(
+                        "Exception raised during deserialization "
+                        "of request: %s", request)
                     async_object.set_exception(exc)
                     return
                 self.logger.debug(
@@ -415,11 +418,11 @@
         except IndexError:
             # Not actually something on the queue, this can occur if
             # something happens to cancel the request such that we
-            # don't clear the pipe below after sending
+            # don't clear the socket below after sending
             try:
                 # Clear possible inconsistence (no request in the queue
-                # but have data in the read pipe), which causes cpu to spin.
-                os.read(self._read_pipe, 1)
+                # but have data in the read socket), which causes cpu to spin.
+                self._read_sock.recv(1)
             except OSError:
                 pass
             return
@@ -440,7 +443,7 @@
 
         self._submit(request, connect_timeout, xid)
         client._queue.popleft()
-        os.read(self._read_pipe, 1)
+        self._read_sock.recv(1)
         client._pending.append((request, async_object, xid))
 
     def _send_ping(self, connect_timeout):
@@ -492,7 +495,7 @@
 
     def _connect_attempt(self, host, port, retry):
         client = self.client
-        TimeoutError = self.handler.timeout_exception
+        KazooTimeoutError = self.handler.timeout_exception
         close_connection = False
 
         self._socket = None
@@ -519,7 +522,7 @@
                 jitter_time = random.randint(0, 40) / 100.0
                 # Ensure our timeout is positive
                 timeout = max([read_timeout / 2.0 - jitter_time, jitter_time])
-                s = self.handler.select([self._socket, self._read_pipe],
+                s = self.handler.select([self._socket, self._read_sock],
                                         [], [], timeout)[0]
 
                 if not s:
@@ -537,7 +540,7 @@
             self.logger.info('Closing connection to %s:%s', host, port)
             client._session_callback(KeeperState.CLOSED)
             return STOP_CONNECTING
-        except (ConnectionDropped, TimeoutError) as e:
+        except (ConnectionDropped, KazooTimeoutError) as e:
             if isinstance(e, ConnectionDropped):
                 self.logger.warning('Connection dropped: %s', e)
             else:
@@ -570,9 +573,9 @@
         self.logger.info('Connecting to %s:%s', host, port)
 
         self.logger.log(BLATHER,
-                          '    Using session_id: %r session_passwd: %s',
-                          client._session_id,
-                          hexlify(client._session_passwd))
+                        '    Using session_id: %r session_passwd: %s',
+                        client._session_id,
+                        hexlify(client._session_passwd))
 
         with self._socket_error_handling():
             self._socket = self.handler.create_connection(
@@ -584,7 +587,8 @@
                           client._session_id or 0, client._session_passwd,
                           client.read_only)
 
-        connect_result, zxid = self._invoke(client._session_timeout, connect)
+        connect_result, zxid = self._invoke(
+            client._session_timeout / 1000.0, connect)
 
         if connect_result.time_out <= 0:
             raise SessionExpiredError("Session has expired")
@@ -601,13 +605,13 @@
         client._session_passwd = connect_result.passwd
 
         self.logger.log(BLATHER,
-                          'Session created, session_id: %r session_passwd: %s\n'
-                          '    negotiated session timeout: %s\n'
-                          '    connect timeout: %s\n'
-                          '    read timeout: %s', client._session_id,
-                          hexlify(client._session_passwd),
-                          negotiated_session_timeout, connect_timeout,
-                          read_timeout)
+                        'Session created, session_id: %r session_passwd: %s\n'
+                        '    negotiated session timeout: %s\n'
+                        '    connect timeout: %s\n'
+                        '    read timeout: %s', client._session_id,
+                        hexlify(client._session_passwd),
+                        negotiated_session_timeout, connect_timeout,
+                        read_timeout)
 
         if connect_result.read_only:
             client._session_callback(KeeperState.CONNECTED_RO)
@@ -618,7 +622,7 @@
 
         for scheme, auth in client.auth_data:
             ap = Auth(0, scheme, auth)
-            zxid = self._invoke(connect_timeout, ap, xid=AUTH_XID)
+            zxid = self._invoke(connect_timeout / 1000.0, ap, xid=AUTH_XID)
             if zxid:
                 client.last_zxid = zxid
         return read_timeout, connect_timeout
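
Both `_invoke` calls changed in this region now divide their timeout by 1000.0. ZooKeeper negotiates session and connect timeouts in milliseconds on the wire, while the socket layer underneath works in seconds, so passing the raw value would make the client wait a thousand times too long. A toy illustration with an assumed example value:

```python
# ZooKeeper negotiates session timeouts in milliseconds (example value).
session_timeout_ms = 10000.0

# select()/recv() timeouts are expressed in seconds, so the value is
# scaled down before it reaches the I/O layer -- hence the /1000.0.
invoke_timeout_s = session_timeout_ms / 1000.0

assert invoke_timeout_s == 10.0
```
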
diff --git a/slider-agent/src/main/python/kazoo/testing/__init__.py b/slider-agent/src/main/python/kazoo/testing/__init__.py
deleted file mode 100644
index 660546b..0000000
--- a/slider-agent/src/main/python/kazoo/testing/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-"""license: Apache License 2.0, see LICENSE for more details."""
-from kazoo.testing.harness import KazooTestCase
-from kazoo.testing.harness import KazooTestHarness
-
-
-__all__ = ('KazooTestHarness', 'KazooTestCase', )
diff --git a/slider-agent/src/main/python/kazoo/testing/common.py b/slider-agent/src/main/python/kazoo/testing/common.py
deleted file mode 100644
index b497a8e..0000000
--- a/slider-agent/src/main/python/kazoo/testing/common.py
+++ /dev/null
@@ -1,284 +0,0 @@
-"""license: Apache License 2.0, see LICENSE for more details."""
-#
-#  Copyright (C) 2010-2011, 2011 Canonical Ltd. All Rights Reserved
-#
-#  This file was originally taken from txzookeeper and modified later.
-#
-#  Authors:
-#   Kapil Thangavelu and the Kazoo team
-#
-#  txzookeeper is free software: you can redistribute it and/or modify
-#  it under the terms of the GNU Lesser General Public License as published by
-#  the Free Software Foundation, either version 3 of the License, or
-#  (at your option) any later version.
-#
-#  txzookeeper is distributed in the hope that it will be useful,
-#  but WITHOUT ANY WARRANTY; without even the implied warranty of
-#  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-#  GNU Lesser General Public License for more details.
-#
-#  You should have received a copy of the GNU Lesser General Public License
-#  along with txzookeeper.  If not, see <http://www.gnu.org/licenses/>.
-
-
-import code
-import os
-import os.path
-import shutil
-import signal
-import subprocess
-import tempfile
-import traceback
-
-from itertools import chain
-from collections import namedtuple
-from glob import glob
-
-
-def debug(sig, frame):
-    """Interrupt running process, and provide a python prompt for
-    interactive debugging."""
-    d = {'_frame': frame}         # Allow access to frame object.
-    d.update(frame.f_globals)  # Unless shadowed by global
-    d.update(frame.f_locals)
-
-    i = code.InteractiveConsole(d)
-    message = "Signal recieved : entering python shell.\nTraceback:\n"
-    message += ''.join(traceback.format_stack(frame))
-    i.interact(message)
-
-
-def listen():
-    if os.name != 'nt':  # SIGUSR1 is not supported on Windows
-        signal.signal(signal.SIGUSR1, debug)  # Register handler
-listen()
-
-
-def to_java_compatible_path(path):
-    if os.name == 'nt':
-        path = path.replace('\\', '/')
-    return path
-
-ServerInfo = namedtuple(
-    "ServerInfo", "server_id client_port election_port leader_port")
-
-
-class ManagedZooKeeper(object):
-    """Class to manage the running of a ZooKeeper instance for testing.
-
-    Note: no attempt is made to probe whether the ZooKeeper instance is
-    actually available, or whether the selected port is free. In the
-    future, we may want to do that, especially when run in a
-    Hudson/Buildbot context, to ensure more test robustness."""
-
-    def __init__(self, software_path, server_info, peers=(), classpath=None):
-        """Define the ZooKeeper test instance.
-
-        @param software_path: The path to the ZooKeeper install to run
-        @param server_info: The ports and server id for the managed instance
-        """
-        self.install_path = software_path
-        self._classpath = classpath</