AMBARI-619. Rename old-trunk to appropriate version based name and delete snafu branch

git-svn-id: https://svn.apache.org/repos/asf/incubator/ambari/branches/branch-0.1@1359938 13f79535-47bb-0310-9956-ffa450edef68
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..c10ce5d
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,20 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#     http://www.apache.org/licenses/LICENSE-2.0
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+agent/src/main/python/hms_agent.egg-info
+target
+*~
+*.pyc
+.classpath
+.project
+.settings
diff --git a/CHANGES.txt b/CHANGES.txt
new file mode 100644
index 0000000..642ed78
--- /dev/null
+++ b/CHANGES.txt
@@ -0,0 +1,393 @@
+Ambari Change log
+
+Release 0.1.0 - unreleased
+
+  AMBARI-185. Remove NodeServers from NodeState. Instead use NodeRole to track the roles associated with a node and their active state. (vgogate)
+
+  AMBARI-184. Ambari client node list command returns multiple entries for the same node (vgogate)
+
+  AMBARI-183. Pass the appropriate component user to the agent as specified in the stack (vgogate)
+
+  AMBARI-182. Rename controller/src/main/resources/org/apache/ambari/acd/mapred-0.1.0.acd to mapreduce-0.1.0.acd (vgogate)
+
+  AMBARI-180. Fixes the agent to do better process management (ddas)
+
+  AMBARI-179. Set the component-level user/group information in the flattened stack;
+  inherit the default user/group information if none is set for a component. (vgogate)
+
+  AMBARI-178. Add support for Map/Reduce component in Ambari stack (vgogate)
+
+  AMBARI-176. Adds a first version of MapReduce ACD (ddas)
+
+  AMBARI-175. Removes the map from hostnames to heartbeat-responses. (ddas)
+
+  AMBARI-174. Controller marks nodes unhealthy upon command execution failures. Marks them
+  healthy when the corresponding agent is restarted (ddas)
+
+  AMBARI-173. Fixed RPM build for OpenSUSE. (Eric Yang)
+
+  AMBARI-172. Remove the "ambari" category from the configuration element and put it as a "globals" element in the stack. (vgogate)
+
+  AMBARI-171. Agents retry failed actions for a configurable number of times
+  after a configurable delay (ddas)
+
+  AMBARI-170. Update the cluster state after state machine transitions it to final ACTIVE/INACTIVE state (vgogate)
+
+  AMBARI-168. Trim the white space from host names returned through getHostnamesFromRangeExpressions (vgogate)
+
+  AMBARI-163. Addresses failure handling in FSM. (thejas via ddas)
+
+  AMBARI-165. Fix the component definition for HDFS. (omalley)
+
+  AMBARI-162. Fixed agent unit test failure when ethernet is not in
+  use. (Eric Yang)
+
+  AMBARI-161. Add puppet module for Hadoop to agent resources (vgogate)
+
+  AMBARI-159. Temporarily disabled security (until Ambari upgrades to 
+  Python2.7). (ddas)
+
+  AMBARI-160. Ambari client add stack command should allow both json
+  and xml (vgogate)
+
+  AMBARI-158. Move the JSON encoding to the natural one. (omalley)
+  
+  AMBARI-157. Enhances the agent to make it puppet aware (ddas)
+
+  AMBARI-156. Clean up the puppet example stack. (omalley)
+
+  AMBARI-155. Heartbeat response to a node should contain an empty list of
+  actions if the node doesn't currently belong to any cluster. (ddas)
+
+  AMBARI-154. Exception while starting the controller. Outdated JAXB
+  fields in HeartBeat class (vgogate)
+
+  AMBARI-153. Introduce an 'ambari.properties' configuration file that
+  can specify 'data.store' and a url. It defaults to 'zk://localhost:2181/',
+  but can be set to 'test:/' to get the static storage. (omalley)
+
+  AMBARI-152. Fixes issues in the shell scripts (ddas)
+
+  AMBARI-148. Refactors StateMachineInvoker (ddas)
+
+  AMBARI-151. Fix TestHardware when in offline mode. (omalley)
+
+  AMBARI-150. Simplifies states in controller state machine (thejas via ddas)
+
+  AMBARI-149. Filter the meta ambari category out of the flattened stacks.
+  (omalley)
+
+  AMBARI-141. Update the heartbeat on controller/agent (ddas)
+
+  AMBARI-147. Create a stack flattener and introduce Guice. (omalley)
+
+  AMBARI-145. FSMs are created for only those components that have 
+  active roles (thejas via ddas)
+
+  AMBARI-146. Fix test case failures in agent's FileUtil. (omalley)
+
+  AMBARI-144. Implement getInstallAndConfigureScript for a given
+  revision of cluster definition. (vgogate)
+
+  AMBARI-143. Fixes an annotation issue in HeartBeat class (ddas)
+
+  AMBZRI-142. Add cluster must validate if requested nodes are
+  pre-allocated to any other existing cluster (vgogate)
+
+  AMBARI-140. Refactors the heartbeat handling w.r.t. simplification of
+  state management. (ddas)
+
+  AMBARI-138. Implement stack persistence (vgogate)
+
+  AMBARI-135. Simplifies the heartbeat handling to not deal with 
+  install/configure methods on component plugin definitions (ddas)
+
+  AMBARI-134. Add Google analytics to the site. (omalley)
+
+  AMBARI-132. Fix update agent environment script location. (Ahmed
+  Fathalla via Eric Yang)
+
+  AMBARI-131. Fixed post installation script for Ambari Agent. (Eric Yang)
+
+  AMBARI-129. Rename agent package reference of HMS to Ambari. (Eric Yang)
+
+  AMBARI-128. Improved ethtool handling. (Ahmed Fathalla via Eric Yang)
+
+  AMBARI-127. Fixed mailing list address. (Ahmed Fathalla via Eric Yang)
+
+  AMBARI-126. Minor fixes to the FSM invocations (ddas)
+
+  AMBARI-125. Recover the state of existing clusters after
+  controller restart (vgogate)
+
+  AMBARI-124. Add Zookeeper Data store and persist the cluster
+  definitions across controller restart (vgogate)
+
+  AMBARI-116. Change the name "group" to "provider" in the
+  hadoop-security-0.xml stack definition (vgogate)
+
+  AMBARI-120. Fixed REST resource annotation bugs. (Eric Yang)
+
+  AMBARI-121. Added examples for returning REST resources. (Eric Yang)
+
+  AMBARI-119. Enhance agent to support workDirComponent. (Eric Yang)
+
+  AMBARI-118. Added safeguard mechanism to prevent agent crash on
+  faulty action. (Eric Yang)
+
+  AMBARI-117. If some install action is sent in a heartbeat response, the
+  latter should also include the dependent components' installs. (ddas)
+
+  AMBARI-115. Fixed connection error handling for agent. (Eric Yang)
+
+  AMBARI-114. Fix issues in XMLComponentDefinition (ddas)
+
+  AMBARI-112. Fixes the blueprint/stack resolution in the Cluster class
+  (ddas)
+
+  AMBARI-112. Fix the url path conflict for /clusters used in both
+  ClustersResource and ClusterResource (vgogate)
+
+  AMBARI-111. Minor clean up of site documentation (omalley)
+
+  AMBARI-110. Add persistent data store interface (vgogate)
+
+  AMBARI-107. Added reporting section to aggregate javadocs. (Eric Yang)
+
+  AMBARI-109. Minor fixes to the CLI documentation. (omalley)
+
+  AMBARI-108. Change name blueprint to stack (vgogate)
+
+  AMBARI-106. Fixes some javadoc stuff (ddas)
+
+  AMBARI-105. Remove POST on the clusters resource to create a new cluster
+  (instead use a PUT operation on the cluster resource, along with update
+  cluster) (vgogate)
+
+  AMBARI-104. Polishes the CLI doc some (ddas)
+
+  AMBARI-103. Refactor agent entities package to 
+  org.apache.ambari.common.rest.agent. (Eric Yang)
+
+  AMBARI-102. Reduce heartbeat message content, when installedRoleState
+  is empty. (Eric Yang)
+
+  AMBARI-101. Remove clusterID and use the cluster name as the unique ID for
+  the cluster (vgogate)
+
+  AMBARI-100. Fixes the heartbeat to take into account install/uninstall 
+  of components (ddas)
+
+  AMBARI-99. Added schema and wadl generation to be part of the build system, 
+  and integrate with maven site. (Eric Yang)
+
+  AMBARI-98. Get cluster nodes with cluster in ATTIC state fails. (vgogate)
+
+  AMBARI-96. Updated the ambari client to show a usage screen. (Eric Yang)
+
+  AMBARI-93. Update -revision parameter to make it optional. (Eric Yang)
+
+  AMBARI-92. Added logic to retry heartbeat sending. (Eric Yang)
+
+  AMBARI-91. Move the example blueprints into xml resources. (omalley)
+
+  AMBARI-90. Implement nodes get/list CLI (vgogate)
+
+  AMBARI-89. Implement blueprint history CLI (vgogate)
+
+  AMBARI-88. Update cluster nodes reservation is giving null pointer
+  exception during cluster creation (vgogate)
+
+  AMBARI-87. Importing pre-existing blueprint to Ambari through CLI
+  "blueprint add" gives wrong error message (vgogate)
+
+  AMBARI-86. Validate that the blueprint referenced by a cluster exists,
+  including its parent blueprints (vgogate)
+
+  AMBARI-85. Adds handling of new states to do with preinstall actions (ddas)
+
+  AMBARI-84. Added configuration file writer for Ambari Component. (Eric Yang)
+
+  AMBARI-83. Added python unit test framework. (Eric Yang)
+
+  AMBARI-82. Fix example clusters. (omalley)
+
+  AMBARI-81. Updated xslt document to show human readable stylesheet. (Eric 
+  Yang)
+
+  AMBARI-80. Implement blueprint get CLI
+
+  AMBARI-79. Create default blueprint instance
+
+  AMBARI-78. Change the datatype of responseId in the heartbeat messages to 
+  short (ddas)
+
+  AMBARI-77. Create default blueprint containing HDFS component (vgogate)
+
+  AMBARI-76. Register new node w/ Ambari controller (vgogate)
+
+  AMBARI-75. Centralize agent configuration parsing. (Eric Yang)
+
+  AMBARI-74. Throttle the frequency of checking action queue to 5 seconds.
+  (Eric Yang)
+
+  AMBARI-73. Implement cluster nodes CLI. (vgogate)
+
+  AMBARI-72. Adding (dummy) blueprints before (dummy) cluster definitions and
+  fixing null pointer exception when parent blueprint is set to null
+  for top level blueprint (vgogate)
+
+  AMBARI-71. Fix broken packaging and startup scripts. (Eric Yang)
+
+  AMBARI-70. Implements the installation/configuration of gateway role (ddas)
+
+  AMBARI-66. Implemented compatible package install/uninstall action for 
+  plugin. (Eric Yang)
+
+  AMBARI-69. Added skeleton for Ambari component plugin library. (Eric Yang)
+
+  AMBARI-68. Implement add blueprint CLI (vgogate)
+
+  AMBARI-67. Implement cluster list, get CLI commands (vgogate)
+
+  AMBARI-65. Added directory structure actions. (Eric Yang)
+
+  AMBARI-60. Added permission check for RUN_ACTION, and
+  WRITE_FILE_ACTION. (Eric Yang)
+
+  AMBARI-64. Define components in terms of XML. (omalley)
+
+  AMBARI-63. Implement cluster update, rename and delete CLI commands (vgogate)
+
+  AMBARI-62. Adds the install/uninstall checks in the heartbeat handler (ddas)
+
+  AMBARI-61. Rename cluster REST API. (vgogate)
+
+  AMBARI-59. Refactor to use clusterRevision instead of bluePrintName and 
+  bluePrintRevision. (Eric Yang)
+
+  AMBARI-57. Adds a state for monitoring safe-mode success/failure
+  checks in the ServiceFSM (ddas)
+
+  AMBARI-56. Refactor write config file command to write config file
+  action. (Eric Yang)
+
+  AMBARI-54. Refactor agent implementation to match AMBARI-53. (Eric Yang)
+
+  AMBARI-51. Refactor transport data model for commands to become
+  action. (Eric Yang)
+
+  AMBARI-56. Surface the write config file command to write config
+  file action. (Eric Yang)
+
+  AMBARI-55. Release cluster nodes function (vgogate)
+
+  AMBARI-53. Refactor the HeartBeat to have Agents' states separated by 
+  component/role (ddas)
+
+  AMBARI-50. Refactor the REST apis. (omalley)
+
+  AMBARI-48. Move Cluster object from rest entities to controller (vgogate)
+
+  AMBARI-47. Implement Cluster definition re-visioning (vgogate)
+
+  AMBARI-45. Implement CLI command Cluster create (vgogate)
+
+  AMBARI-46. Implemented preservation of cluster id, blueprint name and
+  blueprint revision on agent. (Eric Yang)
+
+  AMBARI-44. Implemented blueprint name and revision in heartbeat. (Eric Yang)
+  
+  AMBARI-39. Bridged the cluster reference gap between the REST API and the state machine.
+  (Eric Yang)
+
+  AMBARI-28. Clean up HTML-encoded javadoc. (Eric Yang)
+
+  AMBARI-23. Renamed agent API to /agent, and public API to /rest. (Eric Yang)
+
+  AMBARI-18. Implemented special command to write configuration file. (Eric 
+  Yang)
+
+  AMBARI-17. Added idle state for agent heartbeat. (Eric Yang)
+
+  AMBARI-15. Implemented agent side of authentication hooks. (Eric Yang)
+
+  AMBARI-12. Added transition states STARTING and STOPPING. (Eric Yang)
+
+  AMBARI-11. Implemented Agent to controller heartbeat communication. (Eric 
+  Yang)
+
+  AMBARI-7. Updated Jersey to 1.9 for automating wadl generation. (Eric Yang)
+
+  AMBARI-3. Move HMS prototype code to branch 0.0. (Eric Yang)
+
+  AMBARI-2. Added heartbeat/controller response data model, and wadl 
+  configuration. (Eric Yang)
+
+  AMBARI-42. Return the latest blueprint revision if a revision is not
+  specified as a query parameter.
+
+  AMBARI-43. Change the API StateMachineInvoker.getStateMachineClusterInstance
+  to take blueprint-related arguments. (ddas)
+
+  AMBARI-41. Rename the Role/Cluster/Service classes in the statemachine 
+  package to RoleFSM/ClusterFSM/ServiceFSM (ddas)
+
+  AMBARI-37. Tidies up the statemachine API and related classes a bit (ddas)
+
+  AMBARI-36. Add CLI interface document to Ambari site (vgogate)
+
+  AMBARI-35. Replaces the counters for keeping track of service/role start/stop
+  with iterators. (ddas)
+
+  AMBARI-34. Address the start cluster part of the statemachine implementation,
+  and handle the heartbeat. (ddas)
+
+  AMBARI-32. Remove the Stack resource from Ambari (vgogate)
+
+  AMBARI-25. Clean up the configuration entity to collapse some levels. 
+  (omalley)
+
+  AMBARI-31. Fix JAXB annotations for Ambari resources (vgogate)
+
+  AMBARI-30. Fix the build so that the client and controller tarballs are 
+  built. (omalley)
+
+  AMBARI-29. Implement Node Resource API. (vgogate)
+
+  AMBARI-24. Fix the versions in the pom.xml. (omalley)
+
+  AMBARI-22. Implement Blueprint Resource API (vgogate)
+
+  AMBARI-21. Fix the problem w/ Stacks Resource API (vgogate)
+
+  AMBARI-20. Fix the rest API for getting the cluster nodes (vgogate)
+
+  AMBARI-19. Fix Cluster resource API nodes reservation logic (vgogate)
+
+  AMBARI-16. Implement Stacks resource API (vgogate)
+
+  AMBARI-14. Implement Ambari REST API for cluster resource (vgogate)
+
+  AMBARI-13. Initial attempt at a website. (omalley)
+
+  AMBARI-10. Initial checkin of the heartbeat handling code (ddas)
+
+  AMBARI-9. Fix all of the files to have the Apache header and include
+  RAT in the build. (omalley)
+
+  AMBARI-8. Move dependencies into controller. (omalley)
+
+  AMBARI-6. Moving Clusters and Nodes container objects into controller
+  (vgogate)
+
+  AMBARI-5. Added some left over changes from git repository for Ambari REST 
+  APIs. (vgogate)
+
+  AMBARI-4. Created interface for component plugins. (omalley)
+
+  AMBARI-1. Initial code import (omalley)
diff --git a/DISCLAIMER.txt b/DISCLAIMER.txt
new file mode 100644
index 0000000..d32bbf5
--- /dev/null
+++ b/DISCLAIMER.txt
@@ -0,0 +1,15 @@
+Apache Ambari is an effort undergoing incubation at the Apache Software 
+Foundation (ASF), sponsored by the Apache Incubator PMC. 
+
+Incubation is required of all newly accepted projects until a further review 
+indicates that the infrastructure, communications, and decision making process 
+have stabilized in a manner consistent with other successful ASF projects. 
+
+While incubation status is not necessarily a reflection of the completeness 
+or stability of the code, it does indicate that the project has yet to be 
+fully endorsed by the ASF.
+
+For more information about the incubation status of the Ambari project you
+can go to the following page:
+
+http://incubator.apache.org/ambari/
diff --git a/LICENSE.txt b/LICENSE.txt
new file mode 100644
index 0000000..d645695
--- /dev/null
+++ b/LICENSE.txt
@@ -0,0 +1,202 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
diff --git a/NOTICE.txt b/NOTICE.txt
new file mode 100644
index 0000000..b8e8d93
--- /dev/null
+++ b/NOTICE.txt
@@ -0,0 +1,14 @@
+Apache Ambari
+Copyright 2011 The Apache Software Foundation
+
+This product includes software developed by The Apache Software
+Foundation (http://www.apache.org/).
+
+This product includes jQuery UI (jqueryui.com)
+Copyright 2011 http://jqueryui.com/about
+
+This product includes DataTables (www.datatables.net)
+Copyright 2008-2010 Allan Jardine
+
+This product includes wadl.xsl (Transforms WADL XML documents to HTML.)
+Copyright 2011 Mark Sawers
diff --git a/agent/BUILD.txt b/agent/BUILD.txt
new file mode 100644
index 0000000..51a1c73
--- /dev/null
+++ b/agent/BUILD.txt
@@ -0,0 +1,22 @@
+Setup developer environment
+---------------------------
+
+Make sure Python 2.4+ is installed on the build machine.
+
+Download bencode package for python from:
+
+http://pypi.python.org/pypi/bencode/
+
+To install dependent packages, run:
+
+cd bencode-1.0
+sudo python setup.py install
+
+The build system is now ready for building Ambari Agent.
+
+Build Ambari Agent
+------------------
+
+To build Ambari Agent, run:
+
+mvn clean package -P rpm
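+
+As a quick sanity check that the dependency installed correctly (this
+assumes the package provides a top-level "bencode" module), run:
+
+python -c "import bencode"
+
+A silent exit means the module is importable.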
diff --git a/agent/bin/transmission-done.sh b/agent/bin/transmission-done.sh
deleted file mode 100644
index e92f774..0000000
--- a/agent/bin/transmission-done.sh
+++ /dev/null
@@ -1,12 +0,0 @@
-#!/bin/sh
-
-bin=`dirname "$0"`
-bin=`cd "$bin"; pwd`
-
-. "$bin"/hms-config.sh
-
-sleep 30
-transmission-remote -t 1 -r
-transmission-remote --exit
-result=$?
-echo $result > ${HMS_HOME}/var/tmp/tracker
diff --git a/agent/pom.xml b/agent/pom.xml
index bf80134..9d86304 100644
--- a/agent/pom.xml
+++ b/agent/pom.xml
@@ -19,20 +19,20 @@
 -->
 
     <modelVersion>4.0.0</modelVersion>
-    <groupId>org.apache.hms</groupId>
-    <artifactId>hms-agent</artifactId>
+    <groupId>org.apache.ambari</groupId>
+    <artifactId>ambari-agent</artifactId>
     <packaging>pom</packaging>
     <version>0.1.0</version>
     <name>agent</name>
-    <description>Hadoop Management System Agent</description>
+    <description>Ambari Agent</description>
 
     <properties>
         <final.name>${project.artifactId}-${project.version}</final.name>
         <package.release>1</package.release>
         <package.prefix>/usr</package.prefix>
-        <package.conf.dir>/etc/hms</package.conf.dir>
-        <package.log.dir>/var/log/hms</package.log.dir>
-        <package.pid.dir>/var/run/hms</package.pid.dir>
+        <package.conf.dir>/etc/ambari</package.conf.dir>
+        <package.log.dir>/var/log/ambari</package.log.dir>
+        <package.pid.dir>/var/run/ambari</package.pid.dir>
     </properties>
 
     <build>
@@ -62,15 +62,32 @@
                 <executions>
                     <execution>
                         <configuration>
-                            <executable>python</executable>
-                            <workingDirectory>target/hms-agent-${project.version}</workingDirectory>
+                            <executable>python2.6</executable>
+                            <workingDirectory>src/test/python</workingDirectory>
+                            <arguments>
+                                <argument>unitTests.py</argument>
+                            </arguments>    
+                            <environmentVariables>
+                                <PYTHONPATH>../../main/python:$PYTHONPATH</PYTHONPATH>
+                            </environmentVariables>
+                        </configuration>
+                        <id>python-test</id>
+                        <phase>test</phase>
+                        <goals>
+                            <goal>exec</goal>
+                        </goals>
+                    </execution>
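+                    <!-- The execution above wires the Python unit tests
+                         (src/test/python/unitTests.py) into the Maven test
+                         phase; the execution below drives setup.py
+                         bdist_dumb to package the agent. -->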
+                    <execution>
+                        <configuration>
+                            <executable>python2.6</executable>
+                            <workingDirectory>target/ambari-agent-${project.version}</workingDirectory>
                             <arguments>
                                 <argument>${project.basedir}/src/main/python/setup.py</argument>
                                 <argument>clean</argument>
                                 <argument>bdist_dumb</argument>
                             </arguments>    
                             <environmentVariables>
-                            <PYTHONPATH>target/hms-agent-${project.version}:$PYTHONPATH</PYTHONPATH>
+                            <PYTHONPATH>target/ambari-agent-${project.version}:$PYTHONPATH</PYTHONPATH>
                             </environmentVariables>
                         </configuration>
                         <id>python-package</id>
@@ -82,6 +99,12 @@
                 </executions>
             </plugin>
         </plugins>
+      <extensions>
+        <extension>
+          <groupId>org.apache.maven.wagon</groupId>
+          <artifactId>wagon-ssh-external</artifactId>
+        </extension>
+      </extensions>
     </build>
 
   <profiles>
@@ -151,4 +174,12 @@
     </profile>
   </profiles>
 
+  <distributionManagement>
+    <site>
+      <id>apache-website</id>
+      <name>Apache website</name>
+      <url>scpexe://people.apache.org/www/incubator.apache.org/ambari/ambari-agent</url>
+    </site>
+  </distributionManagement>
+
 </project>
diff --git a/agent/src/main/java/org/apache/hms/agent/Agent.java b/agent/src/main/java/org/apache/hms/agent/Agent.java
deleted file mode 100755
index d9ca80a..0000000
--- a/agent/src/main/java/org/apache/hms/agent/Agent.java
+++ /dev/null
@@ -1,89 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.agent;
-
-import java.net.URL;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hms.common.util.ExceptionUtil;
-import org.mortbay.jetty.Server;
-import org.mortbay.jetty.servlet.Context;
-import org.mortbay.jetty.servlet.ServletHolder;
-import org.mortbay.xml.XmlConfiguration;
-
-import com.sun.jersey.spi.container.servlet.ServletContainer;
-
-public class Agent {
-  private static Log log = LogFactory.getLog(Agent.class);
-  
-  private static Agent instance = null;
-  private Server server = null;
-  private static URL serverConf = null;
-
-  public static Agent getInstance() {
-    if(instance==null) {
-      instance = new Agent();
-    }
-    return instance;
-  }
-
-  public void start() {
-    try {
-      System.out.close();
-      System.err.close();
-      instance = this;
-      run();
-    } catch(Exception e) {
-      log.error(ExceptionUtil.getStackTrace(e));
-      System.exit(-1);
-    }
-  }
-
-  public void run() {
-    server = new Server(4080);
-
-    XmlConfiguration configuration;
-    try {
-      Context root = new Context(server, "/", Context.SESSIONS);
-      ServletHolder sh = new ServletHolder(ServletContainer.class);
-      sh.setInitParameter("com.sun.jersey.config.property.resourceConfigClass", "com.sun.jersey.api.core.PackagesResourceConfig");
-      sh.setInitParameter("com.sun.jersey.config.property.packages", "org.apache.hms.agent.rest");
-      root.addServlet(sh, "/*");
-      server.setStopAtShutdown(true);
-      server.start();
-    } catch (Exception e) {
-      log.error(ExceptionUtil.getStackTrace(e));
-    }
-  }
-
-  public void stop() throws Exception {
-    try {
-      server.stop();
-    } catch (Exception e) {
-      log.error(ExceptionUtil.getStackTrace(e));
-    }
-  }
-
-  public static void main(String[] args) {
-    Agent agent = Agent.getInstance();
-    agent.start();
-  }
-
-}
diff --git a/agent/src/main/java/org/apache/hms/agent/dispatcher/DaemonRunner.java b/agent/src/main/java/org/apache/hms/agent/dispatcher/DaemonRunner.java
deleted file mode 100755
index f2514ba..0000000
--- a/agent/src/main/java/org/apache/hms/agent/dispatcher/DaemonRunner.java
+++ /dev/null
@@ -1,65 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.agent.dispatcher;
-
-import org.apache.hms.common.entity.ScriptCommand;
-import org.apache.hms.common.entity.agent.DaemonAction;
-import org.apache.hms.common.rest.Response;
-
-public class DaemonRunner {
-  
-  public Response startDaemon(DaemonAction dc) {
-    ScriptCommand cmd = new ScriptCommand();
-    StringBuilder sb = new StringBuilder();
-    sb.append("/etc/init.d/");
-    sb.append(dc.getDaemonName());
-    cmd.setScript(sb.toString());
-    String[] parms = new String[1];
-    parms[0] = "start";
-    cmd.setParms(parms);
-    ShellRunner shell = new ShellRunner();
-    return shell.run(cmd);
-  }
-  
-  public Response stopDaemon(DaemonAction dc) {
-    ScriptCommand cmd = new ScriptCommand();
-    StringBuilder sb = new StringBuilder();
-    sb.append("/etc/init.d/");
-    sb.append(dc.getDaemonName());
-    cmd.setScript(sb.toString());
-    String[] parms = new String[1];
-    parms[0] = "stop";
-    cmd.setParms(parms);
-    ShellRunner shell = new ShellRunner();
-    return shell.run(cmd);
-  }
-  
-  public Response checkDaemon(DaemonAction dc) {
-    ScriptCommand cmd = new ScriptCommand();
-    StringBuilder sb = new StringBuilder();
-    sb.append("/etc/init.d/");
-    sb.append(dc.getDaemonName());
-    cmd.setScript(sb.toString());
-    String[] parms = new String[1];
-    parms[0] = "status";
-    cmd.setParms(parms);
-    ShellRunner shell = new ShellRunner();
-    return shell.run(cmd);
-  }
-}
diff --git a/agent/src/main/java/org/apache/hms/agent/dispatcher/PackageRunner.java b/agent/src/main/java/org/apache/hms/agent/dispatcher/PackageRunner.java
deleted file mode 100755
index cefdff2..0000000
--- a/agent/src/main/java/org/apache/hms/agent/dispatcher/PackageRunner.java
+++ /dev/null
@@ -1,111 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.agent.dispatcher;
-
-import java.io.File;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hms.agent.Agent;
-import org.apache.hms.common.entity.PackageCommand;
-import org.apache.hms.common.entity.PackageInfo;
-import org.apache.hms.common.entity.ScriptCommand;
-import org.apache.hms.common.entity.agent.DaemonAction;
-import org.apache.hms.common.rest.Response;
-import org.apache.hms.common.util.FileUtil;
-
-public class PackageRunner {
-  private static Log log = LogFactory.getLog(PackageRunner.class);
-
-  public Response install(PackageCommand dc) {
-    dc.setCmd("install");
-    if(dc.getDryRun()) {
-      return dryRun(dc);
-    }
-    return helper(dc);
-  }
-  
-  public Response remove(PackageCommand dc) {
-    dc.setCmd("erase");
-    return helper(dc);
-  }
-  
-  public Response query(PackageCommand dc) {
-    dc.setCmd("info");
-    return helper(dc);
-  }
-  
-  private Response helper(PackageCommand dc) {
-    ScriptCommand cmd = new ScriptCommand();
-    cmd.setCmd(dc.getCmd());
-    Response r = null;
-    cmd.setScript("yum");
-    String[] parms = null;
-    if(dc.getPackages().length>0) {
-      parms = new String[dc.getPackages().length+2];
-      for(int i = 0; i< dc.getPackages().length;i++) {
-        parms[i+2] = dc.getPackages()[i].getName();
-      }
-    }
-    if(parms != null) {
-      parms[0] = dc.getCmd();
-      parms[1] = "-y";
-      cmd.setParms(parms);
-      ShellRunner shell = new ShellRunner();
-      r = shell.run(cmd);
-    } else {
-      r = new Response();
-      r.setCode(1);
-      r.setError("Invalid package name");
-    }
-    return r;    
-  }
-  
-  private Response dryRun(PackageCommand dc) {
-    Response r = null;
-    ScriptCommand cmd = new ScriptCommand();
-    cmd.setCmd(dc.getCmd());
-    cmd.setScript("yum");
-    PackageInfo[] packages = dc.getPackages();
-    String[] parms = new String[packages.length+4];
-    parms[0] = "install";
-    parms[1] = "-y";
-    parms[2] = "--downloadonly";
-    parms[3] = "--downloaddir=/tmp/system_update";
-    for(int i=0;i<packages.length;i++) {
-      parms[i+4] = packages[i].getName();
-    }
-    cmd.setParms(parms);
-    ShellRunner shell = new ShellRunner();
-    r = shell.run(cmd);
-    if(r.getCode()!=1) {
-      return r;
-    } else {
-      cmd.setScript("rpm");
-      String[] rpmParms = new String[3];
-      rpmParms[0] = "-i";
-      rpmParms[1] = "--test";
-      rpmParms[2] = "/tmp/system_update/*.rpm";
-      cmd.setParms(rpmParms);
-      r = shell.run(cmd);
-      FileUtil.deleteDir(new File("/tmp/system_update"));
-      return r;
-    }
-  }
-}
diff --git a/agent/src/main/java/org/apache/hms/agent/dispatcher/ShellRunner.java b/agent/src/main/java/org/apache/hms/agent/dispatcher/ShellRunner.java
deleted file mode 100755
index e9e95d8..0000000
--- a/agent/src/main/java/org/apache/hms/agent/dispatcher/ShellRunner.java
+++ /dev/null
@@ -1,76 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.agent.dispatcher;
-
-import java.io.DataInputStream;
-import java.io.IOException;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hms.agent.Agent;
-import org.apache.hms.common.entity.ScriptCommand;
-import org.apache.hms.common.rest.Response;
-import org.apache.hms.common.util.ExceptionUtil;
-
-public class ShellRunner {
-  private static Log log = LogFactory.getLog(ShellRunner.class);
-
-  public Response run(ScriptCommand cmd) {
-    Response r = new Response();
-    StringBuilder stdout = new StringBuilder();
-    StringBuilder errorBuffer = new StringBuilder();
-    Process proc;
-    try {
-      String[] parameters = cmd.getParms();
-      int size = 0;
-      if(parameters!=null) {
-        size = parameters.length;
-      }
-      String[] cmdArray = new String[size+1];
-      cmdArray[0]=cmd.getScript();
-      for(int i=0;i<size;i++) {
-        cmdArray[i+1]=parameters[i];      
-      }
-      proc = Runtime.getRuntime().exec(cmdArray);
-      DataInputStream in = new DataInputStream(proc.getInputStream());
-      DataInputStream err = new DataInputStream(proc.getErrorStream());
-      String str;
-      while ((str = in.readLine()) != null) {
-        stdout.append(str);
-        stdout.append("\n");
-      }
-      while ((str = err.readLine()) != null) {
-        errorBuffer.append(str);
-        errorBuffer.append("\n");
-      }
-      int exitCode = proc.waitFor();
-      r.setCode(exitCode);
-      r.setError(errorBuffer.toString());
-      r.setOutput(stdout.toString());
-    } catch (Exception e) {
-      r.setCode(1);
-      r.setError(ExceptionUtil.getStackTrace(e));
-      log.error(ExceptionUtil.getStackTrace(e));
-    }
-    log.info(cmd);
-    log.info(r);
-    return r;
-  }
-
-}
diff --git a/agent/src/main/java/org/apache/hms/agent/rest/DaemonManager.java b/agent/src/main/java/org/apache/hms/agent/rest/DaemonManager.java
deleted file mode 100755
index d78e704..0000000
--- a/agent/src/main/java/org/apache/hms/agent/rest/DaemonManager.java
+++ /dev/null
@@ -1,56 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.agent.rest;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.POST;
-import javax.ws.rs.Path;
-import javax.ws.rs.PathParam;
-
-import org.apache.hms.common.entity.RestSource;
-import org.apache.hms.common.entity.agent.DaemonAction;
-import org.apache.hms.common.rest.Response;
-
-@Path("daemon")
-public class DaemonManager extends RestSource {
-  
-  @POST
-  @Path("start")
-  public Response start(DaemonAction dc) {
-    org.apache.hms.agent.dispatcher.DaemonRunner runner = new org.apache.hms.agent.dispatcher.DaemonRunner();
-    Response r = runner.startDaemon(dc);
-    return r;      
-  }
-  
-  @POST
-  @Path("stop")
-  public Response stop(DaemonAction dc) {
-    org.apache.hms.agent.dispatcher.DaemonRunner runner = new org.apache.hms.agent.dispatcher.DaemonRunner();
-    Response r = runner.stopDaemon(dc);
-    return r;  
-  }
-  
-  @POST
-  @Path("status")
-  public Response status(DaemonAction dc) {
-    org.apache.hms.agent.dispatcher.DaemonRunner runner = new org.apache.hms.agent.dispatcher.DaemonRunner();
-    Response r = runner.checkDaemon(dc);
-    return r;
-  }
-}
diff --git a/agent/src/main/java/org/apache/hms/agent/rest/PackageManager.java b/agent/src/main/java/org/apache/hms/agent/rest/PackageManager.java
deleted file mode 100755
index fb1186e..0000000
--- a/agent/src/main/java/org/apache/hms/agent/rest/PackageManager.java
+++ /dev/null
@@ -1,54 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.agent.rest;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.POST;
-import javax.ws.rs.Path;
-
-import org.apache.hms.agent.dispatcher.PackageRunner;
-import org.apache.hms.common.entity.PackageCommand;
-import org.apache.hms.common.entity.RestSource;
-import org.apache.hms.common.rest.Response;
-
-@Path("package")
-public class PackageManager extends RestSource {
-  
-  @POST
-  @Path("install")
-  public Response install(PackageCommand pc) {
-    PackageRunner pr = new PackageRunner();
-    return pr.install(pc);
-  }
-  
-  @POST
-  @Path("remove")
-  public Response remove(PackageCommand pc) {
-    PackageRunner pr = new PackageRunner();
-    return pr.remove(pc);
-  }
-  
-  @GET
-  @Path("info")
-  public Response info(PackageCommand pc) {
-    PackageRunner pr = new PackageRunner();
-    return pr.query(pc);
-  }
-  
-}
diff --git a/agent/src/main/java/org/apache/hms/agent/rest/ShellManager.java b/agent/src/main/java/org/apache/hms/agent/rest/ShellManager.java
deleted file mode 100755
index 5074a4e..0000000
--- a/agent/src/main/java/org/apache/hms/agent/rest/ShellManager.java
+++ /dev/null
@@ -1,39 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.agent.rest;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.POST;
-import javax.ws.rs.Path;
-
-import org.apache.hms.common.entity.RestSource;
-import org.apache.hms.common.entity.ScriptCommand;
-import org.apache.hms.common.rest.Response;
-
-@Path("shell")
-public class ShellManager extends RestSource {
-
-  @POST
-  @Path("run")
-  public Response run(ScriptCommand script) {
-    org.apache.hms.agent.dispatcher.ShellRunner runner = new org.apache.hms.agent.dispatcher.ShellRunner();
-    Response r = runner.run(script);
-    return r;
-  }
-}
diff --git a/agent/src/main/python/ambari_agent/ActionQueue.py b/agent/src/main/python/ambari_agent/ActionQueue.py
new file mode 100644
index 0000000..231fa82
--- /dev/null
+++ b/agent/src/main/python/ambari_agent/ActionQueue.py
@@ -0,0 +1,277 @@
+#!/usr/bin/env python2.6
+
+'''
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+'''
+
+import logging
+import traceback
+import logging.handlers
+import Queue
+import threading
+import AmbariConfig
+from shell import shellRunner
+from FileUtil import writeFile, createStructure, deleteStructure, getFilePath, appendToFile
+import json
+import os
+import time
+import subprocess
+import copy
+
+logger = logging.getLogger()
+installScriptHash = -1
+
+class ActionQueue(threading.Thread):
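+  # Shared state: q buffers actions received from the controller; r collects
+  # completed results until they are drained via result().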
+  global q, r, clusterId, clusterDefinitionRevision
+  q = Queue.Queue()
+  r = Queue.Queue()
+  clusterId = 'unknown'
+  clusterDefinitionRevision = 0
+
+  def __init__(self, config):
+    global clusterId, clusterDefinitionRevision 
+    super(ActionQueue, self).__init__()
+    self.config = config
+    self.sh = shellRunner()
+    self._stop = threading.Event()
+    self.maxRetries = config.getint('command', 'maxretries') 
+    self.sleepInterval = config.getint('command', 'sleepBetweenRetries')
+
+  def stop(self):
+    self._stop.set()
+
+  def stopped(self):
+    return self._stop.isSet()
+
+  #For unittest
+  def getshellinstance(self):
+    return self.sh
+
+  def put(self, response):
+    if 'actions' in response:
+      actions = response['actions']
+      logger.debug(actions)
+      # for the servers, take a diff of what's running, and what the controller
+      # asked the agent to start. Kill all those servers that the controller
+      # didn't ask us to start
+      sh = shellRunner()
+      runningServers = sh.getServerTracker()
+
+      # get the list of servers the controller wants running
+      serversToRun = {}
+      for action in actions:
+        if action['kind'] == 'START_ACTION':
+          processKey = sh.getServerKey(action['clusterId'],action['clusterDefinitionRevision'],
+            action['component'], action['role'])
+          serversToRun[processKey] = 1
+
+      # create stop actions for the servers that the controller wants stopped
+      for server in runningServers.keys():
+        if server not in serversToRun:
+          sh.stopProcess(server)
+      # now put all the actions in the queue. The ordering is important (we stopped
+      # all unneeded servers first)
+      for action in actions:
+        q.put(action)
+
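+  # Worker loop: drain the action queue, dispatch each action to its handler
+  # through the switches table below, and retry failures up to maxRetries.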
+  def run(self):
+    global clusterId, clusterDefinitionRevision
+    while not self.stopped():
+      while not q.empty():
+        action = q.get()
+        switches = {
+                     'START_ACTION'              : self.startAction,
+                     'RUN_ACTION'                : self.runAction,
+                     'CREATE_STRUCTURE_ACTION'   : self.createStructureAction,
+                     'DELETE_STRUCTURE_ACTION'   : self.deleteStructureAction,
+                     'WRITE_FILE_ACTION'         : self.writeFileAction,
+                     'INSTALL_AND_CONFIG_ACTION' : self.installAndConfigAction,
+                     'NO_OP_ACTION'              : self.noOpAction
+                   }
+        
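+        # Re-run the handler until it reports exit code 0 or retryCount
+        # exceeds maxRetries, sleeping sleepInterval seconds between tries.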
+        exitCode = 1
+        retryCount = 1
+        while (exitCode != 0 and retryCount <= self.maxRetries):
+          result={}
+          try:
+            #pass a copy of action since we don't want anything to change in the 
+            #action dict 
+            actionCopy = copy.copy(action)
+            result = switches.get(action['kind'], self.unknownAction)(actionCopy)
+            if ('commandResult' in result):
+              commandResult = result['commandResult']
+              exitCode = commandResult['exitCode']
+              if (exitCode == 0):
+                break
+              else:
+                logger.warn(str(action) + " exited with code " + str(exitCode))
+            else:
+              #Really, no commandResult? Is this possible?
+              #TODO: check
+              exitCode = 0
+              break
+          except Exception, err:
+            traceback.print_exc()  
+            logger.warn(err)
+            if ('commandResult' in result):
+              commandResult = result['commandResult']
+              if ('exitCode' in commandResult):
+                exitCode = commandResult['exitCode']
+              else:
+                exitCode = 1
+            else:
+              result['commandResult'] = {'exitCode': 1, 'output':"", 'error':""}
+
+          #retry in some time  
+          logger.warn("Retrying %s in %d seconds" % (str(action),self.sleepInterval))
+          time.sleep(self.sleepInterval)
+          retryCount += 1
+          
+        if (exitCode != 0):
+          result['exitCode']=exitCode
+          result['retryActionCount'] = retryCount - 1
+        else:
+          result['retryActionCount'] = retryCount
+        # Update the result
+        r.put(result)
+      if not self.stopped():
+        time.sleep(5)
+
+  # Store action result to agent response queue
+  def result(self):
+    result = []
+    while not r.empty():
+      result.append(r.get())
+    return result
+
+  # Generate default action response
+  def genResult(self, action):
+    result={}
+    if (action['kind'] == 'INSTALL_AND_CONFIG_ACTION' or action['kind'] == 'NO_OP_ACTION'):
+      result = {
+               'id'                        : action['id'],
+               'kind'                      : action['kind'],
+             }
+    else:
+      result = { 
+               'id'                        : action['id'],
+               'clusterId'                 : action['clusterId'],
+               'kind'                      : action['kind'],
+               'clusterDefinitionRevision' : action['clusterDefinitionRevision'],
+               'componentName'             : action['component'],
+               'role'                      : action['role']
+             }
+    return result
+
+  # Run start action, start a server process and
+  # track the liveness of the children process
+  def startAction(self, action):
+    result = self.genResult(action)
+    return self.sh.startProcess(action['clusterId'],
+      action['clusterDefinitionRevision'],
+      action['component'], 
+      action['role'], 
+      action['command'], 
+      action['user'], result)
+
+  # Write file action
+  def writeFileAction(self, action, fileName=""):
+    result = self.genResult(action)
+    return writeFile(action, result, fileName)
+
+  # get the install file
+  def getInstallFilename(self,id):
+    return "ambari-install-file-"+id
+
+  # Install and configure action
+  def installAndConfigAction(self, action):
+    global installScriptHash
+    r=self.genResult(action)
+    w = self.writeFileAction(action,self.getInstallFilename(action['id']))
+    commandResult = {}
+    if w['exitCode']!=0:
+      commandResult['error'] = w['error']
+      commandResult['exitCode'] = w['exitCode']
+      r['commandResult'] = commandResult
+      return r
+     
+    if 'command' not in action:
+      # this is hardcoded to do puppet specific stuff for now
+      # append the content of the puppet file to the file written above
+      filepath = getFilePath(action,self.getInstallFilename(action['id'])) 
+      logger.info("File path for puppet top level script: " + filepath)
+      p = self.sh.run(['/bin/cat',AmbariConfig.config.get('puppet','driver')])
+      if p['exitCode']!=0:
+        commandResult['error'] = p['error']
+        commandResult['exitCode'] = p['exitCode']
+        r['commandResult'] = commandResult
+        return r
+      logger.debug("The contents of the static file " + p['output'])
+      appendToFile(p['output'],filepath) 
+      arr = [AmbariConfig.config.get('puppet','commandpath') , filepath]
+      logger.debug(arr)
+      action['command'] = arr
+    logger.debug(action['command'])
+    commandResult = self.sh.run(action['command'])
+    logger.debug("PUPPET COMMAND OUTPUT: " + commandResult['output'])
+    logger.debug("PUPPET COMMAND ERROR: " + commandResult['error'])
+    if commandResult['exitCode'] == 0:
+      installScriptHash = action['id'] 
+    r['commandResult'] = commandResult
+    return r
+
+  # Run command action
+  def runAction(self, action):
+    result = self.genResult(action)
+    return self.sh.runAction(action['clusterId'], 
+      action['component'],
+      action['role'],
+      action['user'], 
+      action['command'], 
+      action['cleanUpCommand'], result)
+
+  # Create directory structure for cluster
+  def createStructureAction(self, action):
+    result = self.genResult(action)
+    result['exitCode'] = 0
+    return createStructure(action, result)
+
+  # Delete directory structure for cluster
+  def deleteStructureAction(self, action):
+    result = self.genResult(action)
+    result['exitCode'] = 0
+    return deleteStructure(action, result)
+
+  def noOpAction(self, action):
+    r = {'id' : action['id']}
+    return r
+
+  # Handle unknown action
+  def unknownAction(self, action):
+    logger.error('Unknown action: %s' % action['id'])
+    result = { 'id': action['id'] }
+    return result
+
+  # Discover agent idle state
+  def isIdle(self):
+    return q.empty()
+
+  # Get the hash of the script currently used for install/config
+  def getInstallScriptHash(self):
+    return installScriptHash
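
ActionQueue.put() reconciles the set of running servers against the START_ACTIONs in a controller response, then enqueues every action for the run() loop to dispatch. A minimal sketch of feeding it one response by hand; the payload below is hypothetical and only carries the fields the code above reads:

  from ActionQueue import ActionQueue
  import AmbariConfig

  queue = ActionQueue(AmbariConfig.config)
  queue.start()
  response = {
    'actions': [
      { 'kind': 'START_ACTION', 'id': 'action-001',
        'clusterId': 'cluster-1', 'clusterDefinitionRevision': 1,
        'component': 'hdfs', 'role': 'namenode', 'user': 'hdfs',
        'command': {'script': 'print "started"', 'param': []} }
    ]
  }
  queue.put(response)   # stops unwanted servers, then enqueues the actions
  print queue.result()  # drains finished results for the next heartbeat
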
diff --git a/agent/src/main/python/ambari_agent/ActionResults.py b/agent/src/main/python/ambari_agent/ActionResults.py
new file mode 100644
index 0000000..7603fa1
--- /dev/null
+++ b/agent/src/main/python/ambari_agent/ActionResults.py
@@ -0,0 +1,52 @@
+#!/usr/bin/env python2.6
+
+'''
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+'''
+
+import logging
+import logging.handlers
+import Queue
+import ActionQueue
+
+logger = logging.getLogger()
+
+class ActionResults:
+
+  # Build action results list from memory queue
+  def build(self):
+    results = []
+    while not ActionQueue.r.empty():
+      result = { 
+                 'clusterId': 'unknown',
+                 'id' : 'action-001',
+                 'kind' : 'STOP_ACTION',
+                 'commandResults' : [],
+                 'cleanUpCommandResults' : [],
+                 'serverName' : 'hadoop.datanode'
+               }
+      results.append(result)
+    logger.info(results)
+    return results
+
+def main(argv=None):
+  ar = ActionResults()
+  print ar.build()
+
+if __name__ == '__main__':
+  main()
diff --git a/agent/src/main/python/ambari_agent/AmbariConfig.py b/agent/src/main/python/ambari_agent/AmbariConfig.py
new file mode 100644
index 0000000..f9b4a86
--- /dev/null
+++ b/agent/src/main/python/ambari_agent/AmbariConfig.py
@@ -0,0 +1,65 @@
+#!/usr/bin/env python2.6
+
+'''
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+'''
+
+import logging
+import logging.handlers
+import ConfigParser
+import StringIO
+
+config = ConfigParser.RawConfigParser()
+content = """
+[controller]
+url=http://localhost:4080
+user=controller
+password=controller
+
+[agent]
+prefix=/tmp/ambari
+
+[stack]
+installprefix=/var/ambari
+
+[puppet]
+prefix=/homes/ddas/puppet
+commandpath=/usr/local/bin/puppet apply --modulepath /home/puppet/puppet-ambari/modules
+driver=/home/puppet/puppet-ambari/manifests/site.pp
+
+[command]
+maxretries=2
+sleepBetweenRetries=1
+
+"""
+s = StringIO.StringIO(content)
+config.readfp(s)
+
+class AmbariConfig:
+  def getConfig(self):
+    global config
+    return config
+
+def setConfig(customConfig):
+  global config
+  config = customConfig
+
+def main():
+  print config
+
+if __name__ == "__main__":
+  main()
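
The configuration above only provides compiled-in defaults. A minimal sketch of overriding them from an ini file, mirroring what the agent's main() does; the path is illustrative:

  import AmbariConfig

  config = AmbariConfig.config            # start from the built-in defaults
  config.read('/etc/ambari/ambari.ini')   # a no-op if the file does not exist
  AmbariConfig.setConfig(config)
  print config.get('controller', 'url')
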
diff --git a/agent/src/main/python/ambari_agent/Controller.py b/agent/src/main/python/ambari_agent/Controller.py
new file mode 100755
index 0000000..0f0e2d2
--- /dev/null
+++ b/agent/src/main/python/ambari_agent/Controller.py
@@ -0,0 +1,119 @@
+#!/usr/bin/env python2.6
+
+'''
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+'''
+
+import logging
+import logging.handlers
+import signal
+import json
+import socket
+import sys, traceback
+import time
+import threading
+import urllib2
+from urllib2 import Request, urlopen, URLError
+import AmbariConfig
+from Heartbeat import Heartbeat
+from ActionQueue import ActionQueue
+from optparse import OptionParser
+
+logger = logging.getLogger()
+
+class Controller(threading.Thread):
+
+  def __init__(self, config):
+    threading.Thread.__init__(self)
+    logger.debug('Initializing Controller RPC thread.')
+    self.lock = threading.Lock()
+    self.safeMode = True
+    self.credential = None
+    self.config = config
+    #Disabled security until we have fix for AMBARI-157
+    #if(config.get('controller', 'user')!=None and config.get('controller', 'password')!=None):
+    #  self.credential = { 'user' : config.get('controller', 'user'),
+    #                      'password' : config.get('controller', 'password')
+    #  }
+    self.url = config.get('controller', 'url') + '/agent/controller/heartbeat/' + socket.gethostname()
+
+  def start(self):
+    self.actionQueue = ActionQueue(self.config)
+    self.actionQueue.start()
+    self.heartbeat = Heartbeat(self.actionQueue)
+
+  def __del__(self):
+    logger.info("Controller connection disconnected.")
+
+  def run(self):
+    id='-1'
+    if self.credential!=None:
+      auth_handler = urllib2.HTTPBasicAuthHandler()
+      auth_handler.add_password(realm="Controller",
+                                uri=self.url,
+                                user=self.credential['user'],
+                                passwd=self.credential['password'])
+      opener = urllib2.build_opener(auth_handler)
+      urllib2.install_opener(opener)
+    retry=False
+    firstTime=True
+    while True:
+      try:
+        if retry==False:
+          data = json.dumps(self.heartbeat.build(id))
+          logger.debug(data)
+        req = urllib2.Request(self.url, data, {'Content-Type': 'application/json'})
+        f = urllib2.urlopen(req)
+        response = f.read()
+        f.close()
+        data = json.loads(response)
+        id=int(data['responseId'])
+        self.actionQueue.put(data)
+        if retry==True or firstTime==True:
+          logger.info("Controller connection established")
+          firstTime=False
+        retry=False
+      except Exception, err:
+        retry=True
+        if "code" in err:
+          logger.error(err.code)
+        else:
+          logger.error("Unable to connect to: "+self.url,exc_info=True)
+      if self.actionQueue.isIdle():
+        time.sleep(30)
+      else:
+        time.sleep(1)
+
+def main(argv=None):
+  # Allow Ctrl-C
+  signal.signal(signal.SIGINT, signal.SIG_DFL)
+
+  logger.setLevel(logging.INFO)
+  formatter = logging.Formatter("%(asctime)s %(filename)s:%(lineno)d - %(message)s")
+  stream_handler = logging.StreamHandler()
+  stream_handler.setFormatter(formatter)
+  logger.addHandler(stream_handler)
+
+  logger.info('Starting Controller RPC Thread: %s' % ' '.join(sys.argv))
+
+  config = AmbariConfig.config
+  controller = Controller(config)
+  controller.start()
+  controller.run()
+
+if __name__ == '__main__':
+  main()
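
The heartbeat exchange in run() is plain JSON over HTTP POST. A sketch of one round trip performed by hand, reusing the URL layout from Controller.__init__; the hostname is illustrative:

  import json, urllib2
  import AmbariConfig
  from ActionQueue import ActionQueue
  from Heartbeat import Heartbeat

  url = AmbariConfig.config.get('controller', 'url') + \
        '/agent/controller/heartbeat/node1.example.com'
  heartbeat = Heartbeat(ActionQueue(AmbariConfig.config))
  req = urllib2.Request(url, json.dumps(heartbeat.build('-1')),
                        {'Content-Type': 'application/json'})
  data = json.loads(urllib2.urlopen(req).read())
  print data['responseId'], data.get('actions', [])
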
diff --git a/agent/src/main/python/hms_agent/DaemonHandler.py b/agent/src/main/python/ambari_agent/DaemonHandler.py
similarity index 97%
rename from agent/src/main/python/hms_agent/DaemonHandler.py
rename to agent/src/main/python/ambari_agent/DaemonHandler.py
index 2701b95..b726621 100755
--- a/agent/src/main/python/hms_agent/DaemonHandler.py
+++ b/agent/src/main/python/ambari_agent/DaemonHandler.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python2.6
 
 '''
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/agent/src/main/python/ambari_agent/FileUtil.py b/agent/src/main/python/ambari_agent/FileUtil.py
new file mode 100644
index 0000000..f24046b
--- /dev/null
+++ b/agent/src/main/python/ambari_agent/FileUtil.py
@@ -0,0 +1,185 @@
+#!/usr/bin/env python2.6
+
+'''
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+'''
+
+from pwd import getpwnam
+from grp import getgrnam
+import logging
+import logging.handlers
+import getpass
+import os, errno
+import sys, traceback
+import ConfigParser
+import shutil
+import StringIO
+import AmbariConfig
+
+logger = logging.getLogger()
+
+def getFilePath(action, fileName=""):
+  #Change the method signature to take the individual action fields
+  pathComp=""
+  if 'clusterId' in action:
+    pathComp = action['clusterId']
+  if 'role' in action:
+    pathComp = pathComp + "-" + action['role'] 
+  path = os.path.join(AmbariConfig.config.get('agent','prefix'),
+                      "clusters", 
+                      pathComp)
+  fullPathName=""
+  if fileName != "":
+    fullPathName=os.path.join(path, fileName)
+  else:
+    fileInfo = action['file']
+    fullPathName=os.path.join(path, fileInfo['path'])
+  return fullPathName
+  
+def appendToFile(data,absolutePath):
+  f = open(absolutePath, 'a')
+  f.write(data)
+  f.close()
+
+def writeFile(action, result, fileName=""):
+  fileInfo = action['file']
+  pathComp=""
+  if 'clusterId' in action:
+    pathComp = action['clusterId']
+  if 'role' in action:
+    pathComp = pathComp + "-" + action['role'] 
+  try:
+    path = os.path.join(AmbariConfig.config.get('agent','prefix'),
+                        "clusters", 
+                        pathComp)
+    user=getpass.getuser()
+    if 'owner' in fileInfo:
+      user=fileInfo['owner']
+    group=os.getgid()
+    if 'group' in fileInfo:
+      group=fileInfo['group']
+    fullPathName=""
+    if fileName != "":
+      fullPathName=os.path.join(path, fileName)
+    else:
+      fullPathName=os.path.join(path, fileInfo['path'])
+    logger.debug("path in writeFile: %s" % fullPathName)
+    content=fileInfo['data']
+    try:
+      if isinstance(user, int)!=True:
+        user=getpwnam(user)[2]
+      if isinstance(group, int)!=True:
+        group=getgrnam(group)[2]
+    except Exception:
+      logger.warn("can not find user uid/gid: (%s/%s) for writing %s" % (user, group, fullPathName))
+    # bind defaults first so permission/umask are always set, even when the
+    # keys are present but carry a None value
+    permission=0750
+    if 'permission' in fileInfo and fileInfo['permission'] is not None:
+      permission=fileInfo['permission']
+    oldMask = os.umask(0)
+    umask=oldMask
+    if 'umask' in fileInfo and fileInfo['umask'] is not None:
+      umask=int(fileInfo['umask'])
+    os.umask(int(umask))
+    prefix = os.path.dirname(fullPathName)
+    try:
+      os.makedirs(prefix)
+    except OSError as err:
+      if err.errno == errno.EEXIST:
+        pass
+      else:
+        raise
+    f = open(fullPathName, 'w')
+    f.write(content)
+    f.close()
+    if os.getuid()==0:
+      os.chmod(fullPathName, permission)
+      os.chown(fullPathName, user, group)
+    os.umask(oldMask)
+    result['exitCode'] = 0
+  except Exception, err:
+    traceback.print_exc()
+    result['exitCode'] = 1
+    result['error'] = traceback.format_exc()
+  return result
+
+def createStructure(action, result):
+  try:
+    workdir = action['workDirComponent']
+    path = AmbariConfig.config.get('agent','prefix')+"/clusters/"+workdir
+    shutil.rmtree(path, 1)
+    os.makedirs(path+"/stack")
+    os.makedirs(path+"/logs")
+    os.makedirs(path+"/data")
+    os.makedirs(path+"/pkgs")
+    os.makedirs(path+"/config")
+    result['exitCode'] = 0
+  except Exception, err:
+    traceback.print_exc()
+    result['exitCode'] = 1
+    result['error'] = traceback.format_exc()
+  return result
+
+def deleteStructure(action, result):
+  try:
+    workdir = action['workDirComponent']
+    path = AmbariConfig.config.get('agent','prefix')+"/clusters/"+workdir
+    if os.path.exists(path):
+      shutil.rmtree(path)
+    result['exitCode'] = 0
+  except Exception, err:
+    result['exitCode'] = 1
+    result['error'] = traceback.format_exc()
+  return result
+
+def main():
+
+  action = { 'clusterId' : 'abc', 'role' : 'hdfs' }
+  result = {}
+  print createStructure(action, result)
+
+  configFile = {
+    "data"       : "test", 
+    "owner"      : os.getuid(), 
+    "group"      : os.getgid() , 
+    "permission" : 0700, 
+    "path"       : "/tmp/ambari_file_test/_file_write_test", 
+    "umask"      : 022 
+  }
+  action = { 'file' : configFile }
+  result = { }
+  print writeFile(action, result)
+
+  configFile = { 
+    "data"       : "test", 
+    "owner"      : "eyang", 
+    "group"      : "staff", 
+    "permission" : "0700", 
+    "path"       : "/tmp/ambari_file_test/_file_write_test", 
+    "umask"      : "022" 
+  }
+  result = { }
+  action = { 'file' : configFile }
+  print writeFile(action, result)
+
+  print deleteStructure(action, result)
+
+if __name__ == "__main__":
+  main()
diff --git a/agent/src/main/python/ambari_agent/Hardware.py b/agent/src/main/python/ambari_agent/Hardware.py
new file mode 100644
index 0000000..648aabc
--- /dev/null
+++ b/agent/src/main/python/ambari_agent/Hardware.py
@@ -0,0 +1,93 @@
+#!/usr/bin/env python2.6
+
+'''
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+'''
+
+from shell import shellRunner
+import multiprocessing
+import platform
+
+class Hardware:
+  def __init__(self):
+    self.scanOS()
+    self.scanDisk()
+    self.scanRam()
+    self.scanCpu()
+    self.scanNet()
+    self.hardware = { 'coreCount' : self.cpuCount,
+                      'cpuSpeed' : self.cpuSpeed,
+                      'cpuFlag' : self.cpuFlag,
+                      'diskCount' : self.diskCount,
+                      'netSpeed' : self.netSpeed,
+                      'ramSize' : self.ramSize
+                    }
+
+  def get(self):
+    return self.hardware
+
+  def scanDisk(self):
+    self.diskCount = 0
+
+  def scanRam(self):
+    self.ramSize = 0
+
+  def scanCpu(self):
+    self.cpuCount = multiprocessing.cpu_count()
+    self.cpuSpeed = 0
+    self.cpuFlag = ""
+
+  def scanNet(self):
+    switches = {
+                'Linux': self.ethtool,
+                'Darwin': self.ifconfig
+               }
+    switches.get(self.os, self.ethtool)()
+
+  def ethtool(self):
+    sh = shellRunner()
+    script = [ 'ethtool', 'eth0', '|', 'grep', 'Speed:', '|', 'sed', "'s/\s*Speed:\s*//'", '|', 'sed', "'s/Mb\/s//'" ]
+    result = sh.run(script)
+    if "ethtool: not found\n" in result['error']:
+      # ethtool not installed, assume a speed of 0Mb/s
+      self.netSpeed = 0
+    else:
+      self.netSpeed = int(result['output'].rstrip())
+
+  def ifconfig(self):
+    sh = shellRunner()
+    script = [ 'ifconfig', 'en0', '|', 'grep', 'media:', '|', 'sed', "'s/.*(//'", '|', 'sed', "'s/ .*//'", '|', 'sed', "'s/baseT//'" ]
+    result = sh.run(script)
+    if "none" in result['output']:
+      # No ethernet detected, detect airport
+      script = [ '/System/Library/PrivateFrameworks/Apple80211.framework/Versions/A/Resources/airport', '-I', '|', 'grep', 'lastTxRate:', '|', 'sed', "'s/.*: //'", '|', 'sed', "'s/$//'"]
+      result = sh.run(script)
+    try:
+      self.netSpeed = int(result['output'].rstrip())
+    except Exception:
+      self.netSpeed = 0
+
+  def scanOS(self):
+    self.arch = platform.processor()
+    self.os = platform.system()
+
+def main(argv=None):
+  hardware = Hardware()
+  print hardware.get()
+
+if __name__ == '__main__':
+  main()
diff --git a/agent/src/main/python/ambari_agent/Heartbeat.py b/agent/src/main/python/ambari_agent/Heartbeat.py
new file mode 100644
index 0000000..1b22249
--- /dev/null
+++ b/agent/src/main/python/ambari_agent/Heartbeat.py
@@ -0,0 +1,62 @@
+#!/usr/bin/env python2.6
+
+'''
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+'''
+
+import json
+from Hardware import Hardware
+from ActionQueue import ActionQueue
+from ServerStatus import ServerStatus
+import socket
+import time
+
+firstContact = True
+class Heartbeat:
+
+  def __init__(self, actionQueue):
+    self.actionQueue = actionQueue
+    self.hardware = Hardware()
+
+  def build(self, id='-1'):
+    global clusterId, clusterDefinitionRevision, firstContact
+    serverStatus = ServerStatus()
+    timestamp = int(time.time()*1000)
+    queueResult = self.actionQueue.result()
+    installedRoleStates = serverStatus.build()
+    heartbeat = { 'responseId'        : int(id),
+                  'timestamp'         : timestamp,
+                  'hostname'          : socket.gethostname(),
+                  'hardwareProfile'   : self.hardware.get(),
+                  'idle'              : self.actionQueue.isIdle(),
+                  'installScriptHash' : self.actionQueue.getInstallScriptHash(),
+                  'firstContact'      : firstContact
+                }
+    if len(queueResult)!=0:
+      heartbeat['actionResults'] = queueResult
+    if len(installedRoleStates)!=0:
+      heartbeat['installedRoleStates'] = installedRoleStates
+    firstContact = False
+    return heartbeat
+
+def main(argv=None):
+  actionQueue = ActionQueue()
+  heartbeat = Heartbeat(actionQueue)
+  print json.dumps(heartbeat.build())
+
+if __name__ == '__main__':
+  main()
diff --git a/agent/src/main/python/hms_agent/PackageHandler.py b/agent/src/main/python/ambari_agent/PackageHandler.py
similarity index 98%
rename from agent/src/main/python/hms_agent/PackageHandler.py
rename to agent/src/main/python/ambari_agent/PackageHandler.py
index b44275a..be05575 100755
--- a/agent/src/main/python/hms_agent/PackageHandler.py
+++ b/agent/src/main/python/ambari_agent/PackageHandler.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python2.6
 
 '''
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/agent/src/main/python/hms_agent/Runner.py b/agent/src/main/python/ambari_agent/Runner.py
similarity index 98%
rename from agent/src/main/python/hms_agent/Runner.py
rename to agent/src/main/python/ambari_agent/Runner.py
index a7bcaa0..85d1794 100644
--- a/agent/src/main/python/hms_agent/Runner.py
+++ b/agent/src/main/python/ambari_agent/Runner.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python2.6
 
 '''
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/agent/src/main/python/ambari_agent/ServerStatus.py b/agent/src/main/python/ambari_agent/ServerStatus.py
new file mode 100644
index 0000000..53a0a9a
--- /dev/null
+++ b/agent/src/main/python/ambari_agent/ServerStatus.py
@@ -0,0 +1,50 @@
+#!/usr/bin/env python2.6
+
+'''
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+'''
+
+from shell import shellRunner
+import logging
+import logging.handlers
+
+logger = logging.getLogger()
+
+class ServerStatus:
+  def build(self):
+    sh = shellRunner()
+    roleStates = []
+    servers = sh.getServerTracker()
+    for server in servers:
+      (clusterId, clusterDefinitionRevision, component, role) = server.split("/")
+      result = {
+                 'clusterId'                 : clusterId,
+                 'clusterDefinitionRevision' : clusterDefinitionRevision,
+                 'componentName'             : component,
+                 'roleName'                  : role,
+                 'serverStatus'              : 'STARTED'
+               }
+      roleStates.append(result)
+    return roleStates
+
+def main(argv=None):
+  serverStatus = ServerStatus()
+  print serverStatus.build()
+
+if __name__ == '__main__':
+  main()
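
Each tracked server is identified by the slash-delimited key assembled in shell.getServerKey(); build() above simply splits that key back into its parts. A small sketch of the round trip, with illustrative identifiers:

  from shell import shellRunner

  sh = shellRunner()
  key = sh.getServerKey('cluster-1', 1, 'hdfs', 'namenode')
  print key                                   # cluster-1/1/hdfs/namenode
  clusterId, rev, component, role = key.split('/')
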
diff --git a/agent/src/main/python/hms_agent/ShellHandler.py b/agent/src/main/python/ambari_agent/ShellHandler.py
similarity index 97%
rename from agent/src/main/python/hms_agent/ShellHandler.py
rename to agent/src/main/python/ambari_agent/ShellHandler.py
index 23e3968..69e599c 100755
--- a/agent/src/main/python/hms_agent/ShellHandler.py
+++ b/agent/src/main/python/ambari_agent/ShellHandler.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python2.6
 
 '''
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/agent/src/main/python/hms_agent/Zeroconf.py b/agent/src/main/python/ambari_agent/Zeroconf.py
similarity index 84%
rename from agent/src/main/python/hms_agent/Zeroconf.py
rename to agent/src/main/python/ambari_agent/Zeroconf.py
index 438554d..365f209 100644
--- a/agent/src/main/python/hms_agent/Zeroconf.py
+++ b/agent/src/main/python/ambari_agent/Zeroconf.py
@@ -1,4 +1,19 @@
-#!/usr/bin/env python
+#!/usr/bin/env python2.6
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
 
 import logging
 import logging.handlers
diff --git a/agent/src/main/python/hms_agent/ZooKeeperCommunicator.py b/agent/src/main/python/ambari_agent/ZooKeeperCommunicator.py
similarity index 99%
rename from agent/src/main/python/hms_agent/ZooKeeperCommunicator.py
rename to agent/src/main/python/ambari_agent/ZooKeeperCommunicator.py
index e66e78e..576a4a4 100755
--- a/agent/src/main/python/hms_agent/ZooKeeperCommunicator.py
+++ b/agent/src/main/python/ambari_agent/ZooKeeperCommunicator.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python2.6
 
 '''
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/agent/src/main/python/hms_agent/__init__.py b/agent/src/main/python/ambari_agent/__init__.py
similarity index 82%
rename from agent/src/main/python/hms_agent/__init__.py
rename to agent/src/main/python/ambari_agent/__init__.py
index e7c2fdf..3bfb534 100755
--- a/agent/src/main/python/hms_agent/__init__.py
+++ b/agent/src/main/python/ambari_agent/__init__.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python2.6
 """
 Licensed to the Apache Software Foundation (ASF) under one
 or more contributor license agreements.  See the NOTICE file
@@ -16,7 +16,7 @@
 See the License for the specific language governing permissions and
 limitations under the License.
 
-Hadoop Management System Agent
+Ambari Agent
 
 """
 
@@ -28,17 +28,11 @@
     "Kan Zhang <kanzhangmail@yahoo.com>"
 ]
 __license__ = "Apache License v2.0"
-__contributors__ = "see http://incubator.apache.org/hms/contributors"
+__contributors__ = "see http://incubator.apache.org/ambari/contributors"
 
 import logging
 import logging.handlers
-import web
-import mimeparse
-import mimerender
-import simplejson
-import bencode
 import threading
-import zookeeper
 import sys
 import time
 import signal
diff --git a/agent/src/main/python/hms_agent/createDaemon.py b/agent/src/main/python/ambari_agent/createDaemon.py
similarity index 92%
rename from agent/src/main/python/hms_agent/createDaemon.py
rename to agent/src/main/python/ambari_agent/createDaemon.py
index 0fb2c3d..764211c 100755
--- a/agent/src/main/python/hms_agent/createDaemon.py
+++ b/agent/src/main/python/ambari_agent/createDaemon.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python2.6
 
 '''
 Licensed to the Apache Software Foundation (ASF) under one
@@ -76,7 +76,7 @@
    except OSError, e:
       raise Exception, "%s [%d]" % (e.strerror, e.errno)
 
-   if (pid == 0):	# The first child.
+   if (pid == 0):       # The first child.
       # To become the session leader of this new session and the process group
       # leader of the new process group, we call os.setsid().  The process is
       # also guaranteed not to have a controlling terminal.
@@ -120,11 +120,11 @@
          # based systems).  This second fork guarantees that the child is no
          # longer a session leader, preventing the daemon from ever acquiring
          # a controlling terminal.
-         pid = os.fork()	# Fork a second child.
+         pid = os.fork()        # Fork a second child.
       except OSError, e:
          raise Exception, "%s [%d]" % (e.strerror, e.errno)
 
-      if (pid == 0):	# The second child.
+      if (pid == 0):    # The second child.
          # Since the current working directory may be a mounted filesystem, we
          # avoid the issue of not being able to unmount the filesystem at
          # shutdown time by changing it to the root directory.
@@ -134,7 +134,7 @@
          os.umask(UMASK)
       else:
          # exit() or _exit()?  See below.
-         os._exit(0)	# Exit parent (the first child) of the second child.
+         os._exit(0)    # Exit parent (the first child) of the second child.
    else:
       # exit() or _exit()?
       # _exit is like exit(), but it doesn't call any functions registered
@@ -143,7 +143,7 @@
       # streams to be flushed twice and any temporary files may be unexpectedly
       # removed.  It's therefore recommended that child branches of a fork()
       # and the parent branch(es) of a daemon use _exit().
-      os._exit(0)	# Exit parent of the first child.
+      os._exit(0)       # Exit parent of the first child.
 
    # Close all open file descriptors.  This prevents the child from keeping
    # open any file descriptors inherited from the parent.  There is a variety
@@ -171,7 +171,7 @@
    # that can be opened by this process.  If there is not limit on the
    # resource, use the default value.
    #
-   import resource		# Resource usage information.
+   import resource              # Resource usage information.
    maxfd = resource.getrlimit(resource.RLIMIT_NOFILE)[1]
    if (maxfd == resource.RLIM_INFINITY):
       maxfd = MAXFD
@@ -180,7 +180,7 @@
    for fd in range(0, maxfd):
       try:
          os.close(fd)
-      except OSError:	# ERROR, fd wasn't open to begin with (ignored)
+      except OSError:   # ERROR, fd wasn't open to begin with (ignored)
          pass
 
    # Redirect the standard I/O file descriptors to the specified file.  Since
@@ -190,11 +190,11 @@
 
    # This call to open is guaranteed to return the lowest file descriptor,
    # which will be 0 (stdin), since it was closed above.
-   os.open(REDIRECT_TO, os.O_RDWR)	# standard input (0)
+   os.open(REDIRECT_TO, os.O_RDWR)      # standard input (0)
 
    # Duplicate standard input to standard output and standard error.
-   os.dup2(0, 1)			# standard output (1)
-   os.dup2(0, 2)			# standard error (2)
+   os.dup2(0, 1)                        # standard output (1)
+   os.dup2(0, 2)                        # standard error (2)
 
    return(0)
 
diff --git a/agent/src/main/python/hms_agent/daemon.py b/agent/src/main/python/ambari_agent/daemon.py
similarity index 97%
rename from agent/src/main/python/hms_agent/daemon.py
rename to agent/src/main/python/ambari_agent/daemon.py
index d90c5e6..31607f5 100755
--- a/agent/src/main/python/hms_agent/daemon.py
+++ b/agent/src/main/python/ambari_agent/daemon.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python2.6
 
 '''
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/agent/src/main/python/ambari_agent/main.py b/agent/src/main/python/ambari_agent/main.py
new file mode 100755
index 0000000..a157db3
--- /dev/null
+++ b/agent/src/main/python/ambari_agent/main.py
@@ -0,0 +1,143 @@
+#!/usr/bin/env python2.6
+
+'''
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+'''
+
+import logging
+import logging.handlers
+import code
+import signal
+import sys, traceback
+import os
+import time
+import ConfigParser
+from createDaemon import createDaemon
+from Controller import Controller
+from shell import getTempFiles
+from shell import killstaleprocesses 
+import AmbariConfig
+
+logger = logging.getLogger()
+agentPid = os.getpid()
+
+if 'AMBARI_PID_DIR' in os.environ:
+  pidfile = os.environ['AMBARI_PID_DIR'] + "/ambari-agent.pid"
+else:
+  pidfile = "/var/run/ambari/ambari-agent.pid"
+
+if 'AMBARI_LOG_DIR' in os.environ:
+  logfile = os.environ['AMBARI_LOG_DIR'] + "/ambari-agent.log"
+else:
+  logfile = "/var/log/ambari/ambari-agent.log"
+
+def signal_handler(signum, frame):
+  #we want the handler to run only for the agent process and not
+  #for the children (e.g. namenode, etc.)
+  if (os.getpid() != agentPid):
+    os._exit(0)
+  logger.info('signal received, exiting.')
+  try:
+    os.unlink(pidfile)
+  except Exception:
+    logger.warn("Unable to remove: "+pidfile)
+    traceback.print_exc()
+
+  tempFiles = getTempFiles()
+  for tempFile in tempFiles:
+    if os.path.exists(tempFile):
+      try:
+        os.unlink(tempFile)
+      except Exception:
+        traceback.print_exc()
+        logger.warn("Unable to remove: "+tempFile)
+  os._exit(0)
+
+def debug(sig, frame):
+    """Interrupt running process, and provide a python prompt for
+    interactive debugging."""
+    d={'_frame':frame}         # Allow access to frame object.
+    d.update(frame.f_globals)  # Unless shadowed by global
+    d.update(frame.f_locals)
+
+    message  = "Signal recieved : entering python shell.\nTraceback:\n"
+    message += ''.join(traceback.format_stack(frame))
+    logger.info(message)
+
+def main():
+  global config
+  default_cfg = { 'agent' : { 'prefix' : '/home/ambari' } }
+  config = ConfigParser.RawConfigParser(default_cfg)
+  signal.signal(signal.SIGINT, signal_handler)
+  signal.signal(signal.SIGTERM, signal_handler)
+  signal.signal(signal.SIGUSR1, debug)
+  if (len(sys.argv) > 1) and sys.argv[1] == 'stop':
+    # stop existing Ambari agent
+    pid = -1
+    try:
+      f = open(pidfile, 'r')
+      pid = int(f.read())
+      f.close()
+      os.kill(pid, signal.SIGTERM)
+      time.sleep(5)
+      if os.path.exists(pidfile):
+        raise Exception("PID file still exists.")
+      os._exit(0)
+    except Exception, err:
+      # only escalate to SIGKILL if a pid was actually read; otherwise the
+      # open()/read() above failed and there is nothing to kill
+      if pid > 0:
+        os.kill(pid, signal.SIGKILL)
+      os._exit(1)
+
+  # Check if there is another instance running
+  if os.path.isfile(pidfile):
+    print("%s already exists, exiting" % pidfile)
+    sys.exit(1)
+  else:
+    # Daemonize current instance of Ambari Agent
+    #retCode = createDaemon()
+    pid = str(os.getpid())
+    file(pidfile, 'w').write(pid)
+
+
+  logger.setLevel(logging.INFO)
+  formatter = logging.Formatter("%(asctime)s %(filename)s:%(lineno)d - %(message)s")
+  rotateLog = logging.handlers.RotatingFileHandler(logfile, "a", 10000000, 10)
+  rotateLog.setFormatter(formatter)
+  logger.addHandler(rotateLog)
+  credential = None
+
+  # Check for ambari configuration file.
+  try:
+    config = AmbariConfig.config
+    if(os.path.exists('/etc/ambari/ambari.ini')):
+      config.read('/etc/ambari/ambari.ini')
+      AmbariConfig.setConfig(config)
+    else:
+      raise Exception("No config found, use default")
+  except Exception, err:
+    logger.warn(err)
+
+  killstaleprocesses()
+  logger.info("Connecting to controller at: "+config.get('controller', 'url'))
+
+  # Launch Controller communication
+  controller = Controller(config) 
+  controller.start()
+  controller.run()
+  logger.info("finished")
+    
+if __name__ == "__main__":
+  main()
diff --git a/agent/src/main/python/ambari_agent/shell.py b/agent/src/main/python/ambari_agent/shell.py
new file mode 100755
index 0000000..7554120
--- /dev/null
+++ b/agent/src/main/python/ambari_agent/shell.py
@@ -0,0 +1,279 @@
+#!/usr/bin/env python2.6
+
+'''
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+'''
+
+from pwd import getpwnam
+from grp import getgrnam
+import AmbariConfig
+import logging
+import logging.handlers
+import subprocess
+import os
+import tempfile
+import signal
+import sys
+import threading
+import time
+import traceback
+import shutil
+
+global serverTracker
+serverTracker = {}
+logger = logging.getLogger()
+
+threadLocal = threading.local()
+
+tempFiles = [] 
+def noteTempFile(filename):
+  tempFiles.append(filename)
+
+def getTempFiles():
+  return tempFiles
+
+def killstaleprocesses():
+  logger.info ("Killing stale processes")
+  prefix = AmbariConfig.config.get('stack','installprefix')
+  files = os.listdir(prefix)
+  for file in files:
+    if str(file).endswith(".pid"):
+      pid = str(file).split('.')[0]
+      killprocessgrp(int(pid))
+      os.unlink(os.path.join(prefix,file))
+  logger.info ("Killed stale processes")
+
+def killprocessgrp(pid):
+  try:
+    os.killpg(pid, signal.SIGTERM)
+    time.sleep(5)
+    try:
+      os.killpg(pid, signal.SIGKILL)
+    except:
+      logger.warn("Failed to send SIGKILL to PID %d. Process exited?" % (pid))
+  except:
+    logger.warn("Failed to kill PID %d" % (pid))      
+
+def changeUid():
+  try:
+    os.setuid(threadLocal.uid)
+  except Exception:
+    logger.warn("can not switch user for running command.")
+
+class shellRunner:
+  # Run any command
+  def run(self, script, user=None):
+    try:
+      if user!=None:
+        user=getpwnam(user)[2]
+      else:
+        user = os.getuid()
+      threadLocal.uid = user
+    except Exception:
+      logger.warn("can not switch user for RUN_COMMAND.")
+    code = 0
+    cmd = " "
+    cmd = cmd.join(script)
+    p = subprocess.Popen(cmd, preexec_fn=changeUid, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True, close_fds=True)
+    out, err = p.communicate()
+    code = p.wait()
+    logger.debug("Exitcode for %s is %d" % (cmd,code))
+    return {'exitCode': code, 'output': out, 'error': err}
+
+  # dispatch action types
+  def runAction(self, clusterId, component, role, user, command, cleanUpCommand, result):
+    oldDir = os.getcwd()
+    #TODO: handle this better. Don't like that it is doing a chdir for the main process
+    os.chdir(self.getWorkDir(clusterId, role))
+    oldUid = os.getuid()
+    try:
+      if user is not None:
+        user=getpwnam(user)[2]
+      else:
+        user = oldUid
+      threadLocal.uid = user
+    except Exception:
+      logger.warn("%s %s %s can not switch user for RUN_ACTION." % (clusterId, component, role))
+    code = 0
+    cmd = sys.executable
+    tempfilename = tempfile.mktemp()
+    tmp = open(tempfilename, 'w')
+    tmp.write(command['script'])
+    tmp.close()
+    cmd = "%s %s %s" % (cmd, tempfilename, " ".join(command['param']))
+    commandResult = {}
+    p = subprocess.Popen(cmd, preexec_fn=changeUid, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True, close_fds=True)
+    out, err = p.communicate()
+    code = p.wait()
+    if code != 0:
+      commandResult['output'] = out
+      commandResult['error'] = err
+    commandResult['exitCode'] = code
+    result['commandResult'] = commandResult
+    os.unlink(tempfilename)
+    if code != 0:
+      tempfilename = tempfile.mktemp()
+      tmp = open(tempfilename, 'w')
+      tmp.write(command['script'])
+      tmp.close()
+      cmd = sys.executable
+      cmd = "%s %s %s" % (cmd, tempfilename, " ".join(cleanUpCommand['param']))
+      cleanUpCode = 0
+      cleanUpResult = {}
+      p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True, close_fds=True)
+      out, err = p.communicate()
+      cleanUpCode = p.wait()
+      if cleanUpCode != 0:
+        cleanUpResult['output'] = out
+        cleanUpResult['error'] = err
+      cleanUpResult['exitCode'] = cleanUpCode
+      result['cleanUpResult'] = cleanUpResult
+      os.unlink(tempfilename)
+      os._exit(1)
+    try:
+      os.chdir(oldDir)
+    except Exception:
+      logger.warn("%s %s %s can not restore environment for RUN_ACTION." % (clusterId, component, role))
+    return result
+
+  # Start a process and persist its state
+  def startProcess(self, clusterId, clusterDefinitionRevision, component, role, script, user, result):
+    global serverTracker
+    oldDir = os.getcwd()
+    try:
+      os.chdir(self.getWorkDir(clusterId,role))
+    except Exception:
+      logger.warn("%s %s %s can not switch dir for START_ACTION." % (clusterId, component, role))
+    oldUid = os.getuid()
+    try:
+      if user is not None:
+        user=getpwnam(user)[2]
+      else:
+        user = os.getuid()
+      threadLocal.uid = user
+    except Exception:
+      logger.warn("%s %s %s can not switch user for START_ACTION." % (clusterId, component, role))
+    code = 0
+    commandResult = {}
+    process = self.getServerKey(clusterId,clusterDefinitionRevision,component,role)
+    if process not in serverTracker:
+      try:
+        plauncher = processlauncher(script,user)
+        plauncher.start()
+        plauncher.blockUntilProcessCreation()
+        # only track the launcher once the process is known to have started
+        serverTracker[process] = plauncher
+      except Exception:
+        traceback.print_exc()
+        logger.warn("Can not launch process for %s %s %s" % (clusterId, component, role))
+        code = -1
+      commandResult['exitCode'] = code
+      result['commandResult'] = commandResult
+    try:
+      os.chdir(oldDir)
+    except Exception:
+      logger.warn("%s %s %s can not restore environment for START_ACTION." % (clusterId, component, role))
+    return result
+
+  # Stop a process and remove persisted state
+  def stopProcess(self, processKey):
+    global serverTracker
+    keyFragments = processKey.split('/')
+    process = self.getServerKey(keyFragments[0],keyFragments[1],keyFragments[2],keyFragments[3])
+    if process in serverTracker:
+      logger.info ("Sending %s with PID %d the SIGTERM signal" % (process,serverTracker[process].getpid()))
+      killprocessgrp(serverTracker[process].getpid())
+      del serverTracker[process]
+
+  def getServerTracker(self):
+    return serverTracker
+
+  def getServerKey(self,clusterId, clusterDefinitionRevision, component, role):
+    return clusterId+"/"+str(clusterDefinitionRevision)+"/"+component+"/"+role
+
+  def getWorkDir(self, clusterId, role):
+    prefix = AmbariConfig.config.get('stack','installprefix')
+    return str(os.path.join(prefix, clusterId, role))
+
+
+class processlauncher(threading.Thread):
+  def __init__(self,script,uid):
+    threading.Thread.__init__(self)
+    self.script = script
+    self.serverpid = -1
+    self.uid = uid
+    self.out = None
+    self.err = None
+    # bound up front so the exception handler in run() can always log it
+    self.cmd = None
+
+  def run(self):
+    try:
+      tempfilename = tempfile.mktemp()
+      noteTempFile(tempfilename)
+      pythoncmd = sys.executable
+      tmp = open(tempfilename, 'w')
+      tmp.write(self.script['script'])
+      tmp.close()
+      threadLocal.uid = self.uid
+      self.cmd = "%s %s %s" % (pythoncmd, tempfilename, " ".join(self.script['param']))
+      logger.info("Launching %s as uid %d" % (self.cmd,self.uid) )
+      p = subprocess.Popen(self.cmd, preexec_fn=self.changeUidAndSetSid, stdout=subprocess.PIPE, 
+                           stderr=subprocess.PIPE, shell=True, close_fds=True)
+      logger.info("Launched %s; PID %d" % (self.cmd,p.pid))
+      self.serverpid = p.pid
+      self.out, self.err = p.communicate()
+      self.code = p.wait()
+      logger.info("%s; PID %d exited with code %d \nSTDOUT: %s\nSTDERR %s" % 
+                 (self.cmd,p.pid,self.code,self.out,self.err))
+    except:
+      logger.warn("Exception encountered while launching : " + self.cmd)
+      traceback.print_exc()
+
+    os.unlink(self.getpidfile())
+    os.unlink(tempfilename)
+
+  def blockUntilProcessCreation(self):
+    self.getpid()
+ 
+  def getpid(self):
+    sleepCount = 1
+    while (self.serverpid == -1):
+      time.sleep(1)
+      # advance the counter so the ten-second timeout below can actually fire
+      sleepCount += 1
+      logger.info("Waiting for process %s to start" % self.cmd)
+      if sleepCount > 10:
+        logger.warn("Couldn't start process %s even after %d seconds" % (self.cmd,sleepCount))
+        os._exit(1)
+    return self.serverpid
+
+  def getpidfile(self):
+    prefix = AmbariConfig.config.get('stack','installprefix')
+    pidfile = os.path.join(prefix,str(self.getpid())+".pid")
+    return pidfile
+ 
+  def changeUidAndSetSid(self):
+    prefix = AmbariConfig.config.get('stack','installprefix')
+    pidfile = os.path.join(prefix,str(os.getpid())+".pid")
+    #TODO remove try/except (when there is a way to provide
+    #config files for testcases). The default config will want
+    #to create files in /var/ambari which may not exist unless
+    #specifically created.
+    #At that point add a testcase for the pid file management.
+    try: 
+      f = open(pidfile,'w')
+      f.close()
+    except:
+      logger.warn("Couldn't write pid file %s for %s" % (pidfile,self.cmd))
+    changeUid()
+    os.setsid() 
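
shellRunner.run() joins its argument list into a single command line and executes it through the shell, returning a small result dict. A usage sketch; the uid switch in the second call only takes effect when the agent runs as root:

  from shell import shellRunner

  sh = shellRunner()
  res = sh.run(['echo', 'hello'])     # run as the current user
  print res['exitCode'], res['output'], res['error']
  res = sh.run(['id'], user='nobody') # preexec_fn switches uid before exec
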
diff --git a/agent/src/main/python/ambari_component/ConfigWriter.py b/agent/src/main/python/ambari_component/ConfigWriter.py
new file mode 100755
index 0000000..ce9dc38
--- /dev/null
+++ b/agent/src/main/python/ambari_component/ConfigWriter.py
@@ -0,0 +1,77 @@
+#!/usr/bin/env python
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os, errno
+import logging
+import logging.handlers
+import sys
+
+logger = logging.getLogger()
+
+class ConfigWriter:
+
+  def shell(self, owner, group, permission, category, options):
+    content = ""
+    for key in options:
+      content+="export "+key+"=\""+options[key]+"\"\n"
+    return self.write(owner, group, permission, "config/"+category+".sh", content)
+
+  def xml(self, owner, group, permission, category, options):
+    content = """<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<configuration>
+"""
+    for key in options:
+      content+="  <property>\n"
+      content+="    <name>"+key+"</name>\n"
+      content+="    <value>"+options[key]+"</value>\n"
+      content+="  </property>\n"
+    content+= "</configuration>\n"
+    return self.write(owner, group, permission, "config/"+category+".xml", content)
+
+  def plist(self, owner, group, permission, category, options):
+    content = ""
+    for key in options:
+      content+=key+"="+options[key]+"\n"
+    return self.write(owner, group, permission, "config/"+category+".properties", content)
+
+  def write(self, owner, group, permission, path, content):
+    try:
+      f = open(path, 'w')
+      f.write(content)
+      f.close()
+      if os.getuid()==0:
+        os.chmod(path, permission)
+        os.chown(path, owner, group)
+      result = { 'exitCode' : 0 }
+    except Exception:
+      result = { 'exitCode' : 1 }
+    return result
+
+def main():
+  logger.setLevel(logging.DEBUG)
+  formatter = logging.Formatter("%(asctime)s %(filename)s:%(lineno)d - %(message)s")
+  stream_handler = logging.StreamHandler()
+  stream_handler.setFormatter(formatter)
+  logger.addHandler(stream_handler)
+  try:
+    print "Ambari Component Library"
+  except Exception, err:
+    logger.exception(str(err))
+    
+if __name__ == "__main__":
+  main()
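
ConfigWriter renders a flat options dict into shell, XML, or properties form under config/<category>.* relative to the working directory. A sketch of the XML path; the owner/group/permission values are illustrative, ownership is only applied when running as root, and the config/ directory must already exist:

  import os
  from ConfigWriter import ConfigWriter

  writer = ConfigWriter()
  result = writer.xml(os.getuid(), os.getgid(), 0644, 'core-site',
                      {'fs.default.name': 'hdfs://node1.example.com:8020'})
  print result                        # {'exitCode': 0} on success
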
diff --git a/agent/src/main/python/ambari_component/__init__.py b/agent/src/main/python/ambari_component/__init__.py
new file mode 100755
index 0000000..fdaaf06
--- /dev/null
+++ b/agent/src/main/python/ambari_component/__init__.py
@@ -0,0 +1,49 @@
+#!/usr/bin/env python
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Ambari Plugin Library"""
+
+from __future__ import generators
+
+__version__ = "0.1.0"
+__author__ = [
+    "see http://incubator.apache.org/ambari/team-list.html"
+]
+__license__ = "Apache License v2.0"
+__contributors__ = "see http://incubator.apache.org/ambari"
+
+import logging
+import logging.handlers
+import sys
+import time
+import signal
+from ConfigWriter import ConfigWriter
+
+def copySh(owner, group, permission, config, options):
+  result = ConfigWriter().shell(owner, group, permission, config, options)
+  return result
+
+def copyXml(owner, group, permission, config, options):
+  result = ConfigWriter().xml(owner, group, permission, config, options)
+  return result
+
+def copyProperties(owner, group, permission, config, options):
+  result = ConfigWriter().plist(owner, group, permission, config, options)
+  return result
+
+def install(cluster, role, packages):
+  # delegate to the package runner; dryRun off, results collected in a fresh dict
+  import package
+  return package.packageRunner().install(cluster, role, packages, False, {})
diff --git a/agent/src/main/python/ambari_component/main.py b/agent/src/main/python/ambari_component/main.py
new file mode 100755
index 0000000..b3a341e
--- /dev/null
+++ b/agent/src/main/python/ambari_component/main.py
@@ -0,0 +1,56 @@
+#!/usr/bin/env python
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os, errno
+import logging
+import logging.handlers
+from ConfigWriter import ConfigWriter
+import threading
+import sys
+import time
+import signal
+
+logger = logging.getLogger()
+
+def copySh(owner, group, permission, config, options):
+  result = ConfigWriter().shell(owner, group, permission, config, options)
+  return result
+
+def copyXml(owner, group, permission, config, options):
+  result = ConfigWriter().xml(owner, group, permission, config, options)
+  return result
+
+def copyPlist(owner, group, permission, config, options):
+  result = ConfigWriter().plist(owner, group, permission, config, options)
+  return result
+
+def install(cluster, role, packages):
+  # delegate to the package runner; dryRun off, results collected in a fresh dict
+  import package
+  return package.packageRunner().install(cluster, role, packages, False, {})
+
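+# Entry point: configure console logging and print a banner.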
+def main():
+  logger.setLevel(logging.DEBUG)
+  formatter = logging.Formatter("%(asctime)s %(filename)s:%(lineno)d - %(message)s")
+  stream_handler = logging.StreamHandler()
+  stream_handler.setFormatter(formatter)
+  logger.addHandler(stream_handler)
+  try:
+    print "Ambari Component Library"
+  except Exception, err:
+    logger.exception(str(err))
+    
+if __name__ == "__main__":
+  main()
diff --git a/agent/src/main/python/hms_agent/package.py b/agent/src/main/python/ambari_component/package.py
similarity index 74%
rename from agent/src/main/python/hms_agent/package.py
rename to agent/src/main/python/ambari_component/package.py
index 64e6bb9..f2d87fc 100755
--- a/agent/src/main/python/hms_agent/package.py
+++ b/agent/src/main/python/ambari_component/package.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python2.6
 
 '''
 Licensed to the Apache Software Foundation (ASF) under one
@@ -19,7 +19,7 @@
 '''
 
 import string
-from shell import shellRunner
+from ambari_agent.shell import shellRunner
 import os
 import sys
 import time
@@ -36,40 +36,46 @@
 q = Queue.Queue()
 
 class packageRunner:
-    hmsPrefix = '/home/hms'
-    softwarePrefix = '/home/hms/apps'
-    downloadDir = '/home/hms/var/cache/downloads/'
+    global config
+    ambariPrefix = config.get('agent','prefix')
+    softwarePrefix = config.get('agent','prefix')
+    downloadDir = config.get('agent','prefix')+'/var/cache/downloads/'
 
-    def install(self, packages, dryRun):
+    def install(self, cluster, role, packages, dryRun, result):
         try:
             for package in packages:
                 packageName=package['name']
                 if string.find(packageName, ".torrent")>0:
-                    self.torrentInstall(packageName, dryRun)
+                    self.torrentInstall(cluster, role, packageName, dryRun)
                 elif string.find(packageName, ".tar.gz")>0 or string.find(packageName, ".tgz")>0:
                     packageName = self.tarballDownload(packageName)
-                    self.tarballInstall(packageName)
+                    self.tarballInstall(cluster, role, packageName)
                 elif string.find(packageName, ".rpm")>0 and (string.find(packageName, "http://")==0 or string.find(packageName, "https://")==0):
                     rpmName = self.rpmDownload(packageName)
                     list = [ rpmName ]
                     test = self.rpmInstall(list)
                     if test['exit_code']!=0:
                         raise Exception(test['error'])
-                else:
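+                # packages named with a yum:/// scheme are stripped to a bare name and installed through yum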
+                elif string.find(packageName, "yum:///")==0:
+                    packageName = packageName[7:]
                     self.yumInstall(packageName, dryRun)
-            result = {'exit_code': 0, 'output': 'Install Successfully', 'error': ''}
+                else:
+                    raise Exception("Unknown package handling type: "+packageName)
+            result['exitCode']=0
+            result['output']='Install Successfully'
         except Exception, err:
             logger.exception(str(err))
-            result = {'exit_code': 1, 'output': packageName+" installation failed", 'error': str(err)}
+            result['exitCode']=1
+            result['output']=packageName+" installation failed"
+            result['error']=str(err)
         return result
     
     def remove(self, packages, dryRun):
         try:
             for package in packages:
                 packageName=package['name']
-                if string.find(packageName, ".tar.gz")>0 or string.find(package, ".tgz")>0:
-                    self.tarballRemove(packageName, dryRun)
-                else:
+                if string.find(packageName, ".tar.gz")==-1 and string.find(packageName, ".tgz")==-1:
                     self.yumRemove(packageName, dryRun)
             result = {'exit_code': 0, 'output': 'Remove Successfully', 'error': ''}
         except Exception:
@@ -124,7 +129,7 @@
     def torrentDownload(self, package):
         sh = shellRunner()
         startTime = time.time()
-        script = ['transmission-daemon', '-y', '-O', '-M', '-w', packageRunner.downloadDir, '-g', packageRunner.hmsPrefix+'/var/cache/config']
+        script = ['transmission-daemon', '-y', '-O', '-M', '-w', packageRunner.downloadDir, '-g', packageRunner.ambariPrefix+'/var/cache/config']
         result = sh.run(script)
         for wait in [ 1, 1, 2, 2, 5 ]:
             script = ['transmission-remote', '-l']
@@ -135,11 +140,11 @@
 
         if result['exit_code']!=0:
             raise Exception('Unable to start transmission-daemon, exit_code:'+str(result['exit_code']))
-        script = ['transmission-remote', '-a', package, '--torrent-done-script', '/usr/bin/hms-torrent-callback']
+        script = ['transmission-remote', '-a', package, '--torrent-done-script', '/usr/bin/ambari-torrent-callback']
         result = sh.run(script)
         if result['exit_code']!=0:
             raise Exception('Unable to issue transmission-remote command')
-        trackerComplete = packageRunner.hmsPrefix+'/var/tmp/tracker'
+        trackerComplete = packageRunner.ambariPrefix+'/var/tmp/tracker'
         while True:
             if os.path.exists(trackerComplete):
                 break
@@ -169,7 +174,7 @@
                 break
         return {'exit_code': code, 'output': output, 'error': ''}
     
-    def torrentInstall(self, package, dryRun):
+    def torrentInstall(self, cluster, role, package, dryRun):
         if string.find(package, "http://")==0:
             urllib.urlretrieve(package, packageRunner.downloadDir+os.path.basename(package))
             tFile = packageRunner.downloadDir+os.path.basename(package)
@@ -185,7 +190,7 @@
                 if dryRun=='true':
                     continue
                 if string.find(p, ".tar.gz")>0:
-                    result = self.tarballInstall(p)
+                    result = self.tarballInstall(cluster, role, p)
                 elif string.find(p, ".rpm")>0:
                     list.append(packageRunner.downloadDir+p)
                 else:
@@ -208,56 +213,21 @@
         
     def tarballInfo(self, package):
         sh = shellRunner()
-        script = [ 'cat', packageRunner.hmsPrefix+'/var/repos/'+package+'/info' ]
+        script = [ 'cat', packageRunner.ambariPrefix+'/var/repos/'+package+'/info' ]
         return sh.run(script)
 
-    def tarballInstall(self, package):
+    def tarballInstall(self, cluster, role, package):
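+        # Unpack into the per-cluster/role stack directory, dropping the tarball's top-level directory.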
+        softwarePrefix = packageRunner.ambariPrefix+'/clusters/'+cluster+'-'+role+'/stack'
         sh = shellRunner()
-        src = packageRunner.hmsPrefix+'/var/cache/downloads/'+package
-        script = [ 'tar', 'fxz', src, '-C', packageRunner.softwarePrefix ]
+        src = packageRunner.ambariPrefix+'/var/cache/downloads/'+package
+        script = [ 'tar', 'fxz', src, '--strip-components', '1', '-C', softwarePrefix ]
         result = sh.run(script)
         if result['exit_code']!=0:
             err = 'Tarball decompress error, exit code: %d' % result['exit_code']
             raise Exception(err)
-#        script = [ packageRunner.hmsPrefix+'/var/repos/'+package+'/preinstall' ]
-#        result = sh.run(script)
-#        if result['exit_code']!=0:
-#            err = 'Preinstall script exit code: %d' % result['exit_code']
-#            raise Exception(err)
-#        script = [ 'tar', 'fxz', package, '-C', softwarePrefix ]
-#        result = sh.run(script)
-#        if result['exit_code']!=0:
-#            err = 'Tarball decompress error, exit code: %d' % result['exit_code']
-#            raise Exception(err)
-#        script = [ packageRunner.hmsPrefix+'/var/repos'+package+'/postinstall' ]
         return result
         
-    def tarballRemove(self, package, dryRun):
-        sh = shellRunner()
-        package = os.path.basename(package)
-        if string.find(package, '.tgz')!=-1:
-            package = package[:-4]            
-        elif string.find(package, '.tar.gz')!=-1:
-            package = package[:-7]
-        src = packageRunner.softwarePrefix+'/'+package
-        try:
-            if dryRun!='true':
-                shutil.rmtree(src)
-            else:
-                if os.path.exists(src)!=True:
-                    err = packageRunner.softwarePrefix+'/'+package+' does not exist.'
-                    raise Exception(err)
-            result = {'exit_code': 0, 'output': package+' deleted', 'error': ''}
-        except Exception, err:
-            result = {'exit_code': 1, 'output': 'Error in deleting '+package, 'error': str(err)}
-#        script = [ packageRunner.hmsPrefix+'/var/repos/'+package+'/prerm' ]
-#        result = sh.run(script)
-#        if result['exit_code']!=0:
-#            err = 'Pre-remove script exit code: %d' % result['exit_code']
-#            raise Exception(err)
-#        script = [ packageRunner.hmsPrefix+'/var/repos/'+package+'/postrm' ]
-        return result
-    
     def rpmInstall(self, packages):
         sh = shellRunner()
         list = ' '.join([str(x) for x in packages])
diff --git a/agent/src/packages/deb/hms-agent.control/preinst b/agent/src/main/python/ambari_torrent/__init__.py
similarity index 66%
copy from agent/src/packages/deb/hms-agent.control/preinst
copy to agent/src/main/python/ambari_torrent/__init__.py
index ac00d82..0791888 100755
--- a/agent/src/packages/deb/hms-agent.control/preinst
+++ b/agent/src/main/python/ambari_torrent/__init__.py
@@ -1,4 +1,4 @@
-#!/bin/sh
+#!/usr/bin/env python
 
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
@@ -15,7 +15,20 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-getent group hadoop 2>/dev/null >/dev/null || /usr/sbin/groupadd -r hadoop
+"""Ambari Torrent Callback"""
 
-/usr/sbin/useradd --comment "Hadoop Management System" --shell /bin/bash -M -r --groups hadoop --home /home/hms hms 2> /dev/null || :
+from __future__ import generators
 
+__version__ = "0.1.0"
+__author__ = [
+    "Eric Yang <eyang@apache.org>",
+    "Kan Zhang <kanzhangmail@yahoo.com>"
+]
+__license__ = "Apache License v2.0"
+__contributors__ = "see http://incubator.apache.org/ambari/contributors"
+
+import logging
+import logging.handlers
+import sys
+import time
+import signal
diff --git a/agent/src/main/python/ambari_torrent/main.py b/agent/src/main/python/ambari_torrent/main.py
new file mode 100755
index 0000000..c12312c
--- /dev/null
+++ b/agent/src/main/python/ambari_torrent/main.py
@@ -0,0 +1,70 @@
+#!/usr/bin/env python
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os, errno
+import logging
+import logging.handlers
+from ambari_agent.shell import shellRunner
+import threading
+import sys
+import time
+import signal
+#from config import Config
+
+logger = logging.getLogger()
+
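+# Create a directory and any missing parents, ignoring EEXIST (like mkdir -p).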
+def mkdir_p(path):
+    try:
+        os.makedirs(path)
+    except OSError, exc:
+        if exc.errno == errno.EEXIST:
+            pass
+        else: 
+            raise
+
+def main():
+    logger.setLevel(logging.DEBUG)
+    formatter = logging.Formatter("%(asctime)s %(filename)s:%(lineno)d - %(message)s")
+    stream_handler = logging.StreamHandler()
+    stream_handler.setFormatter(formatter)
+    logger.addHandler(stream_handler)
+    try:
+#        try:
+#            f = file('/etc/ambari/agent.cfg')
+#            cfg = Config(f)
+#            if cfg.ambariPrefix != None:
+#                ambariPrefix = cfg.ambariPrefix
+#            else:
+#                ambariPrefix = '/opt/ambari'
+#        except Exception, cfErr:
+        ambariPrefix = '/home/ambari'
+        time.sleep(15)
+        workdir = ambariPrefix + '/var/tmp'
+        if not os.path.exists(workdir):
+          mkdir_p(workdir)
+          
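+        # Write the tracker file that the agent's torrent download step polls for.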
+        tracker = workdir + '/tracker'
+        f = open(tracker, 'w')
+        f.write(str(0))
+        f.close()
+    except Exception, err:
+        logger.exception(str(err))
+    
+if __name__ == "__main__":
+    main()
diff --git a/agent/src/main/python/hms_agent.egg-info/PKG-INFO b/agent/src/main/python/hms_agent.egg-info/PKG-INFO
deleted file mode 100644
index 0a23cb3..0000000
--- a/agent/src/main/python/hms_agent.egg-info/PKG-INFO
+++ /dev/null
@@ -1,11 +0,0 @@
-Metadata-Version: 1.0
-Name: hms-agent
-Version: 0.1.0
-Summary: Hadoop Management System agent
-Home-page: http://hms.apache.org
-Author: Apache Software Foundation
-Author-email: user@hms.apache.org
-License: Apache License v2.0
-Description: This package implements the Hadoop Management System agent for install and configure software on large scale clusters.
-Keywords: hadoop
-Platform: any
diff --git a/agent/src/main/python/hms_agent.egg-info/SOURCES.txt b/agent/src/main/python/hms_agent.egg-info/SOURCES.txt
deleted file mode 100644
index c9efa31..0000000
--- a/agent/src/main/python/hms_agent.egg-info/SOURCES.txt
+++ /dev/null
@@ -1,21 +0,0 @@
-setup.cfg
-setup.py
-hms_agent/DaemonHandler.py
-hms_agent/PackageHandler.py
-hms_agent/Runner.py
-hms_agent/ShellHandler.py
-hms_agent/Zeroconf.py
-hms_agent/ZooKeeperCommunicator.py
-hms_agent/__init__.py
-hms_agent/createDaemon.py
-hms_agent/daemon.py
-hms_agent/main.py
-hms_agent/package.py
-hms_agent/shell.py
-hms_agent.egg-info/PKG-INFO
-hms_agent.egg-info/SOURCES.txt
-hms_agent.egg-info/dependency_links.txt
-hms_agent.egg-info/entry_points.txt
-hms_agent.egg-info/top_level.txt
-hms_torrent/__init__.py
-hms_torrent/main.py
\ No newline at end of file
diff --git a/agent/src/main/python/hms_agent.egg-info/dependency_links.txt b/agent/src/main/python/hms_agent.egg-info/dependency_links.txt
deleted file mode 100644
index 8b13789..0000000
--- a/agent/src/main/python/hms_agent.egg-info/dependency_links.txt
+++ /dev/null
@@ -1 +0,0 @@
-
diff --git a/agent/src/main/python/hms_agent.egg-info/entry_points.txt b/agent/src/main/python/hms_agent.egg-info/entry_points.txt
deleted file mode 100644
index 80ecc6f..0000000
--- a/agent/src/main/python/hms_agent.egg-info/entry_points.txt
+++ /dev/null
@@ -1,4 +0,0 @@
-[console_scripts]
-hms-agent = hms_agent.main:main
-hms-torrent-callback = hms_torrent.main:main
-
diff --git a/agent/src/main/python/hms_agent.egg-info/top_level.txt b/agent/src/main/python/hms_agent.egg-info/top_level.txt
deleted file mode 100644
index dc9cebd..0000000
--- a/agent/src/main/python/hms_agent.egg-info/top_level.txt
+++ /dev/null
@@ -1,2 +0,0 @@
-hms_torrent
-hms_agent
diff --git a/agent/src/main/python/hms_agent/main.py b/agent/src/main/python/hms_agent/main.py
deleted file mode 100755
index 3e8548e..0000000
--- a/agent/src/main/python/hms_agent/main.py
+++ /dev/null
@@ -1,134 +0,0 @@
-#!/usr/bin/env python
-
-'''
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-    http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-'''
-
-import logging
-import logging.handlers
-from mimerender import mimerender
-import mimeparse
-from Runner import Runner
-import code
-import signal
-import simplejson
-import sys, traceback
-import web
-import os
-import time
-import ConfigParser
-from PackageHandler import PackageHandler
-from DaemonHandler import DaemonHandler
-from ShellHandler import ShellHandler
-from ZooKeeperCommunicator import ZooKeeperCommunicator
-from createDaemon import createDaemon
-from Zeroconf import dnsResolver
-
-logger = logging.getLogger()
-
-urls = (
-    '/package/info/(.*)', 'PackageHandler',
-    '/package/(.*)', 'PackageHandler',
-    '/daemon/status/(.*)', 'DaemonHandler',
-    '/daemon/(.*)', 'DaemonHandler',
-    '/shell/(.*)', 'ShellHandler'
-)
-app = web.application(urls, globals())
-
-if 'HMS_PID_DIR' in os.environ:
-  pidfile = os.environ['HMS_PID_DIR'] + "/hms-agent.pid"
-else:
-  pidfile = "/var/run/hms/hms-agent.pid"    
-
-if 'HMS_LOG_DIR' in os.environ:
-  logfile = os.environ['HMS_LOG_DIR'] + "/hms-agent.log"
-else:
-  logfile = "/var/log/hms/hms-agent.log"
-
-def signal_handler(signum, frame):
-  logger.info('signal received, exiting.')
-  os.unlink(pidfile)
-  os._exit(0)
-
-def debug(sig, frame):
-    """Interrupt running process, and provide a python prompt for
-    interactive debugging."""
-    d={'_frame':frame}         # Allow access to frame object.
-    d.update(frame.f_globals)  # Unless shadowed by global
-    d.update(frame.f_locals)
-
-    message  = "Signal recieved : entering python shell.\nTraceback:\n"
-    message += ''.join(traceback.format_stack(frame))
-    logger.info(message)
-      
-def main():
-  signal.signal(signal.SIGINT, signal_handler)
-  signal.signal(signal.SIGTERM, signal_handler)
-  signal.signal(signal.SIGUSR1, debug)
-  if (len(sys.argv) >1) and sys.argv[1]=='stop':
-    try:
-      f = open(pidfile, 'r')
-      pid = f.read()
-      pid = int(pid)
-      f.close()
-      os.kill(pid, signal.SIGTERM)
-      time.sleep(5)
-      if os.path.exists(pidfile):
-        raise Exception("PID file still exists.")
-      os._exit(0)
-    except Exception, err:
-      traceback.print_exc(file=sys.stdout)
-      os._exit(1)
-  if os.path.isfile(pidfile):
-    print("%s already exists, exiting" % pidfile)
-    sys.exit(1)
-  else:
-    retCode = createDaemon()
-    pid = str(os.getpid())
-    file(pidfile, 'w').write(pid)
-  logger.setLevel(logging.DEBUG)
-  formatter = logging.Formatter("%(asctime)s %(filename)s:%(lineno)d - %(message)s")
-  rotateLog = logging.handlers.RotatingFileHandler(logfile, "a", 10000000, 10)
-  rotateLog.setFormatter(formatter)
-  logger.addHandler(rotateLog)
-  zeroconf = dnsResolver()
-  credential = None
-  if(os.path.exists('/etc/hms/hms.ini')):
-    config = ConfigParser.RawConfigParser()
-    config.read('/etc/hms/hms.ini')
-    zkservers = config.get('zookeeper', 'quorum')
-    try:
-      credential = config.get('zookeeper', 'user')+":"+config.get('zookeeper', 'password')
-    except Exception, err:
-      credential = None
-  else:
-    zkservers = ""
-  while zkservers=="":
-    zkservers = zeroconf.find('_zookeeper._tcp')
-    if zkservers=="":
-      logger.warn("Unable to locate zookeeper, sleeping 30 seconds")
-      loop = 0
-      while loop < 10:
-        time.sleep(3)
-        loop = loop + 1
-  logger.info("Connecting to "+zkservers+".")
-  zc = ZooKeeperCommunicator(zkservers, credential)
-  zc.start()
-  zc.run()
-    
-if __name__ == "__main__":
-  main()
diff --git a/agent/src/main/python/hms_torrent/__init__.py b/agent/src/main/python/hms_torrent/__init__.py
deleted file mode 100755
index 6ed206d..0000000
--- a/agent/src/main/python/hms_torrent/__init__.py
+++ /dev/null
@@ -1,18 +0,0 @@
-#!/usr/bin/env python
-"""Hadoop Management System Torrent Callback"""
-
-from __future__ import generators
-
-__version__ = "0.1.0"
-__author__ = [
-    "Eric Yang <eyang@apache.org>",
-    "Kan Zhang <kanzhangmail@yahoo.com>"
-]
-__license__ = "Apache License v2.0"
-__contributors__ = "see http://incubator.apache.org/hms/contributors"
-
-import logging
-import logging.handlers
-import sys
-import time
-import signal
diff --git a/agent/src/main/python/hms_torrent/main.py b/agent/src/main/python/hms_torrent/main.py
deleted file mode 100755
index 8312555..0000000
--- a/agent/src/main/python/hms_torrent/main.py
+++ /dev/null
@@ -1,53 +0,0 @@
-#!/usr/bin/env python
-
-import os, errno
-import logging
-import logging.handlers
-from hms_agent.shell import shellRunner
-import threading
-import sys
-import time
-import signal
-#from config import Config
-
-logger = logging.getLogger()
-
-def mkdir_p(path):
-    try:
-        os.makedirs(path)
-    except OSError, exc:
-        if exc.errno == errno.EEXIST:
-            pass
-        else: 
-            raise
-
-def main():
-    logger.setLevel(logging.DEBUG)
-    formatter = logging.Formatter("%(asctime)s %(filename)s:%(lineno)d - %(message)s")
-    stream_handler = logging.StreamHandler()
-    stream_handler.setFormatter(formatter)
-    logger.addHandler(stream_handler)
-    try:
-#        try:
-#            f = file('/etc/hms/agent.cfg')
-#            cfg = Config(f)
-#            if cfg.hmsPrefix != None:
-#                hmsPrefix = cfg.hmsPrefix
-#            else:
-#                hmsPrefix = '/opt/hms'
-#        except Exception, cfErr:
-        hmsPrefix = '/home/hms'
-        time.sleep(15)
-        workdir = hmsPrefix + '/var/tmp'
-        if not os.path.exists(workdir):
-          mkdir_p(workdir)
-          
-        tracker = workdir + '/tracker'
-        f = open(tracker, 'w')
-        f.write(str(0))
-        f.close()
-    except Exception, err:
-        logger.exception(str(err))
-    
-if __name__ == "__main__":
-    main()
diff --git a/agent/src/main/python/setup.cfg b/agent/src/main/python/setup.cfg
index 7f75bb2..a73754e 100644
--- a/agent/src/main/python/setup.cfg
+++ b/agent/src/main/python/setup.cfg
@@ -1,3 +1,17 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
 [egg_info]
 tag_build =
 tag_date = 0
diff --git a/agent/src/main/python/setup.py b/agent/src/main/python/setup.py
index 3f79b8e..1e4a8e4 100755
--- a/agent/src/main/python/setup.py
+++ b/agent/src/main/python/setup.py
@@ -1,22 +1,37 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 from setuptools import setup
 
 setup(
-    name = "hms-agent",
+    name = "ambari-agent",
     version = "0.1.0",
-    packages = ['hms_agent', 'hms_torrent'],
+    packages = ['ambari_agent', 'ambari_torrent', 'ambari_component'],
     # metadata for upload to PyPI
     author = "Apache Software Foundation",
-    author_email = "user@hms.apache.org",
-    description = "Hadoop Management System agent",
+    author_email = "ambari-dev@incubator.apache.org",
+    description = "Ambari agent",
     license = "Apache License v2.0",
-    keywords = "hadoop",
-    url = "http://hms.apache.org",
-    long_description = "This package implements the Hadoop Management System agent for install and configure software on large scale clusters.",
+    keywords = "hadoop, ambari",
+    url = "http://incubator.apache.org/ambari",
+    long_description = "This package implements the Ambari agent for installing Hadoop on large clusters.",
     platforms=["any"],
     entry_points = {
         "console_scripts": [
-            "hms-agent = hms_agent.main:main",
-            "hms-torrent-callback = hms_torrent.main:main",
+            "ambari-agent = ambari_agent.main:main",
+            "ambari-torrent-callback = ambari_torrent.main:main",
         ],
     }
 )
diff --git a/agent/src/main/resources/WEB-INF/jetty.xml b/agent/src/main/resources/WEB-INF/jetty.xml
index 98c3379..b2da495 100644
--- a/agent/src/main/resources/WEB-INF/jetty.xml
+++ b/agent/src/main/resources/WEB-INF/jetty.xml
@@ -1,6 +1,23 @@
 <?xml version="1.0"?>
 <!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN" "http://jetty.mortbay.org/configure.dtd">
 
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
 <!-- =============================================================== -->
 <!-- Configure the Jetty Server                                      -->
 <!--                                                                 -->
@@ -52,8 +69,8 @@
             <Set name="Acceptors">2</Set>
             <Set name="statsOn">false</Set>
             <Set name="confidentialPort">8443</Set>
-	    <Set name="lowResourcesConnections">5000</Set>
-	    <Set name="lowResourcesMaxIdleTime">5000</Set>
+            <Set name="lowResourcesConnections">5000</Set>
+            <Set name="lowResourcesMaxIdleTime">5000</Set>
           </New>
       </Arg>
     </Call>
@@ -153,9 +170,9 @@
         <New class="org.mortbay.jetty.deployer.WebAppDeployer">
           <Set name="contexts"><Ref id="Contexts"/></Set>
           <Set name="webAppDir"><SystemProperty name="HMS_HOME" default="."/>/webapps</Set>
-	  <Set name="parentLoaderPriority">false</Set>
-	  <Set name="extract">false</Set>
-	  <Set name="allowDuplicates">false</Set>
+          <Set name="parentLoaderPriority">false</Set>
+          <Set name="extract">false</Set>
+          <Set name="allowDuplicates">false</Set>
         </New>
       </Arg>
     </Call> -->
diff --git a/agent/src/main/resources/WEB-INF/web.xml b/agent/src/main/resources/WEB-INF/web.xml
index 4cb1405f..021c33a 100644
--- a/agent/src/main/resources/WEB-INF/web.xml
+++ b/agent/src/main/resources/WEB-INF/web.xml
@@ -1,5 +1,22 @@
 <?xml version="1.0" encoding="ISO-8859-1"?>
 
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
 <web-app xmlns="http://java.sun.com/xml/ns/javaee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
@@ -74,8 +91,8 @@
       <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer
       </servlet-class>
       <init-param>
-	<param-name>com.sun.jersey.config.property.packages</param-name>
-	<param-value>org.apache.hms.agent.rest</param-value>
+        <param-name>com.sun.jersey.config.property.packages</param-name>
+        <param-value>org.apache.hms.agent.rest</param-value>
       </init-param>
       <load-on-startup>1</load-on-startup>
     </servlet>
diff --git a/agent/src/main/resources/puppet/manifests/site.pp b/agent/src/main/resources/puppet/manifests/site.pp
new file mode 100644
index 0000000..0e02456
--- /dev/null
+++ b/agent/src/main/resources/puppet/manifests/site.pp
@@ -0,0 +1,114 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+stage {"pre": before => Stage["main"]}
+
+yumrepo { "Bigtop":
+    baseurl => "http://bigtop01.cloudera.org:8080/job/Bigtop-trunk-matrix/label=centos5/lastSuccessfulBuild/artifact/output/",
+    descr => "Bigtop packages",
+    enabled => 1,
+    gpgcheck => 0,
+}
+
+package { "jdk":
+   ensure => "installed",
+}
+
+node default {
+  notice($fqdn)
+
+  /* Assign defaults; the agent is supposed to fill in this value */
+  if !$ambari_stack_install_dir {
+    $ambari_stack_install_dir = "/var/ambari/"
+  } 
+  notice ($ambari_stack_install_dir)
+
+  /* 
+   * Ensure cluster directory path is present 
+   * Owned by root up to cluster directory
+   */
+  $stack_path = "${ambari_stack_install_dir}/${ambari_cluster_name}"
+  $stack_path_intermediate_dirs = dirs_between ($stack_path)
+  file {$stack_path_intermediate_dirs:
+    ensure => directory,
+    owner => root,
+    group => root,
+    mode => 755
+  }
+
+  /* 
+   * Define users and groups
+   */
+  $groups = get_map_keys($unique_groups)
+  hadoop::define_group {$groups:
+    groups_map => $unique_groups
+  }
+  $users = get_map_keys($unique_users)
+  hadoop::define_user {$users:
+    users_map => $unique_users
+  }
+
+
+  if ($fqdn in $role_to_nodes[namenode]) {
+    hadoop::role {"namenode":
+        ambari_role_name => "namenode",
+        ambari_role_prefix => "${stack_path}/namenode",
+        user => $ambari_hdfs_user,
+        group => $ambari_hdfs_group
+    }
+  } 
+
+  /* hadoop.security.authentication make global variable */
+  if ($fqdn in $role_to_nodes[datanode]) {
+    hadoop::role {"datanode":
+        ambari_role_name => "datanode",
+        ambari_role_prefix => "${stack_path}/datanode",
+        user => $ambari_hdfs_user,
+        group => $ambari_hdfs_group,
+        auth_type => "simple"
+    }
+  } 
+
+  if ($fqdn in $role_to_nodes[jobtracker]) {
+    hadoop::role {"jobtracker":
+        ambari_role_name => "jobtracker",
+        ambari_role_prefix => "${stack_path}/jobtracker",
+        user => $ambari_mapreduce_user,
+        group => $ambari_mapreduce_group
+    }
+  } 
+
+  /* hadoop.security.authentication make global variable */
+  if ($fqdn in $role_to_nodes[tasktracker]) {
+    hadoop::role {"tasktracker":
+        ambari_role_name => "tasktracker",
+        ambari_role_prefix => "${stack_path}/tasktracker",
+        user => $ambari_mapreduce_user,
+        group => $ambari_mapreduce_group,
+    }
+  } 
+
+  if ($fqdn in $role_to_nodes['client']) {
+    hadoop::client {"client":
+        ambari_role_name => "client",
+        ambari_role_prefix => "${stack_path}/client",
+        user => $ambari_default_user,
+        group => $ambari_default_group
+    }
+  } 
+}
+
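+/* Configure every yum repository before any package is installed */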
+Yumrepo<||> -> Package<||>
diff --git a/agent/src/main/resources/puppet/modules/hadoop/lib/facter/hadoop_storage_locations.rb b/agent/src/main/resources/puppet/modules/hadoop/lib/facter/hadoop_storage_locations.rb
new file mode 100644
index 0000000..b9c935c
--- /dev/null
+++ b/agent/src/main/resources/puppet/modules/hadoop/lib/facter/hadoop_storage_locations.rb
@@ -0,0 +1,40 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# We make the assumption that hadoop's data files will be located in /data/
+# Puppet needs to know where they are
+Facter.add("hadoop_storage_locations") do
+        setcode do
+
+            data_dir_path = "/data/"
+            storage_locations = ""
+
+            # We need to check that /data/ exists
+            if File.directory?(data_dir_path)
+
+              # We treat any directory whose name contains a digit as a data directory
+              Dir.foreach(data_dir_path) { |directory|
+                  storage_locations += (data_dir_path + directory + ';') if directory =~ /\d+/
+              }
+            end
+
+            # Return the list of storage locations for hadoop
+            if storage_locations == ""
+              storage_locations = "/mnt"
+            end
+            storage_locations
+        end
+end
+
diff --git a/agent/src/main/resources/puppet/modules/hadoop/lib/puppet/parser/functions/dirs_between.rb b/agent/src/main/resources/puppet/modules/hadoop/lib/puppet/parser/functions/dirs_between.rb
new file mode 100644
index 0000000..8633e84
--- /dev/null
+++ b/agent/src/main/resources/puppet/modules/hadoop/lib/puppet/parser/functions/dirs_between.rb
@@ -0,0 +1,26 @@
+module Puppet::Parser::Functions
+  newfunction(:dirs_between, :type => :rvalue, :doc => "Generate a list of pathnames") do |args|
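+    # Walk from / down to args[0], collecting each intermediate directory that does not yet exist.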
+    subbottom = args[0]
+    subdirs = []
+    while subbottom != "/"
+        subbottom, component = File.split(subbottom)
+        subdirs.unshift(component)
+    end
+    dir = '/'
+    paths = [ ]
+    newpaths = [ ]
+    while subdirs.length > 0
+        component = subdirs.shift()
+        dir = File.join(dir, component)
+        paths.push(dir)
+    end
+    paths.each do |d| 
+      if !File.exists?(d)
+#        Dir.mkdir(d)
+        newpaths.push(d)
+      end
+    end
+    return newpaths
+  end
+end
diff --git a/agent/src/main/resources/puppet/modules/hadoop/lib/puppet/parser/functions/get_category_name.rb b/agent/src/main/resources/puppet/modules/hadoop/lib/puppet/parser/functions/get_category_name.rb
new file mode 100644
index 0000000..4428057
--- /dev/null
+++ b/agent/src/main/resources/puppet/modules/hadoop/lib/puppet/parser/functions/get_category_name.rb
@@ -0,0 +1,5 @@
+module Puppet::Parser::Functions
+  newfunction(:get_category_name, :type => :rvalue) do |args|
+    return File.basename(args[0])
+  end
+end
diff --git a/agent/src/main/resources/puppet/modules/hadoop/lib/puppet/parser/functions/get_files.rb b/agent/src/main/resources/puppet/modules/hadoop/lib/puppet/parser/functions/get_files.rb
new file mode 100644
index 0000000..28ac098
--- /dev/null
+++ b/agent/src/main/resources/puppet/modules/hadoop/lib/puppet/parser/functions/get_files.rb
@@ -0,0 +1,10 @@
+module Puppet::Parser::Functions
+  newfunction(:get_files, :type => :rvalue) do |args|
+    hadoop_conf_dir = args[0]
+    hadoop_stack_conf = args[1]
+    role_name = args[2]
+    files = Array.new
+    hadoop_stack_conf[role_name].keys.each {|fname| files << hadoop_conf_dir + "/" + fname}
+    return files
+  end
+end
diff --git a/agent/src/main/resources/puppet/modules/hadoop/lib/puppet/parser/functions/get_map_keys.rb b/agent/src/main/resources/puppet/modules/hadoop/lib/puppet/parser/functions/get_map_keys.rb
new file mode 100644
index 0000000..fa977d4
--- /dev/null
+++ b/agent/src/main/resources/puppet/modules/hadoop/lib/puppet/parser/functions/get_map_keys.rb
@@ -0,0 +1,6 @@
+module Puppet::Parser::Functions
+  newfunction(:get_map_keys, :type => :rvalue) do |args|
+    map = args[0]
+    return map.keys
+  end
+end
diff --git a/agent/src/main/resources/puppet/modules/hadoop/lib/puppet/parser/functions/get_parent_dirs.rb b/agent/src/main/resources/puppet/modules/hadoop/lib/puppet/parser/functions/get_parent_dirs.rb
new file mode 100644
index 0000000..1d50e9c
--- /dev/null
+++ b/agent/src/main/resources/puppet/modules/hadoop/lib/puppet/parser/functions/get_parent_dirs.rb
@@ -0,0 +1,10 @@
+module Puppet::Parser::Functions
+  newfunction(:get_parent_dirs, :type => :rvalue) do |args|
+    dir = args[0]
+    dirs = Array.new
+    temp = "/"
+    # Build each ancestor path; skip the empty component produced by a leading '/'.
+    dir.split('/').reject {|x| x.empty?}.each {|x| temp = temp + x + "/"; dirs << temp}
+    return dirs
+  end
+end
diff --git a/agent/src/main/resources/puppet/modules/hadoop/manifests/init.pp b/agent/src/main/resources/puppet/modules/hadoop/manifests/init.pp
new file mode 100644
index 0000000..a88ea94
--- /dev/null
+++ b/agent/src/main/resources/puppet/modules/hadoop/manifests/init.pp
@@ -0,0 +1,154 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+class hadoop {
+  /**
+   * Common definitions for hadoop nodes.
+   * They all need these files so we can access hdfs/jobs from any node
+   */
+  class common {
+    package { "hadoop":
+      ensure => latest,
+      require => [Package["jdk"]],
+    }
+
+    package { "hadoop-native":
+      ensure => latest,
+      require => [Package["hadoop"]],
+    }
+  }
+  
+  define role ($ambari_role_name = "datanode", 
+               $ambari_role_prefix, $user = "hdfs", $group = "hdfs", $auth_type = "simple") {
+
+    include common 
+
+    realize Group[$group]
+    realize User[$user]
+
+    /*
+     * Create conf directory for datanode 
+     */
+    $hadoop_conf_dir = "${ambari_role_prefix}/etc/hadoop"
+    file {["${ambari_role_prefix}", "${ambari_role_prefix}/etc", "${ambari_role_prefix}/etc/hadoop"]:
+      ensure => directory,
+      owner => $user,
+      group => $group,
+      mode => 755
+    }     
+    notice ($ambari_role_prefix)
+    notice ($ambari_role_name)
+    $files = get_files ($hadoop_conf_dir, $::hadoop_stack_conf, $ambari_role_name)
+    notice($files)
+
+    /* Create config files for each category */
+    create_config_file {$files:
+                           conf_map => $::hadoop_stack_conf[$title],
+                           require => [Package["hadoop"]],
+                           owner => $user,
+                           group => $group,
+                           mode => 644
+                       }
+
+    package { "hadoop-${ambari_role_name}":
+      ensure => latest,
+      require => [Package["hadoop"]],
+    }
+
+    if ($ambari_role_name == "datanode") {
+      if ($auth_type == "kerberos") {
+        package { "hadoop-sbin":
+          ensure => latest,
+          require => [Package["hadoop"]],
+        }
+      }
+    }
+  }
+
+  define client ($ambari_role_name = "client", $ambari_role_prefix,
+                 $user = "hadoop", $group = "hadoop") {
+
+    include common 
+
+    realize Group[$group]
+    realize User[$user]
+
+    $hadoop_conf_dir = "${ambari_role_prefix}/etc/conf"
+    file {["${ambari_role_prefix}", "${ambari_role_prefix}/etc", "${ambari_role_prefix}/etc/conf"]:
+      ensure => directory,
+      owner => $user,
+      group => $group,
+      mode => 755
+    }     
+    notice ($ambari_role_prefix)
+    $files = get_files ($hadoop_conf_dir, $::hadoop_stack_conf, $ambari_role_name)
+    notice($files)
+
+    /* Create config files for each category */
+    create_config_file {$files:
+                           conf_map => $::hadoop_stack_conf[$title],
+                           require => [Package["hadoop"]],
+                           owner => $user,
+                           group => $group,
+                           mode => 644
+                       }
+
+    package { ["hadoop-doc", "hadoop-source", "hadoop-debuginfo", 
+               "hadoop-fuse", "hadoop-libhdfs", "hadoop-pipes"]:
+      ensure => latest,
+      require => [Package["hadoop"]],  
+    }
+  }
+
+  define define_group ($groups_map) {
+    @group {$title:
+      ensure => present
+    }
+  }
+
+  /*
+   * TODO: uids are currently auto-selected, which may result in different uids on each node.
+   */
+  define define_user ($users_map) {
+    @user {$title:
+      ensure => present,
+      gid => $users_map[$title]['GROUP'],
+      require => Group[$users_map[$title]['GROUP']]
+    }
+  }
+
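+  /* hadoop-env.sh is rendered from the shell template; all other files use the XML properties template */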
+  define create_config_file ($conf_map, $owner, $group, $mode) {
+    $category = get_category_name ($title)
+    $conf_category_map = $conf_map[$category]
+    if $category == 'hadoop-env.sh' {
+      file {"$title":
+        ensure => present,
+        content => template('hadoop/config_env.erb'),
+        owner => $owner,
+        group => $group,
+        mode => 755
+      } 
+    } else {
+      file {"$title":
+        ensure => present,
+        content => template('hadoop/config_properties.erb'),
+        owner => $owner,
+        group => $group,
+        mode => $mode
+      } 
+    }
+  }
+}
diff --git a/agent/src/main/resources/puppet/modules/hadoop/templates/config_env.erb b/agent/src/main/resources/puppet/modules/hadoop/templates/config_env.erb
new file mode 100644
index 0000000..c2ec86f
--- /dev/null
+++ b/agent/src/main/resources/puppet/modules/hadoop/templates/config_env.erb
@@ -0,0 +1,25 @@
+#-- Licensed to the Apache Software Foundation (ASF) under one or more       -->
+#-- contributor license agreements.  See the NOTICE file distributed with    -->
+#-- this work for additional information regarding copyright ownership.      -->
+#-- The ASF licenses this file to You under the Apache License, Version 2.0  -->
+#-- (the "License"); you may not use this file except in compliance with     -->
+#-- the License.  You may obtain a copy of the License at                    -->
+#--                                                                          -->
+#--     http://www.apache.org/licenses/LICENSE-2.0                           -->
+#--                                                                          -->
+#-- Unless required by applicable law or agreed to in writing, software      -->
+#-- distributed under the License is distributed on an "AS IS" BASIS,        -->
+#-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -->
+#-- See the License for the specific language governing permissions and      -->
+#-- limitations under the License.                                           -->
+
+# Hadoop specific environment 
+
+<% require 'erubis' -%>
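+<%# values may themselves contain ERB markup; re-evaluate until fully expanded -%>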
+<% conf_category_map.each do |key,value| -%>
+<% while (value.include? '<%=') do -%>
+<% value=Erubis::Eruby.new(value).result(binding) -%>
+<% end -%>
+export <%= key %>="<%= value %>"
+<% end -%>
diff --git a/agent/src/main/resources/puppet/modules/hadoop/templates/config_properties.erb b/agent/src/main/resources/puppet/modules/hadoop/templates/config_properties.erb
new file mode 100644
index 0000000..ba65f8e
--- /dev/null
+++ b/agent/src/main/resources/puppet/modules/hadoop/templates/config_properties.erb
@@ -0,0 +1,31 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+
+<!-- Licensed to the Apache Software Foundation (ASF) under one or more       -->
+<!-- contributor license agreements.  See the NOTICE file distributed with    -->
+<!-- this work for additional information regarding copyright ownership.      -->
+<!-- The ASF licenses this file to You under the Apache License, Version 2.0  -->
+<!-- (the "License"); you may not use this file except in compliance with     -->
+<!-- the License.  You may obtain a copy of the License at                    -->
+<!--                                                                          -->
+<!--     http://www.apache.org/licenses/LICENSE-2.0                           -->
+<!--                                                                          -->
+<!-- Unless required by applicable law or agreed to in writing, software      -->
+<!-- distributed under the License is distributed on an "AS IS" BASIS,        -->
+<!-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -->
+<!-- See the License for the specific language governing permissions and      -->
+<!-- limitations under the License.                                           -->
+
+<configuration>
+<% require 'erubis' -%>
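+<%# property values may embed ERB markup; expand repeatedly before emitting -%>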
+<% conf_category_map.each do |key,value| -%>
+  <property>
+    <name><%= key %></name>
+<% while (value.include? '<%=') do -%>
+<% value=Erubis::Eruby.new(value).result(binding) -%>
+<% end -%>
+    <value><%= value %></value>
+  </property>
+<% end -%>
+</configuration>
diff --git a/agent/src/packages/build.xml b/agent/src/packages/build.xml
index 848131a..b751bce 100644
--- a/agent/src/packages/build.xml
+++ b/agent/src/packages/build.xml
@@ -17,7 +17,7 @@
    limitations under the License.
 -->
 
-<project name="hms agent packaging">
+<project name="Ambari agent packaging">
   <target name="move-tarball">
     <move todir="${project.build.directory}">
       <fileset dir="${project.build.directory}/${final.name}/dist">
@@ -30,13 +30,13 @@
     <taskdef name="deb"
            classname="org.vafer.jdeb.ant.DebAntTask">
     </taskdef>
-    <mkdir dir="${project.build.directory}/deb/hms-agent.control" />
-    <copy todir="${project.build.directory}/deb/hms-agent.control">
-      <fileset dir="${basedir}/src/packages/deb/hms-agent.control">
+    <mkdir dir="${project.build.directory}/deb/ambari-agent.control" />
+    <copy todir="${project.build.directory}/deb/ambari-agent.control">
+      <fileset dir="${basedir}/src/packages/deb/ambari-agent.control">
         <exclude name="control" />
       </fileset>
     </copy>
-    <copy file="src/packages/deb/hms-agent.control/control" todir="${basedir}/target/deb/hms-agent.control">
+    <copy file="src/packages/deb/ambari-agent.control/control" todir="${basedir}/target/deb/ambari-agent.control">
       <filterchain>
         <replacetokens>
           <token key="version" value="${project.version}" />
@@ -49,12 +49,12 @@
       </fileset>
     </path> 
     <property name="source.file" refid="source.id"/>
-    <deb destfile="${project.build.directory}/${artifactId}_${project.version}-${package.release}_${os.arch}.deb" control="${basedir}/target/deb/hms-agent.control">
+    <deb destfile="${project.build.directory}/${artifactId}_${project.version}-${package.release}_${os.arch}.deb" control="${basedir}/target/deb/ambari-agent.control">
       <data src="${source.file}">
         <mapper type="prefix" strip="1" prefix="${package.prefix}" />
         <include name="**" />
       </data>
-      <tarfileset dir="${basedir}/src/packages/deb/init.d" filemode="755" prefix="${package.prefix}/share/hms/sbin">
+      <tarfileset dir="${basedir}/src/packages/deb/init.d" filemode="755" prefix="${package.prefix}/share/ambari/sbin">
         <exclude name=".svn" />
         <include name="**" />
       </tarfileset>
@@ -68,20 +68,22 @@
       </fileset>
     </path> 
     <property name="source.file" refid="source.id"/>
-    <delete dir="${project.build.directory}/rpm/hms/buildroot" />
-    <mkdir dir="${project.build.directory}/rpm/hms/SOURCES" />
-    <mkdir dir="${project.build.directory}/rpm/hms/BUILD" />
-    <mkdir dir="${project.build.directory}/rpm/hms/RPMS" />
-    <mkdir dir="${project.build.directory}/rpm/hms/buildroot" />
-    <copy file="${source.file}" tofile="${project.build.directory}/rpm/hms/SOURCES/${final.name}.tar.gz" />
-    <copy file="src/packages/rpm/spec/hms-agent.spec" todir="target/rpm/hms/SPECS">
+    <echo message="${final.name}.linux*.tar.gz"/>
+    <echo message="${source.file}"/>
+    <delete dir="${project.build.directory}/rpm/ambari/buildroot" />
+    <mkdir dir="${project.build.directory}/rpm/ambari/SOURCES" />
+    <mkdir dir="${project.build.directory}/rpm/ambari/BUILD" />
+    <mkdir dir="${project.build.directory}/rpm/ambari/RPMS" />
+    <mkdir dir="${project.build.directory}/rpm/ambari/buildroot" />
+    <copy file="${source.file}" tofile="${project.build.directory}/rpm/ambari/SOURCES/${final.name}.tar.gz" />
+    <copy file="src/packages/rpm/spec/ambari-agent.spec" todir="target/rpm/ambari/SPECS">
       <filterchain>
         <replacetokens>
           <token key="final.name" value="${final.name}" />
           <token key="version" value="${project.version}" />
           <token key="package.name" value="${source.file}" />
           <token key="package.release" value="${package.release}" />
-          <token key="package.build.dir" value="${project.build.directory}/rpm/hms/BUILD" />
+          <token key="package.build.dir" value="${project.build.directory}/rpm/ambari/BUILD" />
           <token key="package.prefix" value="${package.prefix}" />
           <token key="package.conf.dir" value="${package.conf.dir}" />
           <token key="package.log.dir" value="${package.log.dir}" />
@@ -89,9 +91,9 @@
         </replacetokens>
       </filterchain>
     </copy>
-    <rpm specFile="hms-agent.spec" command="-bb" topDir="${project.build.directory}/rpm/hms" cleanBuildDir="true" failOnError="true"/>
+    <rpm specFile="ambari-agent.spec" command="-bb --buildroot=${project.build.directory}/rpm/ambari/BUILDROOT" topDir="${project.build.directory}/rpm/ambari" cleanBuildDir="true" failOnError="true"/>
     <copy todir="${project.build.directory}" flatten="true">
-      <fileset dir="${project.build.directory}/rpm/hms/RPMS">
+      <fileset dir="${project.build.directory}/rpm/ambari/RPMS">
         <include name="**/*.rpm" />
       </fileset>
     </copy>
diff --git a/agent/src/packages/deb/hms-agent.control/postrm b/agent/src/packages/deb/ambari-agent.control/conffile
old mode 100755
new mode 100644
similarity index 91%
copy from agent/src/packages/deb/hms-agent.control/postrm
copy to agent/src/packages/deb/ambari-agent.control/conffile
index a6876c3..b54ea37
--- a/agent/src/packages/deb/hms-agent.control/postrm
+++ b/agent/src/packages/deb/ambari-agent.control/conffile
@@ -1,5 +1,3 @@
-#!/bin/sh
-
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
 # this work for additional information regarding copyright ownership.
@@ -15,6 +13,4 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-/usr/sbin/userdel hms 2> /dev/null >/dev/null
-exit 0
-
+/etc/ambari/ambari-env.sh
diff --git a/agent/src/packages/deb/hms-agent.control/preinst b/agent/src/packages/deb/ambari-agent.control/control
old mode 100755
new mode 100644
similarity index 70%
copy from agent/src/packages/deb/hms-agent.control/preinst
copy to agent/src/packages/deb/ambari-agent.control/control
index ac00d82..d40db41
--- a/agent/src/packages/deb/hms-agent.control/preinst
+++ b/agent/src/packages/deb/ambari-agent.control/control
@@ -1,5 +1,3 @@
-#!/bin/sh
-
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
 # this work for additional information regarding copyright ownership.
@@ -14,8 +12,12 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-
-getent group hadoop 2>/dev/null >/dev/null || /usr/sbin/groupadd -r hadoop
-
-/usr/sbin/useradd --comment "Hadoop Management System" --shell /bin/bash -M -r --groups hadoop --home /home/hms hms 2> /dev/null || :
-
+Package: ambari-agent
+Version: @version@
+Section: misc
+Priority: optional
+Architecture: all
+Depends: python, ethtool
+Maintainer: Apache Software Foundation <ambari-dev@incubator.apache.org>
+Description: Ambari Agent manages software installation and configuration for the Hadoop software stack.
+Distribution: development
diff --git a/agent/src/packages/deb/hms-agent.control/postinst b/agent/src/packages/deb/ambari-agent.control/postinst
similarity index 86%
rename from agent/src/packages/deb/hms-agent.control/postinst
rename to agent/src/packages/deb/ambari-agent.control/postinst
index 1d1d76d..b1eeddc 100755
--- a/agent/src/packages/deb/hms-agent.control/postinst
+++ b/agent/src/packages/deb/ambari-agent.control/postinst
@@ -15,10 +15,10 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-bash /usr/sbin/update-hms-env.sh \
+bash /usr/sbin/update-ambari-agent-env.sh \
   --prefix=/usr \
   --bin-dir=/usr/bin \
-  --conf-dir=/etc/hms \
-  --log-dir=/var/log/hms \
-  --pid-dir=/var/run/hms
+  --conf-dir=/etc/ambari \
+  --log-dir=/var/log/ambari \
+  --pid-dir=/var/run/ambari
 
diff --git a/agent/src/packages/deb/hms-agent.control/postrm b/agent/src/packages/deb/ambari-agent.control/postrm
similarity index 93%
rename from agent/src/packages/deb/hms-agent.control/postrm
rename to agent/src/packages/deb/ambari-agent.control/postrm
index a6876c3..0e58d88 100755
--- a/agent/src/packages/deb/hms-agent.control/postrm
+++ b/agent/src/packages/deb/ambari-agent.control/postrm
@@ -15,6 +15,6 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-/usr/sbin/userdel hms 2> /dev/null >/dev/null
+/usr/sbin/userdel ambari 2> /dev/null >/dev/null
 exit 0
 
diff --git a/agent/src/packages/deb/hms-agent.control/preinst b/agent/src/packages/deb/ambari-agent.control/preinst
similarity index 85%
rename from agent/src/packages/deb/hms-agent.control/preinst
rename to agent/src/packages/deb/ambari-agent.control/preinst
index ac00d82..a87cfb7 100755
--- a/agent/src/packages/deb/hms-agent.control/preinst
+++ b/agent/src/packages/deb/ambari-agent.control/preinst
@@ -15,7 +15,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-getent group hadoop 2>/dev/null >/dev/null || /usr/sbin/groupadd -r hadoop
+getent group hadoop 2>/dev/null >/dev/null || /usr/sbin/groupadd -g 123 -r hadoop
 
-/usr/sbin/useradd --comment "Hadoop Management System" --shell /bin/bash -M -r --groups hadoop --home /home/hms hms 2> /dev/null || :
+/usr/sbin/useradd --comment "Ambari" -u 210 --shell /bin/bash -M -r --groups hadoop --home /home/ambari ambari 2> /dev/null || :
 
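
Unlike the old preinst, the new one pins the uid (210) and gid (123), presumably so file ownership stays numerically identical across cluster nodes. Both commands are guarded so reinstallation stays idempotent; a Python rendering of the same logic (hypothetical helper, same tool paths as the script):

    import grp, pwd, subprocess

    def ensure_ambari_user(user='ambari', uid=210, group='hadoop', gid=123):
        # create the group only if absent, mirroring `getent group || groupadd`
        try:
            grp.getgrnam(group)
        except KeyError:
            subprocess.check_call(['/usr/sbin/groupadd', '-g', str(gid), '-r', group])
        # create the system user only if absent, mirroring `useradd ... || :`
        try:
            pwd.getpwnam(user)
        except KeyError:
            subprocess.check_call(['/usr/sbin/useradd', '--comment', 'Ambari',
                                   '-u', str(uid), '-r', '-M', '--groups', group,
                                   '--home', '/home/' + user, user])
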
diff --git a/agent/src/packages/deb/hms-agent.control/prerm b/agent/src/packages/deb/ambari-agent.control/prerm
similarity index 82%
rename from agent/src/packages/deb/hms-agent.control/prerm
rename to agent/src/packages/deb/ambari-agent.control/prerm
index 3fbbaec..c7bd0e6 100755
--- a/agent/src/packages/deb/hms-agent.control/prerm
+++ b/agent/src/packages/deb/ambari-agent.control/prerm
@@ -15,12 +15,12 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-/etc/init.d/hms-agent stop 2>/dev/null >/dev/null
-bash /usr/sbin/update-hms-env.sh \
+/etc/init.d/ambari-agent stop 2>/dev/null >/dev/null
+bash /usr/sbin/update-ambari-agent-env.sh \
   --prefix=/usr \
   --bin-dir=/usr/bin \
-  --conf-dir=/etc/hms \
-  --log-dir=/var/log/hms \
-  --pid-dir=/var/run/hms \
+  --conf-dir=/etc/ambari \
+  --log-dir=/var/log/ambari \
+  --pid-dir=/var/run/ambari \
   --uninstal
 
diff --git a/agent/src/packages/deb/hms-agent.control/conffile b/agent/src/packages/deb/hms-agent.control/conffile
deleted file mode 100644
index cd69503..0000000
--- a/agent/src/packages/deb/hms-agent.control/conffile
+++ /dev/null
@@ -1 +0,0 @@
-/etc/hms/hms-env.sh
diff --git a/agent/src/packages/deb/hms-agent.control/control b/agent/src/packages/deb/hms-agent.control/control
deleted file mode 100644
index 3f6f761..0000000
--- a/agent/src/packages/deb/hms-agent.control/control
+++ /dev/null
@@ -1,9 +0,0 @@
-Package: hms-agent
-Version: @version@
-Section: misc
-Priority: optional
-Architecture: all
-Depends: openjdk-6-jre-headless
-Maintainer: Apache Software Foundation <hms-dev@incubator.apache.org>
-Description: Hadoop Management System Agent manage software installation and configuration for Hadoop software stack.
-Distribution: development
diff --git a/agent/src/packages/deb/init.d/hms-agent b/agent/src/packages/deb/init.d/ambari-agent
similarity index 66%
rename from agent/src/packages/deb/init.d/hms-agent
rename to agent/src/packages/deb/init.d/ambari-agent
index 564bb6b..74c2009 100755
--- a/agent/src/packages/deb/init.d/hms-agent
+++ b/agent/src/packages/deb/init.d/ambari-agent
@@ -16,7 +16,7 @@
 # limitations under the License.
 
 ### BEGIN INIT INFO
-# Provides:		hms-agent
+# Provides:		ambari-agent
 # Required-Start:	$remote_fs $syslog
 # Required-Stop:	$remote_fs $syslog
 # Default-Start:	2 3 4 5
@@ -26,12 +26,12 @@
 
 set -e
 
-# /etc/init.d/hms-agent: start and stop the Apache HMS Agent daemon
+# /etc/init.d/ambari-agent: start and stop the Apache Ambari Agent daemon
 
 umask 022
 
-if test -f /etc/default/hms-env.sh; then
-    . /etc/default/hms-env.sh
+if test -f /etc/default/ambari-agent-env.sh; then
+    . /etc/default/ambari-agent-env.sh
 fi
 
 . /lib/lsb/init-functions
@@ -42,13 +42,13 @@
 }
 
 check_for_no_start() {
-    # forget it if we're trying to start, and /etc/hms/hms-agent_not_to_be_run exists
-    if [ -e /etc/hms/hms-agent_not_to_be_run ]; then 
+    # forget it if we're trying to start, and /etc/ambari/ambari-agent_not_to_be_run exists
+    if [ -e /etc/ambari/ambari-agent_not_to_be_run ]; then 
 	if [ "$1" = log_end_msg ]; then
 	    log_end_msg 0
 	fi
 	if ! run_by_init; then
-	    log_action_msg "Apache HMS Agent not in use (/etc/hms/hms-agent_not_to_be_run)"
+	    log_action_msg "Apache Ambari Agent not in use (/etc/ambari/ambari-agent_not_to_be_run)"
 	fi
 	exit 0
     fi
@@ -59,16 +59,16 @@
 case "$1" in
   start)
 	check_for_no_start
-	log_daemon_msg "Starting Apache HMS Agent" "hms-agent"
-	if start-stop-daemon --start --quiet --oknodo --pidfile ${HMS_PID_DIR}/hms-agent.pid -x /usr/bin/hms-agent; then
+	log_daemon_msg "Starting Apache HMS Agent" "ambari-agent"
+	if start-stop-daemon --start --quiet --oknodo --pidfile ${HMS_PID_DIR}/ambari-agent.pid -x /usr/bin/ambari-agent; then
 	    log_end_msg 0
 	else
 	    log_end_msg 1
 	fi
 	;;
   stop)
-	log_daemon_msg "Stopping Apache HMS Agent" "hms-agent"
-	if start-stop-daemon --stop --quiet --oknodo --pidfile ${HMS_PID_DIR}/hms-agent.pid; then
+	log_daemon_msg "Stopping Apache HMS Agent" "ambari-agent"
+	if start-stop-daemon --stop --quiet --oknodo --pidfile ${HMS_PID_DIR}/ambari-agent.pid; then
 	    log_end_msg 0
 	else
 	    log_end_msg 1
@@ -77,10 +77,10 @@
 
   restart)
 	check_privsep_dir
-	log_daemon_msg "Restarting Apache HMS Agent" "hms-agent"
-	start-stop-daemon --stop --quiet --oknodo --retry 30 --pidfile ${HMS_PID_DIR}/hms-agent.pid
+	log_daemon_msg "Restarting Apache HMS Agent" "ambari-agent"
+	start-stop-daemon --stop --quiet --oknodo --retry 30 --pidfile ${HMS_PID_DIR}/ambari-agent.pid
 	check_for_no_start log_end_msg
-	if start-stop-daemon --start --quiet --oknodo --pidfile ${HMS_PID_DIR}/hms-agent.pid -x /usr/bin/hms-agent; then
+	if start-stop-daemon --start --quiet --oknodo --pidfile ${AMBARI_PID_DIR}/ambari-agent.pid -x /usr/bin/ambari-agent; then
 	    log_end_msg 0
 	else
 	    log_end_msg 1
@@ -88,11 +88,11 @@
 	;;
 
   status)
-	status_of_proc -p ${HMS_PID_DIR}/hms-agent.pid /usr/bin/hms-agent hms-agent && exit 0 || exit $?
+	status_of_proc -p ${AMBARI_PID_DIR}/ambari-agent.pid /usr/bin/ambari-agent ambari-agent && exit 0 || exit $?
 	;;
 
   *)
-	log_action_msg "Usage: /etc/init.d/hms-agent {start|stop|restart|status}"
+	log_action_msg "Usage: /etc/init.d/ambari-agent {start|stop|restart|status}"
 	exit 1
 esac
 
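
Both the deb and (below) rpm init scripts rest on the same pidfile convention: the daemon's pid is recorded under the pid dir, and status checks probe it with signal 0, which delivers nothing but still performs the existence and permission test. A sketch of that probe (hypothetical helper; the real checks go through start-stop-daemon and status_of_proc):

    import os

    def pid_alive(pidfile):
        # signal 0 is a pure existence/permission probe; nothing is delivered
        try:
            pid = int(open(pidfile).read().strip())
            os.kill(pid, 0)
            return True
        except (IOError, ValueError, OSError):
            return False
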
diff --git a/agent/src/packages/rpm/init.d/hms-agent b/agent/src/packages/rpm/init.d/ambari-agent
similarity index 73%
rename from agent/src/packages/rpm/init.d/hms-agent
rename to agent/src/packages/rpm/init.d/ambari-agent
index 0d0e06f..7eb1b72 100755
--- a/agent/src/packages/rpm/init.d/hms-agent
+++ b/agent/src/packages/rpm/init.d/ambari-agent
@@ -22,27 +22,27 @@
 # description: HBase master
 
 source /etc/rc.d/init.d/functions
-source /etc/default/hms-agent-env.sh
+source /etc/default/ambari-agent-env.sh
 
 RETVAL=0
-PIDFILE="${HMS_PID_DIR}/hms-agent.pid"
-desc="HMS agent daemon"
+PIDFILE="${HMS_PID_DIR}/ambari-agent.pid"
+desc="Ambari agent daemon"
 
 start() {
-  echo -n $"Starting $desc (hms-agent): "
-  daemon /usr/bin/hms-agent
+  echo -n $"Starting $desc (ambari-agent): "
+  daemon /usr/bin/ambari-agent
   RETVAL=$?
   echo
-  [ $RETVAL -eq 0 ] && touch /var/lock/subsys/hms-agent
+  [ $RETVAL -eq 0 ] && touch /var/lock/subsys/ambari-agent
   return $RETVAL
 }
 
 stop() {
-  echo -n $"Stopping $desc (hms-agent): "
-  daemon /usr/bin/hms-agent stop
+  echo -n $"Stopping $desc (ambari-agent): "
+  daemon /usr/bin/ambari-agent stop
   RETVAL=$?
   echo
-  [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/hms-agent $PIDFILE
+  [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/ambari-agent $PIDFILE
 }
 
 restart() {
@@ -51,12 +51,12 @@
 }
 
 checkstatus(){
-  status -p $PIDFILE hms-agent
+  status -p $PIDFILE ambari-agent
   RETVAL=$?
 }
 
 condrestart(){
-  [ -e /var/lock/subsys/hms-agent ] && restart || :
+  [ -e /var/lock/subsys/ambari-agent ] && restart || :
 }
 
 case "$1" in
diff --git a/agent/src/packages/rpm/spec/hms-agent.spec b/agent/src/packages/rpm/spec/ambari-agent.spec
similarity index 73%
rename from agent/src/packages/rpm/spec/hms-agent.spec
rename to agent/src/packages/rpm/spec/ambari-agent.spec
index 504463d..6588202 100644
--- a/agent/src/packages/rpm/spec/hms-agent.spec
+++ b/agent/src/packages/rpm/spec/ambari-agent.spec
@@ -17,7 +17,7 @@
 # RPM Spec file for HBase version @version@
 #
 
-%define name         hms-agent
+%define name         ambari-agent
 %define version      @version@
 %define release      @package.release@
 
@@ -34,7 +34,7 @@
 %define _man_dir     %{_prefix}/man
 %define _pid_dir     @package.pid.dir@
 %define _sbin_dir    %{_prefix}/sbin
-%define _share_dir   %{_prefix}/share/hms
+%define _share_dir   %{_prefix}/share/ambari
 %define _src_dir     %{_prefix}/src
 %define _var_dir     %{_prefix}/var/lib
 
@@ -44,7 +44,7 @@
 
 Summary: Hadoop Management System Agent
 License: Apache License, Version 2.0
-URL: http://incubator.apache.org/hms
+URL: http://incubator.apache.org/ambari
 Vendor: Apache Software Foundation
 Group: Development/Libraries
 Name: %{name}
@@ -56,12 +56,12 @@
 Prefix: %{_log_dir}
 Prefix: %{_pid_dir}
 Buildroot: %{_build_dir}
-Requires: sh-utils, textutils, /usr/sbin/useradd, /usr/sbin/usermod, /sbin/chkconfig, /sbin/service, transmission-cli, zkpython, zookeeper-lib, BitTorrent-bencode, mimerender, simplejson, mimeparse, web.py, python-setuptools, libevent >= 2.0.10, avahi-tools, python-iniparse
+Requires: sh-utils, textutils, /usr/sbin/useradd, /usr/sbin/usermod, /sbin/chkconfig, /sbin/service, transmission-cli, zkpython, zookeeper-lib, BitTorrent-bencode, mimerender, simplejson, mimeparse, web.py, python-setuptools, libevent >= 2.0.10, avahi-tools, python-iniparse, /sbin/ethtool
 AutoReqProv: no
-Provides: hms-agent
+Provides: ambari-agent
 
 %description
-Hadoop Management System Agent manage software installation and configuration for Hadoop software stack.
+Ambari Agent manages software installation and configuration for the Hadoop software stack.
 
 %prep
 
@@ -85,32 +85,29 @@
 mkdir -p ${RPM_BUILD_DIR}%{_conf_dir}
 mkdir -p ${RPM_BUILD_DIR}/etc/init.d
 
-cp ${RPM_BUILD_DIR}/../../../../src/packages/rpm/init.d/hms-agent ${RPM_BUILD_DIR}/etc/init.d/hms-agent
-chmod 0755 ${RPM_BUILD_DIR}/etc/init.d/hms-agent
+cp ${RPM_BUILD_DIR}/../../../../src/packages/rpm/init.d/ambari-agent ${RPM_BUILD_DIR}/etc/init.d/ambari-agent
+chmod 0755 ${RPM_BUILD_DIR}/etc/init.d/ambari-agent
+
+cp -a ${RPM_BUILD_DIR}/* ${RPM_BUILD_DIR}/../BUILDROOT
 
 %preun
-rm -rf /etc/default/hms-agent-env.sh
+rm -rf /etc/default/ambari-agent-env.sh
 
 %pre
+getent group hadoop 2>/dev/null >/dev/null || /usr/sbin/groupadd -g 123 -r hadoop
+/usr/sbin/useradd --comment "Ambari" -u 210 --shell /bin/bash -M -r --groups hadoop --home /home/ambari ambari 2> /dev/null || :
 
 %post
 mkdir -p ${RPM_INSTALL_PREFIX2}
 mkdir -p ${RPM_INSTALL_PREFIX3}
-echo "HMS_LOG_DIR=${RPM_INSTALL_PREFIX2}" > /etc/default/hms-agent-env.sh
-echo "HMS_PID_DIR=${RPM_INSTALL_PREFIX3}" >> /etc/default/hms-agent-env.sh
-mkdir -p /home/hms/var/tmp
-mkdir -p /home/hms/var/cache/downloads
-mkdir -p /home/hms/apps
-
-#${RPM_INSTALL_PREFIX0}/share/hms/sbin/update-hms-agent-env.sh \
-#       --prefix=${RPM_INSTALL_PREFIX0} \
-#       --bin-dir=${RPM_INSTALL_PREFIX0}/bin \
-#       --conf-dir=${RPM_INSTALL_PREFIX1} \
-#       --log-dir=${RPM_INSTALL_PREFIX2} \
-#       --pid-dir=${RPM_INSTALL_PREFIX3}
+echo "AMBARI_LOG_DIR=${RPM_INSTALL_PREFIX2}" > /etc/default/ambari-agent-env.sh
+echo "AMBARI_PID_DIR=${RPM_INSTALL_PREFIX3}" >> /etc/default/ambari-agent-env.sh
+mkdir -p /home/ambari/var/tmp
+mkdir -p /home/ambari/var/cache/downloads
+mkdir -p /home/ambari/apps
 
 %files
 %defattr(-,root,root)
 %{_prefix}
-/etc/init.d/hms-agent
+/etc/init.d/ambari-agent
 %config %{_conf_dir}
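
Each Prefix: line in the spec makes that path relocatable, and rpm exposes the (possibly overridden) values to scriptlets as RPM_INSTALL_PREFIX0 through RPM_INSTALL_PREFIX3 in declaration order; the %post above uses indexes 2 and 3 (the log and pid prefixes) to seed the env file. The effect of that scriptlet, sketched in Python (hypothetical helper):

    def write_agent_env(log_dir, pid_dir, path='/etc/default/ambari-agent-env.sh'):
        # RPM_INSTALL_PREFIX2/3 carry the relocated log and pid prefixes
        f = open(path, 'w')
        try:
            f.write('AMBARI_LOG_DIR=%s\n' % log_dir)
            f.write('AMBARI_PID_DIR=%s\n' % pid_dir)
        finally:
            f.close()
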
diff --git a/agent/src/packages/tarball/all.xml b/agent/src/packages/tarball/all.xml
index 04ca020..0e4f34b 100644
--- a/agent/src/packages/tarball/all.xml
+++ b/agent/src/packages/tarball/all.xml
@@ -1,4 +1,20 @@
 <?xml version="1.0"?>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
 <assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.1"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.1 http://maven.apache.org/xsd/assembly-1.1.1.xsd">
diff --git a/agent/src/packages/update-hms-agent-env.sh b/agent/src/packages/update-ambari-agent-env.sh
similarity index 73%
rename from agent/src/packages/update-hms-agent-env.sh
rename to agent/src/packages/update-ambari-agent-env.sh
index f7836dc..f9320df 100644
--- a/agent/src/packages/update-hms-agent-env.sh
+++ b/agent/src/packages/update-ambari-agent-env.sh
@@ -15,7 +15,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-# This script configures hms-agent-env.sh and symlinkis directories for 
+# This script configures ambari-agent-env.sh and symlinks directories for 
 # relocating RPM locations.
 
 usage() {
@@ -27,8 +27,8 @@
   Optional parameters:
      --arch=i386                 OS Architecture
      --bin-dir=PREFIX/bin        Executable directory
-     --conf-dir=/etc/hms         Configuration directory
-     --log-dir=/var/log/hms      Log directory
+     --conf-dir=/etc/ambari      Configuration directory
+     --log-dir=/var/log/ambari   Log directory
      --pid-dir=/var/run          PID file location
   "
   exit 1
@@ -115,15 +115,15 @@
       rm -f ${BIN_DIR}/${var}
     done
   fi
-  if [ -f /etc/default/hms-agent-env.sh ]; then
-    rm -f /etc/default/hms-agent-env.sh
+  if [ -f /etc/default/ambari-agent-env.sh ]; then
+    rm -f /etc/default/ambari-agent-env.sh
   fi
   if [ "${CONF_DIR}" != "${PREFIX}/conf" ]; then
     rm -f ${PREFIX}/conf
   fi
 
-  rm -f ${PREFIX}/sbin/hms-agent
-  rm -f /etc/init.d/hms-agent
+  rm -f ${PREFIX}/sbin/ambari-agent
+  rm -f /etc/init.d/ambari-agent
 
 else
   # Create symlinks
@@ -136,21 +136,21 @@
     ln -sf ${CONF_DIR} ${PREFIX}/conf
   fi
 
-  chmod 755 ${PREFIX}/share/hms/sbin/*
+  chmod 755 ${PREFIX}/share/ambari/sbin/*
 
-  ln -sf ${PREFIX}/sbin/hms-agent /etc/init.d/hms-agent
+  ln -sf ${PREFIX}/sbin/ambari-agent /etc/init.d/ambari-agent
 
-  ln -sf ${CONF_DIR}/hms-agent-env.sh /etc/default/hms-agent-env.sh
+  ln -sf ${CONF_DIR}/ambari-agent-env.sh /etc/default/ambari-agent-env.sh
 
   mkdir -p ${PID_DIR}
   mkdir -p ${LOG_DIR}
 
   TFILE="/tmp/$(basename $0).$$.tmp"
-  grep -v "^export HMS_HOME" ${CONF_DIR}/hms-agent-env.sh | \
-  grep -v "^export HMS_CONF_DIR" | \
-  grep -v "^export HMS_CLASSPATH" | \
-  grep -v "^export HMS_PID_DIR" | \
-  grep -v "^export HMS_LOG_DIR" | \
+  grep -v "^export AMBARI_HOME" ${CONF_DIR}/ambari-agent-env.sh | \
+  grep -v "^export AMBARI_CONF_DIR" | \
+  grep -v "^export AMBARI_CLASSPATH" | \
+  grep -v "^export AMBARI_PID_DIR" | \
+  grep -v "^export AMBARI_LOG_DIR" | \
   grep -v "^export JAVA_HOME" > ${TFILE}
   if [ -z "${JAVA_HOME}" ]; then
     if [ -e /etc/lsb-release ]; then
@@ -162,12 +162,12 @@
   if [ "${JAVA_HOME}xxx" != "xxx" ]; then
     echo "export JAVA_HOME=${JAVA_HOME}" >> ${TFILE}
   fi
-  echo "export HMS_IDENT_STRING=\`whoami\`" >> ${TFILE}
-  echo "export HMS_HOME=${PREFIX}/share/hms" >> ${TFILE}
-  echo "export HMS_CONF_DIR=${CONF_DIR}" >> ${TFILE}
-  echo "export HMS_CLASSPATH=${CONF_DIR}:${HADOOP_CONF_DIR}:${HADOOP_JARS}:${ZOOKEEPER_JARS}" >> ${TFILE}
-  echo "export HMS_PID_DIR=${PID_DIR}" >> ${TFILE}
-  echo "export HMS_LOG_DIR=${LOG_DIR}" >> ${TFILE}
-  cp ${TFILE} ${CONF_DIR}/hms-agent-env.sh
+  echo "export AMBARI_IDENT_STRING=\`whoami\`" >> ${TFILE}
+  echo "export AMBARI_HOME=${PREFIX}/share/ambari" >> ${TFILE}
+  echo "export AMBARI_CONF_DIR=${CONF_DIR}" >> ${TFILE}
+  echo "export AMBARI_CLASSPATH=${CONF_DIR}:${HADOOP_CONF_DIR}:${HADOOP_JARS}:${ZOOKEEPER_JARS}" >> ${TFILE}
+  echo "export AMBARI_PID_DIR=${PID_DIR}" >> ${TFILE}
+  echo "export AMBARI_LOG_DIR=${LOG_DIR}" >> ${TFILE}
+  cp ${TFILE} ${CONF_DIR}/ambari-agent-env.sh
   rm -f ${TFILE}
 fi
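
The grep -v pipeline above makes the script re-runnable: it strips every export it manages from the existing env file, then appends fresh values. Note that it never filters AMBARI_IDENT_STRING even though it appends it, so repeated runs will accumulate duplicate ident lines. The same filter-and-rewrite in Python (hypothetical helper; the managed-variable list is taken from the greps above, plus the unfiltered ident string):

    MANAGED = ('AMBARI_IDENT_STRING', 'AMBARI_HOME', 'AMBARI_CONF_DIR',
               'AMBARI_CLASSPATH', 'AMBARI_PID_DIR', 'AMBARI_LOG_DIR', 'JAVA_HOME')

    def rewrite_env(text, values):
        # drop managed "export VAR=..." lines, keep everything else,
        # then re-append the managed exports with their new values
        kept = [l for l in text.splitlines()
                if not any(l.startswith('export ' + v) for v in MANAGED)]
        kept += ['export %s=%s' % (k, v) for k, v in values.items()]
        return '\n'.join(kept) + '\n'
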
diff --git a/agent/src/test/python/TestActionQueue.py b/agent/src/test/python/TestActionQueue.py
new file mode 100644
index 0000000..fbfbf24
--- /dev/null
+++ b/agent/src/test/python/TestActionQueue.py
@@ -0,0 +1,94 @@
+#!/usr/bin/env python2.6
+
+'''
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+'''
+
+from unittest import TestCase
+from ambari_agent.ActionQueue import ActionQueue
+from ambari_agent.AmbariConfig import AmbariConfig
+from ambari_agent.FileUtil import getFilePath
+import os, errno, time
+
+class TestActionQueue(TestCase):
+  def test_ActionQueueStartStop(self):
+    actionQueue = ActionQueue(AmbariConfig().getConfig())
+    actionQueue.start()
+    actionQueue.stop()
+    actionQueue.join()
+    self.assertEqual(actionQueue.stopped(), True, 'Action queue is not stopped.') 
+
+  def test_RetryAction(self):
+    action={'id' : 'tttt'}
+    config = AmbariConfig().getConfig()
+    actionQueue = ActionQueue(config)
+    path = actionQueue.getInstallFilename(action['id'])
+    configFile = {
+      "data"       : "test",
+      "owner"      : os.getuid(),
+      "group"      : os.getgid() ,
+      "permission" : 0700,
+      "path"       : path,
+      "umask"      : 022
+    }
+
+    #note that the command in the action is just a listing of the path created
+    #we just want to ensure that 'ls' can run on the data file (in the actual world
+    #this 'ls' would be a puppet or a chef command that would work on a data
+    #file
+    badAction = {
+      'id' : 'tttt',
+      'kind' : 'INSTALL_AND_CONFIG_ACTION',
+      'workDirComponent' : 'abc-hdfs',
+      'file' : configFile,
+      'clusterDefinitionRevision' : 12,
+      'command' : ['/bin/ls',"/foo/bar/badPath1234"]
+    }
+    path=getFilePath(action,path)
+    goodAction = {
+      'id' : 'tttt',
+      'kind' : 'INSTALL_AND_CONFIG_ACTION',
+      'workDirComponent' : 'abc-hdfs',
+      'file' : configFile,
+      'clusterDefinitionRevision' : 12,
+      'command' : ['/bin/ls',path]
+    }
+    actionQueue.start()
+    response = {'actions' : [badAction,goodAction]}
+    actionQueue.maxRetries = 2
+    actionQueue.sleepInterval = 1
+    result = actionQueue.put(response)
+    results = actionQueue.result()
+    sleptCount = 1
+    while (len(results) < 2 and sleptCount < 15):
+      time.sleep(1)
+      sleptCount += 1
+      results = actionQueue.result()
+    actionQueue.stop()
+    actionQueue.join()
+    self.assertEqual(len(results), 2, 'Number of results is not 2.')
+    result = results[0]
+    maxretries = config.get('command', 'maxretries')
+    self.assertEqual(int(result['retryActionCount']), 
+                     int(maxretries),
+                     "Number of retries is %d and not %d" % 
+                     (int(result['retryActionCount']), int(str(maxretries))))
+    result = results[1]
+    self.assertEqual(int(result['retryActionCount']), 
+                     1,
+                     "Number of retries is %d and not %d" % 
+                     (int(result['retryActionCount']), 1))        
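
The retry test above cannot assert immediately: the bad action is retried maxretries times with sleepInterval pauses, so the test polls actionQueue.result() under a 15-second deadline. The pattern generalizes (hypothetical helper):

    import time

    def wait_until(predicate, timeout=15, interval=1):
        # re-check a condition until it holds or the deadline passes,
        # the same bounded polling the while loop above performs
        deadline = time.time() + timeout
        while time.time() < deadline:
            if predicate():
                return True
            time.sleep(interval)
        return predicate()
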
diff --git a/agent/src/test/python/TestAgentActions.py b/agent/src/test/python/TestAgentActions.py
new file mode 100644
index 0000000..e844bd9
--- /dev/null
+++ b/agent/src/test/python/TestAgentActions.py
@@ -0,0 +1,102 @@
+#!/usr/bin/env python2.6
+
+'''
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+'''
+
+from unittest import TestCase
+import os, errno, getpass
+from ambari_agent.ActionQueue import ActionQueue
+from ambari_agent.AmbariConfig import AmbariConfig
+from ambari_agent.FileUtil import getFilePath
+from ambari_agent import shell
+from ambari_agent.shell import serverTracker
+import time
+
+class TestAgentActions(TestCase):
+  def test_installAndConfigAction(self):
+    action={'id' : 'tttt'}
+    actionQueue = ActionQueue(AmbariConfig().getConfig())
+    path = actionQueue.getInstallFilename(action['id'])
+    configFile = {
+      "data"       : "test",
+      "owner"      : os.getuid(),
+      "group"      : os.getgid() ,
+      "permission" : 0700,
+      "path"       : path,
+      "umask"      : 022
+    }
+
+    #note that the command in the action is just a listing of the path created
+    #we just want to ensure that 'ls' can run on the data file (in the actual world
+    #this 'ls' would be a puppet or a chef command that would work on a data
+    #file
+    path=getFilePath(action,path)
+    action = { 
+      'id' : 'tttt',
+      'kind' : 'INSTALL_AND_CONFIG_ACTION',
+      'workDirComponent' : 'abc-hdfs',
+      'file' : configFile,
+      'clusterDefinitionRevision' : 12,
+      'command' : ['/bin/ls',path]
+    }
+    result = { }
+    actionQueue = ActionQueue(AmbariConfig().getConfig())
+    result = actionQueue.installAndConfigAction(action)
+    cmdResult = result['commandResult']
+    self.assertEqual(cmdResult['exitCode'], 0, "installAndConfigAction test failed. Returned %d " % cmdResult['exitCode'])
+    self.assertEqual(cmdResult['output'], path + "\n", "installAndConfigAction test failed Returned %s " % cmdResult['output'])
+
+  def test_startAndStopAction(self):
+    command = {'script' : 'import os,sys,time\ni = 0\nwhile (i < 1000):\n  print "testhello"\n  sys.stdout.flush()\n  time.sleep(1)\n  i+=1',
+               'param' : ''}
+    action={'id' : 'ttt',
+            'kind' : 'START_ACTION',
+            'clusterId' : 'foobar',
+            'clusterDefinitionRevision' : 1,
+            'component' : 'foocomponent',
+            'role' : 'foorole',
+            'command' : command,
+            'user' : getpass.getuser()
+    }
+    
+    actionQueue = ActionQueue(AmbariConfig().getConfig())
+    result = actionQueue.startAction(action)
+    cmdResult = result['commandResult']
+    self.assertEqual(cmdResult['exitCode'], 0, "starting a process failed")
+    shell = actionQueue.getshellinstance()
+    key = shell.getServerKey(action['clusterId'],action['clusterDefinitionRevision'],
+                       action['component'],action['role'])
+    keyPresent = True
+    if not key in serverTracker:
+      keyPresent = False
+    self.assertEqual(keyPresent, True, "Key not present")
+    plauncher = serverTracker[key]
+    self.assertTrue(plauncher.getpid() > 0, "Pid less than 0!")
+    time.sleep(5)
+    shell.stopProcess(key)
+    keyPresent = False
+    if key in serverTracker:
+      keyPresent = True
+    self.assertEqual(keyPresent, False, "Key present")
+    processexists = True
+    try:
+      # probe the saved launcher pid; the key is already gone from serverTracker
+      os.kill(plauncher.getpid(), 0)
+    except OSError:
+      processexists = False
+    self.assertEqual(processexists, False, "Process still exists!")
+    self.assertTrue("testhello" in plauncher.out, "Output doesn't match!")
diff --git a/agent/src/test/python/TestAmbariComponent.py b/agent/src/test/python/TestAmbariComponent.py
new file mode 100644
index 0000000..22fa622
--- /dev/null
+++ b/agent/src/test/python/TestAmbariComponent.py
@@ -0,0 +1,65 @@
+#!/usr/bin/env python2.6
+
+'''
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+'''
+
+from unittest import TestCase
+import ambari_component
+import os
+import shutil
+
+class TestAmbariComponent(TestCase):
+
+  def setUp(self):
+    global oldCwd, tmp
+    tmp = "/tmp/config/hadoop"
+    oldCwd = os.getcwd()
+    os.chdir("/tmp")
+    if not os.path.exists(tmp):
+      os.makedirs(tmp)
+
+  def tearDown(self):
+    global oldCwd, tmp
+    shutil.rmtree(tmp)
+    os.chdir(oldCwd)
+
+  def test_copySh(self):
+    result = ambari_component.copySh(os.getuid(), os.getgid(), 0700, 'hadoop/hadoop-env', 
+      {
+        'HADOOP_CONF_DIR'      : '/etc/hadoop',
+        'HADOOP_NAMENODE_OPTS' : '-Dsecurity.audit.logger=INFO,DRFAS'
+      }
+    )
+    self.assertEqual(result['exitCode'], 0)
+
+  def test_copyProperties(self):
+    result = ambari_component.copyProperties(os.getuid(), os.getgid(), 0700, 'hadoop/hadoop-metrics2',
+      {
+        '*.period':'60'
+      }
+    )
+    self.assertEqual(result['exitCode'], 0)
+
+  def test_copyXml(self):
+    result = ambari_component.copyXml(os.getuid(), os.getgid(), 0700, 'hadoop/core-site',
+      {
+        'local.realm'     : '${KERBEROS_REALM}',
+        'fs.default.name' : 'hdfs://localhost:8020'
+      }
+    )
+    self.assertEqual(result['exitCode'], 0)
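
copySh, copyProperties, and copyXml evidently render the same kind of key/value map into three Hadoop config formats (env shell fragment, .properties file, XML). For the properties case the rendering is essentially one key=value per line; a sketch (hypothetical renderer, not the ambari_component implementation):

    def to_properties(pairs):
        # one sorted key=value line per entry, hadoop-metrics2 style
        return ''.join('%s=%s\n' % (k, v) for k, v in sorted(pairs.items()))

    print to_properties({'*.period': '60'})   # *.period=60
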
diff --git a/agent/src/test/python/TestFileUtil.py b/agent/src/test/python/TestFileUtil.py
new file mode 100644
index 0000000..53e55c5
--- /dev/null
+++ b/agent/src/test/python/TestFileUtil.py
@@ -0,0 +1,56 @@
+#!/usr/bin/env python2.6
+
+'''
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+'''
+
+from unittest import TestCase
+from ambari_agent.FileUtil import writeFile, createStructure, deleteStructure
+import os, errno
+
+class TestFileUtil(TestCase):
+  def test_createStructure(self):
+    action = { 'clusterId' : 'abc', 'role' : 'hdfs', 'workDirComponent' : 'abc-hdfs' }
+    result = {}
+    result = createStructure(action, result)
+    self.assertEqual(result['exitCode'], 0, 'Create cluster structure failed.')
+
+#  def test_writeFile(self):
+    configFile = {
+      "data"       : "test",
+      "owner"      : os.getuid(),
+      "group"      : os.getgid() ,
+      "permission" : 0700,
+      "path"       : "/tmp/ambari_file_test/_file_write_test",
+      "umask"      : 022
+    }
+    action = { 
+      'clusterId' : 'abc', 
+      'role' : 'hdfs', 
+      'workDirComponent' : 'abc-hdfs',
+      'file' : configFile 
+    }
+    result = { }
+    result = writeFile(action, result)
+    self.assertEqual(result['exitCode'], 0, 'WriteFile test with uid/gid failed.')
+
+#  def test_deleteStructure(self):
+    result = { }
+    action = { 'clusterId' : 'abc', 'role' : 'hdfs', 'workDirComponent' : 'abc-hdfs' }
+    result = deleteStructure(action, result)
+    self.assertEqual(result['exitCode'], 0, 'Delete cluster structure failed.')
+
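
The configFile dict carries ownership, mode, and umask along with the payload, and the tests pass the caller's own uid/gid so no privilege is needed. What a writeFile-style helper has to honour, sketched below (hypothetical; the real FileUtil.writeFile may differ, and the parent directory is assumed to exist, as createStructure arranges above):

    import os

    def apply_file_spec(spec):
        # the umask applies at creation; explicit mode and ownership follow
        old = os.umask(spec['umask'])
        try:
            f = open(spec['path'], 'w')
            try:
                f.write(spec['data'])
            finally:
                f.close()
            os.chmod(spec['path'], spec['permission'])
            os.chown(spec['path'], spec['owner'], spec['group'])
        finally:
            os.umask(old)
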
diff --git a/agent/src/main/python/hms_agent/shell.py b/agent/src/test/python/TestHardware.py
old mode 100755
new mode 100644
similarity index 64%
copy from agent/src/main/python/hms_agent/shell.py
copy to agent/src/test/python/TestHardware.py
index e421f3a..68ee8b2
--- a/agent/src/main/python/hms_agent/shell.py
+++ b/agent/src/test/python/TestHardware.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python2.6
 
 '''
 Licensed to the Apache Software Foundation (ASF) under one
@@ -18,16 +18,13 @@
 limitations under the License.
 '''
 
-import subprocess
-import os
+from unittest import TestCase
+from ambari_agent.Hardware import Hardware
 
-class shellRunner:
-    def run(self, script):
-        code = 0
-        cmd = " "
-        cmd = cmd.join(script)
-        p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True, close_fds=True)
-        out, err = p.communicate()
-        if p.wait() != 0:
-            code = 1
-        return {'exit_code': code, 'output': out, 'error': err}
+class TestHardware(TestCase):
+  def test_build(self):
+    hardware = Hardware()
+    result = hardware.get()
+    self.assertTrue(result['coreCount'] >= 1)
+    self.assertTrue(result['netSpeed'] is not None)
+
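
The netSpeed assertion is what the new ethtool dependency (deb Depends and rpm Requires above) exists to satisfy. A plausible probe, assuming Hardware shells out to ethtool and scrapes its Speed line (hypothetical; the actual Hardware.get implementation is not shown here):

    import subprocess

    def net_speed(iface='eth0'):
        # parse the "Speed: 1000Mb/s" line of `ethtool <iface>`
        p = subprocess.Popen(['ethtool', iface], stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE)
        out = p.communicate()[0]
        for line in out.splitlines():
            if line.strip().startswith('Speed:'):
                return line.split(':', 1)[1].strip()
        return None
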
diff --git a/agent/src/test/python/TestHeartbeat.py b/agent/src/test/python/TestHeartbeat.py
new file mode 100644
index 0000000..c9dc354
--- /dev/null
+++ b/agent/src/test/python/TestHeartbeat.py
@@ -0,0 +1,38 @@
+#!/usr/bin/env python2.6
+
+'''
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+'''
+
+from unittest import TestCase
+from ambari_agent.Heartbeat import Heartbeat
+from ambari_agent.ActionQueue import ActionQueue
+from ambari_agent.AmbariConfig import AmbariConfig
+import socket
+
+class TestHeartbeat(TestCase):
+  def test_build(self):
+    actionQueue = ActionQueue(AmbariConfig().getConfig())
+    heartbeat = Heartbeat(actionQueue)
+    result = heartbeat.build(100)
+    self.assertEqual(result['hostname'], socket.gethostname(), 'hostname mismatched.')
+    self.assertEqual(result['responseId'], 100, 'responseId mismatched.')
+    self.assertEqual(result['idle'], True, 'Heartbeat should indicate Agent is idle.')
+    self.assertEqual(result['installScriptHash'], -1, 'installScriptHash should be -1.')
+    self.assertEqual(result['firstContact'], True, 'firstContact should be True.')
+    result = heartbeat.build(101)
+    self.assertEqual(result['firstContact'], False, 'firstContact should be False.')
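
The two build() calls pin down a small protocol: the agent echoes the server-chosen responseId, and firstContact flips from True to False once any heartbeat has been built, which is how the controller distinguishes a fresh or restarted agent. The contract in miniature (hypothetical stand-in, not the Heartbeat class):

    import socket

    class FakeHeartbeat(object):
        def __init__(self):
            self.reported = False
        def build(self, response_id):
            # the first build() reports firstContact=True; every later one False
            first, self.reported = (not self.reported), True
            return {'hostname': socket.gethostname(), 'responseId': response_id,
                    'idle': True, 'installScriptHash': -1, 'firstContact': first}
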
diff --git a/agent/src/main/python/hms_agent/shell.py b/agent/src/test/python/TestServerStatus.py
old mode 100755
new mode 100644
similarity index 64%
rename from agent/src/main/python/hms_agent/shell.py
rename to agent/src/test/python/TestServerStatus.py
index e421f3a..8d09037
--- a/agent/src/main/python/hms_agent/shell.py
+++ b/agent/src/test/python/TestServerStatus.py
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python2.6
 
 '''
 Licensed to the Apache Software Foundation (ASF) under one
@@ -18,16 +18,12 @@
 limitations under the License.
 '''
 
-import subprocess
-import os
+from unittest import TestCase
+from ambari_agent.ServerStatus import ServerStatus
 
-class shellRunner:
-    def run(self, script):
-        code = 0
-        cmd = " "
-        cmd = cmd.join(script)
-        p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True, close_fds=True)
-        out, err = p.communicate()
-        if p.wait() != 0:
-            code = 1
-        return {'exit_code': code, 'output': out, 'error': err}
+class TestServerStatus(TestCase):
+  def test_build(self):
+    serverStatus = ServerStatus()
+    result = serverStatus.build()
+    self.assertEqual(result, [], 'List of running servers should be empty.')
+
diff --git a/agent/src/test/python/unitTests.py b/agent/src/test/python/unitTests.py
new file mode 100644
index 0000000..233034b
--- /dev/null
+++ b/agent/src/test/python/unitTests.py
@@ -0,0 +1,51 @@
+#!/usr/bin/env python2.6
+
+'''
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+'''
+
+import unittest
+import doctest
+
+class TestAgent(unittest.TestSuite):
+  def run(self, result):
+    run = unittest.TestSuite.run
+    run(self, result)
+    return result
+
+def all_tests_suite():
+  suite = unittest.TestLoader().loadTestsFromNames([
+    'TestHeartbeat',
+    'TestHardware',
+    'TestServerStatus',
+    'TestFileUtil',
+    'TestActionQueue',
+    'TestAmbariComponent',
+    'TestAgentActions'
+  ])
+  return TestAgent([suite])
+
+def main():
+  runner = unittest.TextTestRunner()
+  suite = all_tests_suite()
+  raise SystemExit(not runner.run(suite).wasSuccessful())
+
+if __name__ == '__main__':
+  import os
+  import sys
+  sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))))
+  main()
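
Because the runner inserts agent/src onto sys.path and loads the suite by module name, the whole suite runs with a plain `python2.6 unitTests.py` from this directory. loadTestsFromName also accepts a single name, so one module can be run in isolation (sketch, assuming the same working directory and path setup):

    import unittest

    # run just TestHeartbeat; a dotted name resolving to a module, class,
    # or method would work equally well
    suite = unittest.TestLoader().loadTestsFromName('TestHeartbeat')
    unittest.TextTestRunner(verbosity=2).run(suite)
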
diff --git a/beacon/pom.xml b/beacon/pom.xml
deleted file mode 100644
index c549345..0000000
--- a/beacon/pom.xml
+++ /dev/null
@@ -1,47 +0,0 @@
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
-
-    <parent>
-        <groupId>org.apache.hms</groupId>
-        <artifactId>hms</artifactId>
-        <version>0.1.0</version>
-    </parent>
-
-    <modelVersion>4.0.0</modelVersion>
-    <groupId>org.apache.hms</groupId>
-    <artifactId>beacon</artifactId>
-    <packaging>jar</packaging>
-    <version>0.1.0-SNAPSHOT</version>
-    <name>beacon</name>
-    <description>Hadoop Management System ZooKeeper Beacon</description>
-
-    <dependencies>
-      <dependency>
-        <groupId>org.apache.hms</groupId>
-        <artifactId>common</artifactId>
-        <version>0.1.0-SNAPSHOT</version>
-      </dependency>
-    </dependencies>
-
-    <build>
-      <plugins>
-        <plugin>
-          <groupId>org.apache.maven.plugins</groupId>
-          <artifactId>maven-jar-plugin</artifactId>
-          <configuration>
-            <archive>
-              <manifest>
-                <mainClass>org.apache.hms.beacon.Beacon</mainClass>
-                <packageName>org.apache.hms.beacon</packageName>
-              </manifest>
-              <manifestEntries>
-                <mode>development</mode>
-                <url>${project.url}</url>
-              </manifestEntries>
-            </archive>
-          </configuration>
-        </plugin>
-      </plugins>
-    </build>
-
-</project>
diff --git a/beacon/src/main/java/org/apache/hms/beacon/Beacon.java b/beacon/src/main/java/org/apache/hms/beacon/Beacon.java
deleted file mode 100755
index e141338..0000000
--- a/beacon/src/main/java/org/apache/hms/beacon/Beacon.java
+++ /dev/null
@@ -1,71 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.beacon;
-
-import java.io.IOException;
-import java.net.UnknownHostException;
-import org.apache.hms.common.util.DaemonWatcher;
-import org.apache.hms.common.util.MulticastDNS;
-
-/**
- * 
- * HMS Beacon broadcast ZooKeeper server location by using MulticastDNS.
- * This utility runs on the same node as ZooKeeper server.
- *
- */
-public class Beacon extends MulticastDNS {
-  public Beacon() throws UnknownHostException {
-    super();
-  }
- 
-  public Beacon(String svcType, int svcPort) throws UnknownHostException {
-    super(svcType, svcPort);  
-  }
-  
-  /**
-   * Register Zookeeper host location in MulticastDNS
-   * @throws IOException
-   */
-  public void start() throws IOException {
-    handleRegisterCommand();
-  }
-  
-  /**
-   * Remove Zookeeper host location from MulticastDNS
-   * @throws IOException
-   */
-  public void stop() throws IOException {
-    handleUnregisterCommand();
-  }
- 
-  public static void main(String[] args) throws IOException {
-    DaemonWatcher.createInstance(System.getProperty("PID"), 9101);
-    try {
-      final Beacon helper = new Beacon("_zookeeper._tcp.local.", 2181);
-      try {
-        helper.start();
-      } catch(Throwable t) {
-        helper.stop();
-        throw t;
-      }
-    } catch(Throwable e) {
-      DaemonWatcher.bailout(1);      
-    }
-  }
-}
\ No newline at end of file
diff --git a/beacon/src/main/resources/log4j.properties b/beacon/src/main/resources/log4j.properties
deleted file mode 100755
index 6edd539..0000000
--- a/beacon/src/main/resources/log4j.properties
+++ /dev/null
@@ -1,28 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-log4j.rootLogger=INFO, R
-log4j.appender.R=org.apache.log4j.RollingFileAppender
-log4j.appender.R.File=${HMS_LOG_DIR}/hms-beacon.log
-log4j.appender.R.MaxFileSize=10MB
-log4j.appender.R.MaxBackupIndex=10
-log4j.appender.R.layout=org.apache.log4j.PatternLayout
-log4j.appender.R.layout.ConversionPattern=%d{ISO8601} %p %t %c{1} - %m%n
-
-log4j.appender.stdout=org.apache.log4j.ConsoleAppender
-log4j.appender.stdout.follow=true
-log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
-log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %p %t %c{1} - %m%n
-
diff --git a/bin/hms-config.sh b/bin/hms-config.sh
deleted file mode 100644
index d3002b9..0000000
--- a/bin/hms-config.sh
+++ /dev/null
@@ -1,88 +0,0 @@
-#!/usr/bin/env bash
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# included in all the hadoop scripts with source command
-# should not be executable directly
-# also should not be passed any arguments, since we need original $*
-
-# resolve links - $0 may be a softlink
-
-this="$0"
-while [ -h "$this" ]; do
-  ls=`ls -ld "$this"`
-  link=`expr "$ls" : '.*-> \(.*\)$'`
-  if expr "$link" : '.*/.*' > /dev/null; then
-    this="$link"
-  else
-    this=`dirname "$this"`/"$link"
-  fi
-done
-
-# convert relative path to absolute path
-bin=`dirname "$this"`
-script=`basename "$this"`
-bin=`cd "$bin"; pwd`
-this="$bin/$script"
-
-#check to see if the conf dir or hms home are given as an optional arguments
-if [ $# -gt 1 ]
-then
-  if [ "--config" = "$1" ]
-  then
-    shift
-    confdir=$1
-    shift
-    HMS_CONF_DIR=$confdir
-  fi
-fi
-
-# the root of the hms installation
-export HMS_HOME=`dirname "$this"`/..
-
-if [ -z ${HMS_LOG_DIR} ]; then
-    export HMS_LOG_DIR="${HMS_HOME}/logs"
-fi
-
-if [ -z ${HMS_PID_DIR} ]; then
-    export HMS_PID_DIR="${HMS_HOME}/var/run"
-fi
-
-HMS_VERSION=`cat ${HMS_HOME}/VERSION`
-HMS_IDENT_STRING=`whoami`
-
-# Allow alternate conf dir location.
-if [ -z "${HMS_CONF_DIR}" ]; then
-    HMS_CONF_DIR="${HMS_CONF_DIR:-$HMS_HOME/conf}"
-    export HMS_CONF_DIR=${HMS_HOME}/conf
-fi
-
-if [ -f "${HMS_CONF_DIR}/hms-env.sh" ]; then
-  . "${HMS_CONF_DIR}/hms-env.sh"
-fi
-
-COMMON=`ls ${HMS_HOME}/lib/*.jar`
-export COMMON=`echo ${COMMON} | sed 'y/ /:/'`
-
-export HMS_CORE=${HMS_HOME}/hms-core-${HMS_VERSION}.jar
-export HMS_AGENT=${HMS_HOME}/hms-agent-${HMS_VERSION}.jar
-export CURRENT_DATE=`date +%Y%m%d%H%M`
-
-if [ -z "$JAVA_HOME" ] ; then
-  echo ERROR! You forgot to set JAVA_HOME in conf/hms-env.sh
-fi
-
-export JPS="ps ax"
-
diff --git a/bin/hms-daemon.sh b/bin/hms-daemon.sh
deleted file mode 100755
index 82fc7d8..0000000
--- a/bin/hms-daemon.sh
+++ /dev/null
@@ -1,202 +0,0 @@
-#!/usr/bin/env bash
-#
-#/**
-# * Copyright 2007 The Apache Software Foundation
-# *
-# * Licensed to the Apache Software Foundation (ASF) under one
-# * or more contributor license agreements.  See the NOTICE file
-# * distributed with this work for additional information
-# * regarding copyright ownership.  The ASF licenses this file
-# * to you under the Apache License, Version 2.0 (the
-# * "License"); you may not use this file except in compliance
-# * with the License.  You may obtain a copy of the License at
-# *
-# *     http://www.apache.org/licenses/LICENSE-2.0
-# *
-# * Unless required by applicable law or agreed to in writing, software
-# * distributed under the License is distributed on an "AS IS" BASIS,
-# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# * See the License for the specific language governing permissions and
-# * limitations under the License.
-# */
-# 
-# Runs a Hadoop hms command as a daemon.
-#
-# Environment Variables
-#
-#   HMS_CONF_DIR   Alternate hms conf dir. Default is ${HMS_HOME}/conf.
-#   HMS_LOG_DIR    Where log files are stored.  PWD by default.
-#   HMS_PID_DIR    The pid files are stored. /tmp by default.
-#   HMS_IDENT_STRING   A string representing this instance of hadoop. $USER by default
-#   HMS_NICENESS The scheduling priority for daemons. Defaults to 0.
-#
-# Modelled after $HADOOP_HOME/bin/hadoop-daemon.sh
-
-usage="Usage: hms-daemon.sh [--config <conf-dir>]\
- (start|stop|restart) <hms-command> \
- <args...>"
-
-# if no args specified, show usage
-if [ $# -le 1 ]; then
-  echo $usage
-  exit 1
-fi
-
-bin=`dirname "${BASH_SOURCE-$0}"`
-bin=`cd "$bin">/dev/null; pwd`
-
-. "$bin"/hms-config.sh
-
-# get arguments
-startStop=$1
-shift
-
-command=$1
-shift
-
-hms_rotate_log ()
-{
-    log=$1;
-    num=5;
-    if [ -n "$2" ]; then
-    num=$2
-    fi
-    if [ -f "$log" ]; then # rotate logs
-    while [ $num -gt 1 ]; do
-        prev=`expr $num - 1`
-        [ -f "$log.$prev" ] && mv -f "$log.$prev" "$log.$num"
-        num=$prev
-    done
-    mv -f "$log" "$log.$num";
-    fi
-}
-
-wait_until_done ()
-{
-    p=$1
-    cnt=${HMS_SLAVE_TIMEOUT:-300}
-    origcnt=$cnt
-    while kill -0 $p > /dev/null 2>&1; do
-      if [ $cnt -gt 1 ]; then
-        cnt=`expr $cnt - 1`
-        sleep 1
-      else
-        echo "Process did not complete after $origcnt seconds, killing."
-        kill -9 $p
-        exit 1
-      fi
-    done
-    return 0
-}
-
-# get log directory
-if [ "$HMS_LOG_DIR" = "" ]; then
-  export HMS_LOG_DIR="$HMS_HOME/logs"
-fi
-mkdir -p "$HMS_LOG_DIR"
-
-if [ "$HMS_PID_DIR" = "" ]; then
-  HMS_PID_DIR=/tmp
-fi
-
-if [ "$HMS_IDENT_STRING" = "" ]; then
-  export HMS_IDENT_STRING="$USER"
-fi
-
-# Some variables
-# Work out java location so can print version into log.
-if [ "$JAVA_HOME" != "" ]; then
-  #echo "run java in $JAVA_HOME"
-  JAVA_HOME=$JAVA_HOME
-fi
-if [ "$JAVA_HOME" = "" ]; then
-  echo "Error: JAVA_HOME is not set."
-  exit 1
-fi
-JAVA=$JAVA_HOME/bin/java
-export HMS_LOGFILE=hms-$HMS_IDENT_STRING-$command-$HOSTNAME.log
-export HMS_ROOT_LOGGER="INFO,DRFA"
-logout=$HMS_LOG_DIR/hms-$HMS_IDENT_STRING-$command-$HOSTNAME.out  
-loglog="${HMS_LOG_DIR}/${HMS_LOGFILE}"
-pid=$HMS_PID_DIR/hms-$HMS_IDENT_STRING-$command.pid
-
-# Set default scheduling priority
-if [ "$HMS_NICENESS" = "" ]; then
-    export HMS_NICENESS=0
-fi
-
-case $startStop in
-
-  (start)
-    mkdir -p "$HMS_PID_DIR"
-    if [ -f $pid ]; then
-      if kill -0 `cat $pid` > /dev/null 2>&1; then
-        echo $command running as process `cat $pid`.  Stop it first.
-        exit 1
-      fi
-    fi
-
-    hms_rotate_log $logout
-    echo starting $command, logging to $logout
-    # Add to the command log file vital stats on our environment.
-    echo "`date` Starting $command on `hostname`" >> $loglog
-    echo "ulimit -n `ulimit -n`" >> $loglog 2>&1
-    nohup nice -n $HMS_NICENESS "$HMS_HOME"/bin/hms \
-        --config "${HMS_CONF_DIR}" \
-        $command $startStop "$@" > "$logout" 2>&1 < /dev/null &
-    sleep 1; head "$logout"
-    ;;
-
-  (stop)
-    if [ -f $pid ]; then
-      # kill -0 == see if the PID exists 
-      if kill -0 `cat $pid` > /dev/null 2>&1; then
-        echo -n stopping $command
-        if [ "$command" = "master" ]; then
-          echo "`date` Killing $command" >> $loglog
-          kill -9 `cat $pid` > /dev/null 2>&1
-        else
-          echo "`date` Killing $command" >> $loglog
-          kill `cat $pid` > /dev/null 2>&1
-        fi
-        while true > /dev/null 2>&1; do
-          echo -n "."
-          sleep 1;
-          if [ -f $pid ]; then
-            kill `cat $pid` > /dev/nul 2>&1
-          else
-            break
-          fi
-        done
-        echo
-      else
-        retval=$?
-        echo no $command to stop because kill -0 of pid `cat $pid` failed with status $retval
-      fi
-    else
-      echo no $command to stop because no pid file $pid
-    fi
-    ;;
-
-  (restart)
-    thiscmd=$0
-    args=$@
-    # stop the command
-    $thiscmd --config "${HMS_CONF_DIR}" stop $command $args &
-    wait_until_done $!
-    # wait a user-specified sleep period
-    sp=${HMS_RESTART_SLEEP:-3}
-    if [ $sp -gt 0 ]; then
-      sleep $sp
-    fi
-    # start the command
-    $thiscmd --config "${HMS_CONF_DIR}" start $command $args &
-    wait_until_done $!
-    ;;
-
-  (*)
-    echo $usage
-    exit 1
-    ;;
-
-esac
diff --git a/client/.classpath b/client/.classpath
deleted file mode 100644
index f90917e..0000000
--- a/client/.classpath
+++ /dev/null
@@ -1,47 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<classpath>
-	<classpathentry including="**/*.java" kind="src" output="target/test-classes" path="src/test/java"/>
-	<classpathentry including="**/*.java" kind="src" path="src/main/java"/>
-	<classpathentry excluding="**/*.java" kind="src" path="src/main/resources"/>
-	<classpathentry kind="var" path="M2_REPO/javax/activation/activation/1.1/activation-1.1.jar"/>
-	<classpathentry kind="var" path="M2_REPO/javax/xml/bind/jaxb-api/2.2.2/jaxb-api-2.2.2.jar"/>
-	<classpathentry kind="var" path="M2_REPO/javax/jmdns/jmdns/3.4.0/jmdns-3.4.0.jar"/>
-	<classpathentry kind="var" path="M2_REPO/javax/servlet/servlet-api/2.5/servlet-api-2.5.jar"/>
-	<classpathentry kind="var" path="M2_REPO/javax/xml/stream/stax-api/1.0-2/stax-api-1.0-2.jar"/>
-	<classpathentry kind="var" path="M2_REPO/asm/asm/3.1/asm-3.1.jar"/>
-	<classpathentry kind="var" path="M2_REPO/dk/brics/automaton/automaton/1.11.2/automaton-1.11.2.jar"/>
-	<classpathentry kind="var" path="M2_REPO/commons-beanutils/commons-beanutils/1.7.0/commons-beanutils-1.7.0.jar"/>
-	<classpathentry kind="var" path="M2_REPO/commons-beanutils/commons-beanutils-core/1.8.0/commons-beanutils-core-1.8.0.jar"/>
-	<classpathentry kind="var" path="M2_REPO/commons-cli/commons-cli/1.2/commons-cli-1.2.jar"/>
-	<classpathentry kind="var" path="M2_REPO/commons-codec/commons-codec/1.3/commons-codec-1.3.jar"/>
-	<classpathentry kind="var" path="M2_REPO/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar"/>
-	<classpathentry kind="var" path="M2_REPO/commons-configuration/commons-configuration/1.6/commons-configuration-1.6.jar"/>
-	<classpathentry kind="var" path="M2_REPO/commons-daemon/commons-daemon/1.0.5/commons-daemon-1.0.5.jar"/>
-	<classpathentry kind="var" path="M2_REPO/commons-digester/commons-digester/1.8/commons-digester-1.8.jar"/>
-	<classpathentry kind="var" path="M2_REPO/commons-httpclient/commons-httpclient/3.0.1/commons-httpclient-3.0.1.jar"/>
-	<classpathentry kind="var" path="M2_REPO/commons-lang/commons-lang/2.4/commons-lang-2.4.jar"/>
-	<classpathentry kind="var" path="M2_REPO/commons-logging/commons-logging/1.1.1/commons-logging-1.1.1.jar"/>
-	<classpathentry kind="src" path="/hms-controller"/>
-	<classpathentry kind="var" path="M2_REPO/org/codehaus/jackson/jackson-core-asl/1.7.1/jackson-core-asl-1.7.1.jar"/>
-	<classpathentry kind="var" path="M2_REPO/org/codehaus/jackson/jackson-jaxrs/1.7.1/jackson-jaxrs-1.7.1.jar"/>
-	<classpathentry kind="var" path="M2_REPO/org/codehaus/jackson/jackson-mapper-asl/1.7.1/jackson-mapper-asl-1.7.1.jar"/>
-	<classpathentry kind="var" path="M2_REPO/org/codehaus/jackson/jackson-xc/1.7.1/jackson-xc-1.7.1.jar"/>
-	<classpathentry kind="var" path="M2_REPO/com/sun/xml/bind/jaxb-impl/2.2.3-1/jaxb-impl-2.2.3-1.jar"/>
-	<classpathentry kind="var" path="M2_REPO/com/sun/jersey/jersey-client/1.6/jersey-client-1.6.jar"/>
-	<classpathentry kind="var" path="M2_REPO/com/sun/jersey/jersey-core/1.6/jersey-core-1.6.jar"/>
-	<classpathentry kind="var" path="M2_REPO/com/sun/jersey/jersey-json/1.6/jersey-json-1.6.jar"/>
-	<classpathentry kind="var" path="M2_REPO/com/sun/jersey/jersey-server/1.6/jersey-server-1.6.jar"/>
-	<classpathentry kind="var" path="M2_REPO/org/codehaus/jettison/jettison/1.1/jettison-1.1.jar"/>
-	<classpathentry kind="var" path="M2_REPO/org/mortbay/jetty/jetty/6.1.26/jetty-6.1.26.jar"/>
-	<classpathentry kind="var" path="M2_REPO/org/mortbay/jetty/jetty-util/6.1.26/jetty-util-6.1.26.jar"/>
-	<classpathentry kind="var" path="M2_REPO/jline/jline/0.9.94/jline-0.9.94.jar"/>
-	<classpathentry kind="var" path="M2_REPO/junit/junit/3.8.1/junit-3.8.1.jar"/>
-	<classpathentry kind="var" path="M2_REPO/log4j/log4j/1.2.15/log4j-1.2.15.jar"/>
-	<classpathentry kind="var" path="M2_REPO/org/mortbay/jetty/servlet-api/2.5-20081211/servlet-api-2.5-20081211.jar"/>
-	<classpathentry kind="var" path="M2_REPO/stax/stax-api/1.0.1/stax-api-1.0.1.jar"/>
-	<classpathentry kind="var" path="M2_REPO/org/testng/testng/5.8/testng-5.8-jdk15.jar"/>
-	<classpathentry kind="var" path="M2_REPO/org/apache/zookeeper/zookeeper/3.3.2/zookeeper-3.3.2.jar"/>
-	<classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER/org.eclipse.jdt.internal.debug.ui.launcher.StandardVMType/JavaSE-1.6"/>
-	<classpathentry combineaccessrules="false" kind="src" path="/hms-common"/>
-	<classpathentry kind="output" path="target/classes"/>
-</classpath>
diff --git a/client/.project b/client/.project
deleted file mode 100644
index 1d11a4e..0000000
--- a/client/.project
+++ /dev/null
@@ -1,20 +0,0 @@
-<projectDescription>
-  <name>client</name>
-  <comment>Hadoop Management System Client. NO_M2ECLIPSE_SUPPORT: Project files created with the maven-eclipse-plugin are not supported in M2Eclipse.</comment>
-  <projects>
-    <project>common</project>
-    <project>hms-controller</project>
-  </projects>
-  <buildSpec>
-    <buildCommand>
-      <name>org.eclipse.jdt.core.javabuilder</name>
-    </buildCommand>
-    <buildCommand>
-      <name>org.eclipse.m2e.core.maven2Builder</name>
-    </buildCommand>
-  </buildSpec>
-  <natures>
-    <nature>org.eclipse.jdt.core.javanature</nature>
-    <nature>org.eclipse.m2e.core.maven2Nature</nature>
-  </natures>
-</projectDescription>
\ No newline at end of file
diff --git a/bin/hms b/client/bin/ambari
similarity index 62%
rename from bin/hms
rename to client/bin/ambari
index b83d314..11a78a3 100755
--- a/bin/hms
+++ b/client/bin/ambari
@@ -16,27 +16,25 @@
 # limitations under the License.
 
 
-# The HMS command script
+# The Ambari command script
 #
 # Environment Variables
 #
 #   JAVA_HOME        The java implementation to use.  Overrides JAVA_HOME.
-#   HMS_CONF_DIR     Alternate conf dir.  Default is ${HMS_HOME}/conf.
+#   AMBARI_CONF_DIR  Alternate conf dir.  Default is ${AMBARI_HOME}/conf.
 #
 
 bin=`dirname "$0"`
 bin=`cd "$bin"; pwd`
 
-. "$bin"/hms-config.sh
+. "$bin"/ambari-config.sh
 
 # if no args specified, show usage
 if [ $# = 0 ]; then
-  echo "Usage: hms [--config confdir] COMMAND"
+  echo "Usage: $0 [--config confdir] COMMAND"
   echo "where COMMAND is one of:"
-  echo "  agent         run a HMS Agent"
-  echo "  beacon        run a HMS Beacon"
-  echo "  controller    run a HMS Controller"
-  echo "  client        run a HMS client"
+  echo "  controller    run Ambari Controller"
+  echo "  client        run Ambari client"
   echo "  version       print the version"
   exit 1
 fi
@@ -45,8 +43,8 @@
 COMMAND=$1
 shift
 
-if [ -f "${HMS_CONF_DIR}/hms-env.sh" ]; then
-  . "${HMS_CONF_DIR}/hms-env.sh"
+if [ -f "${AMBARI_CONF_DIR}/ambari-env.sh" ]; then
+  . "${AMBARI_CONF_DIR}/ambari-env.sh"
 fi
 
 # Java parameters
@@ -59,8 +57,8 @@
   exit 1
 fi
 
-if [ "$HMS_CONF_DIR" != "" ]; then
-  CLASSPATH=${HMS_CONF_DIR}:${CLASSPATH}
+if [ "$AMBARI_CONF_DIR" != "" ]; then
+  CLASSPATH=${AMBARI_CONF_DIR}:${CLASSPATH}
 fi
 
 BACKGROUND="false"
@@ -68,29 +66,29 @@
 # configure command parameters
 if [ "$COMMAND" = "agent" ]; then
   APP='agent'
-  PID="hms-agent"
+  PID="ambari-agent"
 elif [ "$COMMAND" = "beacon" ]; then
   APP='beacon'
-  CLASS='org.apache.hms.beacon.Beacon'
-  PID="hms-$HMS_IDENT_STRING-beacon"
+  CLASS='org.apache.ambari.beacon.Beacon'
+  PID="ambari-$AMBARI_IDENT_STRING-beacon"
   BACKGROUND="true"
 elif [ "$COMMAND" = "controller" ]; then
   APP='controller'
-  CLASS='org.apache.hms.controller.Controller'
-  PID="hms-$HMS_IDENT_STRING-controller"
+  CLASS='org.apache.ambari.controller.Controller'
+  PID="ambari-$AMBARI_IDENT_STRING-controller"
   BACKGROUND="true"
 elif [ "$COMMAND" = "client" ]; then
   APP='client'
-  CLASS='org.apache.hms.client.Client'
+  CLASS='org.apache.ambari.client.AmbariClient'
   PID="client"
 elif [ "$COMMAND" = "version" ]; then
-  echo `cat ${HMS_HOME}/bin/VERSION`
+  echo `cat ${AMBARI_HOME}/bin/VERSION`
   exit 0
 fi
 
 if [ "$1" = "stop" ]; then
-  if [ -e ${HMS_PID_DIR}/${PID}.pid ]; then
-    kill -TERM `cat ${HMS_PID_DIR}/$PID.pid`
+  if [ -e ${AMBARI_PID_DIR}/${PID}.pid ]; then
+    kill -TERM `cat ${AMBARI_PID_DIR}/$PID.pid`
   else
     echo "${PID} is not running."
   fi
@@ -99,7 +97,7 @@
     echo
   else 
     # run command
-    RUN="${JAVA_HOME}/bin/java ${JAVA_OPT} -Djava.library.path=${JAVA_LIBRARY_PATH} -DPID=${PID} -DHMS_HOME=${HMS_HOME} -DHMS_CONF_DIR=${HMS_CONF_DIR} -DHMS_LOG_DIR=${HMS_LOG_DIR} -DHMS_DATA_DIR=${HMS_DATA_DIR} -DAPP=${APP} -Dlog4j.configuration=log4j.properties -classpath ${HMS_CONF_DIR}:${CLASSPATH}:${HMS_CORE}:${HMS_JAR}:${COMMON}:${tools} ${CLASS} $OPTS $@"
+    RUN="${JAVA_HOME}/bin/java ${JAVA_OPT} -Djava.library.path=${JAVA_LIBRARY_PATH} -DPID=${PID} -DAMBARI_HOME=${AMBARI_HOME} -DAMBARI_CONF_DIR=${AMBARI_CONF_DIR} -DAMBARI_LOG_DIR=${AMBARI_LOG_DIR} -DAMBARI_DATA_DIR=${AMBARI_DATA_DIR} -DAPP=${APP} -Dlog4j.configuration=log4j.properties -classpath ${AMBARI_CONF_DIR}:${CLASSPATH}:${AMBARI_CONTROLLER}:${AMBARI_JAR}:${COMMON}:${tools} ${CLASS} $OPTS $@"
     if [ "$BACKGROUND" = "true" ]; then
       exec ${RUN} &
     else
diff --git a/agent/bin/hms-config.sh b/client/bin/ambari-config.sh
similarity index 60%
rename from agent/bin/hms-config.sh
rename to client/bin/ambari-config.sh
index 6fd4dfd..8138d27 100644
--- a/agent/bin/hms-config.sh
+++ b/client/bin/ambari-config.sh
@@ -37,7 +37,7 @@
 bin=`cd "$bin"; pwd`
 this="$bin/$script"
 
-#check to see if the conf dir or hms home are given as an optional arguments
+#check to see if the conf dir or ambari home is given as an optional argument
 if [ $# -gt 1 ]
 then
   if [ "--config" = "$1" ]
@@ -45,42 +45,45 @@
     shift
     confdir=$1
     shift
-    HMS_CONF_DIR=$confdir
+    export AMBARI_CONF_DIR=$confdir
   fi
 fi
 
-# the root of the hms installation
-export HMS_HOME=`dirname "$this"`/..
+# the root of the ambari installation
+export AMBARI_HOME=`dirname "$this"`/..
 
-if [ -z ${HMS_LOG_DIR} ]; then
-    export HMS_LOG_DIR="${HMS_HOME}/logs"
+if [ -z "${AMBARI_LOG_DIR}" ]; then
+    export AMBARI_LOG_DIR="${AMBARI_HOME}/var/log"
 fi
 
-if [ -z ${HMS_PID_DIR} ]; then
-    export HMS_PID_DIR="${HMS_HOME}/var/run"
+if [ -z "${AMBARI_PID_DIR}" ]; then
+    export AMBARI_PID_DIR="${AMBARI_HOME}/var/run"
 fi
 
-HMS_VERSION=`cat ${HMS_HOME}/VERSION`
+AMBARI_VERSION=`cat ${AMBARI_HOME}/share/ambari/VERSION`
 
 # Allow alternate conf dir location.
-if [ -z "${HMS_CONF_DIR}" ]; then
-    HMS_CONF_DIR="${HMS_CONF_DIR:-$HMS_HOME/conf}"
-    export HMS_CONF_DIR=${HMS_HOME}/conf
+if [ -z "${AMBARI_CONF_DIR}" ]; then
+    if [ -e "${AMBARI_HOME}/conf" ]; then
+      export AMBARI_CONF_DIR="$AMBARI_HOME/conf"
+    fi
+    if [ -e "${AMBARI_HOME}/etc/ambari" ]; then
+      export AMBARI_CONF_DIR="$AMBARI_HOME/etc/ambari"
+    fi
 fi
 
-if [ -f "${HMS_CONF_DIR}/hms-env.sh" ]; then
-  . "${HMS_CONF_DIR}/hms-env.sh"
+if [ -f "${AMBARI_CONF_DIR}/ambari-env.sh" ]; then
+  . "${AMBARI_CONF_DIR}/ambari-env.sh"
 fi
 
-COMMON=`ls ${HMS_HOME}/lib/*.jar`
-export COMMON=`echo ${COMMON} | sed 'y/ /:/'`
+COMMON="${AMBARI_HOME}/share/ambari/*:${AMBARI_HOME}/share/ambari/lib/*"
 
-export HMS_CORE=${HMS_HOME}/hms-core-${HMS_VERSION}.jar
-export HMS_AGENT=${HMS_HOME}/hms-agent-${HMS_VERSION}.jar
+export AMBARI_CONTROLLER=${AMBARI_HOME}/ambari-controller-${AMBARI_VERSION}.jar
+export AMBARI_AGENT=${AMBARI_HOME}/ambari-agent-${AMBARI_VERSION}.jar
 export CURRENT_DATE=`date +%Y%m%d%H%M`
 
 if [ -z "$JAVA_HOME" ] ; then
-  echo ERROR! You forgot to set JAVA_HOME in conf/hms-env.sh
+  echo "ERROR! You forgot to set JAVA_HOME in conf/ambari-env.sh"
 fi
 
 export JPS="ps ax"
diff --git a/agent/conf/hms-agent-env.sh b/client/conf/ambari-env.sh
similarity index 100%
rename from agent/conf/hms-agent-env.sh
rename to client/conf/ambari-env.sh
diff --git a/client/src/main/resources/log4j.properties b/client/conf/log4j.properties
similarity index 95%
rename from client/src/main/resources/log4j.properties
rename to client/conf/log4j.properties
index 5a173f2..3d7613f 100644
--- a/client/src/main/resources/log4j.properties
+++ b/client/conf/log4j.properties
@@ -15,7 +15,7 @@
 
 log4j.rootLogger=INFO, R
 log4j.appender.R=org.apache.log4j.RollingFileAppender
-log4j.appender.R.File=${HMS_LOG_DIR}/hms-client.log
+log4j.appender.R.File=${AMBARI_LOG_DIR}/ambari-client.log
 log4j.appender.R.MaxFileSize=10MB
 log4j.appender.R.MaxBackupIndex=10
 log4j.appender.R.layout=org.apache.log4j.PatternLayout
diff --git a/client/pom.xml b/client/pom.xml
index da98f01..f0d1af2 100644
--- a/client/pom.xml
+++ b/client/pom.xml
@@ -1,19 +1,36 @@
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
 <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
 
     <parent>
-        <groupId>org.apache.hms</groupId>
-        <artifactId>hms</artifactId>
-        <version>0.1.0</version>
+        <groupId>org.apache.ambari</groupId>
+        <artifactId>ambari</artifactId>
+        <version>0.1.0-SNAPSHOT</version>
+        <relativePath>../pom.xml</relativePath>
     </parent>
 
     <modelVersion>4.0.0</modelVersion>
-    <groupId>org.apache.hms</groupId>
-    <artifactId>client</artifactId>
+    <groupId>org.apache.ambari</groupId>
+    <artifactId>ambari-client</artifactId>
     <packaging>jar</packaging>
     <version>0.1.0-SNAPSHOT</version>
     <name>client</name>
-    <description>Hadoop Management System Client</description>
+    <description>Ambari Client</description>
 
     <dependencies>
       <dependency>
@@ -22,34 +39,74 @@
         <version>1.2</version>
       </dependency>
       <dependency>
-        <groupId>org.apache.hms</groupId>
-        <artifactId>common</artifactId>
-        <version>0.1.0-SNAPSHOT</version>
-      </dependency>
-      <dependency>
-        <groupId>org.apache.hms</groupId>
-        <artifactId>hms-controller</artifactId>
-        <version>0.1.0-SNAPSHOT</version>
-        <scope>test</scope>
-      </dependency>
-      <dependency>
         <groupId>commons-daemon</groupId>
         <artifactId>commons-daemon</artifactId>
         <version>1.0.5</version>
         <scope>test</scope>
       </dependency>
+      <dependency>
+        <groupId>commons-configuration</groupId>
+        <artifactId>commons-configuration</artifactId>
+        <version>1.6</version>
+      </dependency>
+      <dependency>
+        <groupId>javax.jmdns</groupId>
+        <artifactId>jmdns</artifactId>
+        <version>3.4.0</version>
+      </dependency>
+      <dependency>
+        <groupId>org.codehaus.jackson</groupId>
+        <artifactId>jackson-xc</artifactId>
+        <version>1.8.5</version>
+      </dependency>
+      <dependency>
+        <groupId>com.sun.jersey</groupId>
+        <artifactId>jersey-json</artifactId>
+        <version>1.8</version>
+      </dependency>
+      <dependency>
+        <groupId>com.sun.jersey</groupId>
+        <artifactId>jersey-client</artifactId>
+        <version>1.8</version>
+      </dependency>
+      <dependency>
+        <groupId>log4j</groupId>
+        <artifactId>log4j</artifactId>
+        <version>1.2.15</version>
+        <exclusions>
+          <exclusion>
+            <groupId>javax.mail</groupId>
+            <artifactId>mail</artifactId>
+          </exclusion>
+          <exclusion>
+            <groupId>javax.jms</groupId>
+            <artifactId>jms</artifactId>
+          </exclusion>
+          <exclusion>
+            <groupId>com.sun.jdmk</groupId>
+            <artifactId>jmxtools</artifactId>
+          </exclusion>
+          <exclusion>
+            <groupId>com.sun.jmx</groupId>
+            <artifactId>jmxri</artifactId>
+          </exclusion>
+        </exclusions>
+      </dependency>
     </dependencies>
 
     <build>
       <plugins>
         <plugin>
+          <artifactId>maven-assembly-plugin</artifactId>
+        </plugin>
+        <plugin>
           <groupId>org.apache.maven.plugins</groupId>
           <artifactId>maven-jar-plugin</artifactId>
           <configuration>
             <archive>
               <manifest>
-                <mainClass>org.apache.hms.client.Client</mainClass>
-                <packageName>org.apache.hms.client</packageName>
+                <mainClass>org.apache.ambari.client.AmbariClient</mainClass>
+                <packageName>org.apache.ambari.client</packageName>
               </manifest>
               <manifestEntries>
                 <mode>development</mode>
@@ -61,4 +118,72 @@
       </plugins>
     </build>
 
+  <profiles>
+    <profile>
+      <id>docs</id>
+      <activation>
+        <activeByDefault>true</activeByDefault>
+      </activation>
+      <build>
+        <plugins>
+          <plugin>
+            <groupId>com.sun.tools.jxc.maven2</groupId>
+            <artifactId>maven-jaxb-schemagen-plugin</artifactId>
+            <version>1.2</version>
+            <executions>
+              <execution>
+                <phase>generate-resources</phase>
+                <goals>
+                  <goal>generate</goal>
+                </goals>
+              </execution>
+            </executions>
+            <configuration>
+              <project>${project}</project>
+              <destdir>../src/site/resources</destdir>
+              <srcdir>${project.build.sourceDirectory}/org/apache/ambari/common/rest/entities/</srcdir>
+              <verbose>true</verbose>
+              <schemas>
+                <schema>
+                  <namespace>http://incubator.apache.org/ambari/rest</namespace>
+                  <file>schema1.xsd</file>
+                </schema>
+              </schemas>
+            </configuration>
+            <dependencies>
+              <dependency>
+                <groupId>javax.xml.bind</groupId>
+                <artifactId>jaxb-api</artifactId>
+                <version>2.2</version>
+              </dependency>
+              <dependency>
+                <groupId>com.sun.xml.bind</groupId>
+                <artifactId>jaxb-xjc</artifactId>
+                <version>2.2</version>
+              </dependency>
+              <dependency>
+                <groupId>com.sun.xml.bind</groupId>
+                <artifactId>jaxb-impl</artifactId>
+                <version>2.2</version>
+              </dependency>
+            </dependencies>
+          </plugin>
+        </plugins>
+      </build>
+    </profile>
+  </profiles>
+
+  <distributionManagement>
+    <site>
+      <id>apache-website</id>
+      <name>Apache website</name>
+      <url>scpexe://people.apache.org/www/incubator.apache.org/ambari-client</url>
+    </site>
+  </distributionManagement>
+
 </project>
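
The docs profile above runs the JAXB schemagen plugin over the sources under
org/apache/ambari/common/rest/entities to produce schema1.xsd. For orientation, a
minimal sketch of the kind of JAXB-annotated entity schemagen consumes follows; the
class and field names here are hypothetical and not taken from this patch.

    package org.apache.ambari.common.rest.entities;

    import javax.xml.bind.annotation.XmlAccessType;
    import javax.xml.bind.annotation.XmlAccessorType;
    import javax.xml.bind.annotation.XmlAttribute;
    import javax.xml.bind.annotation.XmlRootElement;

    /*
     * Hypothetical example only: schemagen derives an XML schema type from
     * annotations like these on the real entity classes.
     */
    @XmlRootElement(name = "Example")
    @XmlAccessorType(XmlAccessType.FIELD)
    public class Example {
        @XmlAttribute
        private String name;

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }
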
diff --git a/client/src/main/java/org/apache/ambari/client/AmbariClient.java b/client/src/main/java/org/apache/ambari/client/AmbariClient.java
new file mode 100644
index 0000000..a8be10d
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/client/AmbariClient.java
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.client;
+
+import java.lang.reflect.Constructor;
+import java.util.HashMap;
+
+public class AmbariClient {
+
+    HashMap<String, HashMap<String, String>> commands = new HashMap<String, HashMap<String, String>>();
+    
+    /*
+     * Initialize commands HashMap
+     */
+    public void initializeCommandsMap() {
+       
+        HashMap<String, String> clusterCommands = new HashMap<String, String>();
+        clusterCommands.put("create", "ClusterCreate");
+        clusterCommands.put("update", "ClusterUpdate");
+        clusterCommands.put("delete", "ClusterDelete");
+        clusterCommands.put("list", "ClusterList");
+        clusterCommands.put("get", "ClusterGet");
+        clusterCommands.put("stack", "ClusterStack");
+        clusterCommands.put("nodes", "ClusterNodes");
+        
+        
+        HashMap<String, String> stackCommands = new HashMap<String, String>();
+        stackCommands.put("list", "StackList");
+        stackCommands.put("history", "StackHistory");
+        stackCommands.put("add", "StackAdd");
+        stackCommands.put("get", "StackGet");
+        
+        HashMap<String, String> nodeCommands = new HashMap<String, String>();
+        nodeCommands.put("list", "NodeList");
+        nodeCommands.put("get", "NodeGet");
+        
+        commands.put("cluster", clusterCommands);
+        commands.put("stack", stackCommands);
+        commands.put("node", nodeCommands);
+        
+    }
+    
+    public static void usage(HashMap<String, HashMap<String, String>> commands) {
+        System.out.println("Usage: AmbariClient <CommandCateogry> <CommandName> <CommandOptions>\n");
+        System.out.println("To get the help on each command use -help  e.g. \"AmbariClient cluster list -help\"\n");
+        for (String category : commands.keySet()) {
+            System.out.println("CommandCategory : ["+ category+"] : Commands "+commands.get(category).keySet());
+        }    	
+    }
+    
+    /**
+     * @param args
+     */
+    public static void main(String[] args) {
+        
+        /*
+         * Initialize the commands hash map
+         */
+        AmbariClient c = new AmbariClient();
+        c.initializeCommandsMap();
+        
+        /*
+         * Validate the arguments
+         */
+        if (args.length < 2) {
+            if (args.length == 0 || args[0].equalsIgnoreCase("help")) {
+                usage(c.commands);
+                System.exit(0);
+            }
+            if (args[0].equalsIgnoreCase("version")) {
+                System.out.println("VERSION 0.1.0");
+                System.exit(0);
+            }
+        }
+        
+        /*
+         * Check if args[0] is a valid command category and args[1] is a command in that category
+         */
+        if (!c.commands.containsKey(args[0])) {
+            System.out.println("Invalid command category ["+args[0]+"]");
+            System.exit(-1);
+        }
+        
+        if (args.length < 2) {
+            usage(c.commands);
+            System.exit(-1);
+        }
+        
+        if (!c.commands.get(args[0]).containsKey(args[1])){
+            System.out.println("Invalid command ["+args[1]+"] for category ["+args[0]+"]");
+            System.exit(-1);
+        }
+        
+        /*
+         * Instantiate appropriate class based on command category and command name
+         */
+        try {
+            Class<?>[] classParm = new Class<?>[] {String[].class};
+            Object[] objectParm =  new Object[] {args};
+            Class<?> commandClass  = Class.forName("org.apache.ambari.client."+c.commands.get(args[0]).get(args[1]));
+            Constructor<?> co = commandClass.getConstructor(classParm);
+            Command cmd = (Command)co.newInstance(objectParm);
+            cmd.run();
+        } catch (Exception e) {
+            System.err.println( "Command failed. Reason: <" + e.getMessage() +">\n" );
+            e.printStackTrace();
+            System.exit(-1);
+        }
+    }
+
+}
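
AmbariClient.main resolves a class name from the (category, command) pair, constructs
it reflectively with the raw argument vector, and invokes run() through a shared
Command supertype. Command itself is not part of this hunk; the following is a sketch
of the shape the call sites imply, reconstructed from how the subclasses below use it
(the baseURLString value is an assumption mirroring the getBaseURI() helpers).

    package org.apache.ambari.client;

    /*
     * Sketch only: reconstructed from usage in this patch, not copied from
     * the actual Command.java.
     */
    public abstract class Command {

        // Concrete commands append a urlPath such as "/clusters" to this.
        protected String baseURLString = "http://localhost:4080/rest";

        // Invoked by AmbariClient after reflective construction with argv.
        public abstract void run() throws Exception;

        // The subclasses also call shared pretty-printers such as
        // printClusterDefinition, printClusterInformation and
        // printNodeInformation; their signatures are omitted here.
    }
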
diff --git a/client/src/main/java/org/apache/ambari/client/ClusterCreate.java b/client/src/main/java/org/apache/ambari/client/ClusterCreate.java
new file mode 100644
index 0000000..168088e
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/client/ClusterCreate.java
@@ -0,0 +1,291 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.client;
+
+import java.net.URI;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Properties;
+
+import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.UriBuilder;
+import org.apache.ambari.common.rest.entities.Stack;
+import org.apache.ambari.common.rest.entities.ClusterDefinition;
+import org.apache.ambari.common.rest.entities.ClusterState;
+import org.apache.ambari.common.rest.entities.RoleToNodes;
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.GnuParser;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.OptionBuilder;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.ParseException;
+
+import com.sun.jersey.api.client.Client;
+import com.sun.jersey.api.client.ClientResponse;
+import com.sun.jersey.api.client.WebResource;
+import com.sun.jersey.api.client.config.ClientConfig;
+import com.sun.jersey.api.client.config.DefaultClientConfig;
+
+public class ClusterCreate extends Command {
+
+    String[] args = null;
+    Options options = null;
+    
+    String urlPath = "/clusters";
+    URL resourceURL = null;
+    CommandLine line;
+    String dry_run = "false";
+    
+    Properties roleToNodeExpressions = null;
+    List<RoleToNodes> roleToNodeList = null;
+    
+    public ClusterCreate() {
+    }
+    
+    public ClusterCreate (String [] args) throws Exception {  
+        /*
+         * Build options for cluster create
+         */
+        this.args = args;
+        addOptions();
+        this.resourceURL = new URL (this.baseURLString + this.urlPath);
+    }
+    
+    public void printUsage () {
+        HelpFormatter formatter = new HelpFormatter();
+        formatter.printHelp( "ambari cluster create", this.options);
+    }
+    
+    public void addOptions () {
+             
+        Option wait = new Option( "wait", "Optionally wait for cluster to reach desired state" );
+        Option dryRun = new Option( "dry_run", "Dry run" );
+        Option help = new Option( "help", "Help" );
+        
+        OptionBuilder.withArgName("cluster_name");
+        OptionBuilder.isRequired();
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription( "Name of the cluster to be created");
+        Option name = OptionBuilder.create( "name" );
+        
+        OptionBuilder.withArgName("stack_name");
+        OptionBuilder.isRequired();
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription( "Name of the cluster stack");
+        Option stack = OptionBuilder.create( "stack" );
+        
+        OptionBuilder.withArgName( "\"node_exp1; node_exp2; ...\"" );
+        OptionBuilder.isRequired();
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription(  "List of node range expressions separated by semicolon (;) and contained in double quotes (\"\")" );
+        Option nodes = OptionBuilder.create( "nodes" );
+        
+        OptionBuilder.withArgName( "stack_revision" );
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription(  "Stack revision, if not specified latest revision is used" );
+        Option revision = OptionBuilder.create( "revision" );
+        
+        OptionBuilder.withArgName( "description" );
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription(  "Description to be associated with cluster" );
+        Option desc = OptionBuilder.create( "desc" );
+        
+        OptionBuilder.withArgName( "goalstate" );
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription(  "Desired goal state of the cluster" );
+        Option goalstate = OptionBuilder.create( "goalstate" );
+        
+        OptionBuilder.withArgName( "\"component-1; component-2; ...\"" );
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription(  "List of components to be active in the cluster. Components are seperated by semicolon \";\"" );
+        Option services = OptionBuilder.create( "services" );
+        
+        OptionBuilder.withArgName( "rolename=\"node_exp1; node_exp2; ... \"" );
+        OptionBuilder.hasArgs(2);
+        OptionBuilder.withValueSeparator();
+        OptionBuilder.withDescription( "Provide node range expressions for a given rolename separated by semicolon (;) and contained in double quotes (\"\")" );
+        Option role = OptionBuilder.create( "role" );
+
+        this.options = new Options();
+        options.addOption( wait );   
+        options.addOption( dryRun );
+        options.addOption( name );
+        options.addOption( stack );   
+        options.addOption(revision);
+        options.addOption( desc );
+        options.addOption( role );
+        options.addOption( goalstate );
+        options.addOption( nodes );
+        options.addOption( services );
+        options.addOption(help);
+    }
+    
+    public void parseCommandLine() {
+     
+        // create the parser
+        CommandLineParser parser = new GnuParser();
+        try {
+            // parse the command line arguments
+            line = parser.parse(this.options, this.args );
+            
+            if (line.hasOption("help")) {
+                printUsage();
+                System.exit(0);
+            }
+            
+            if (line.hasOption("dry_run")) {
+                dry_run = "true";
+            }
+            
+        }
+        catch( ParseException exp ) {
+            // oops, something went wrong
+            System.err.println( "Command parsing failed. Reason: <" + exp.getMessage()+">\n" );
+            printUsage();
+            System.exit(-1);
+        } 
+    }
+    
+    private static URI getBaseURI() {
+        return UriBuilder.fromUri(
+                "http://localhost:4080/rest/").build();
+    }
+    
+    public static 
+    List<RoleToNodes> getRoleToNodesList (Properties roleToNodeExpressions) {
+        if (roleToNodeExpressions == null) { return null; }
+
+        List<RoleToNodes> roleToNodesList = new ArrayList<RoleToNodes>();
+        for (String roleName : roleToNodeExpressions.stringPropertyNames()) {
+            RoleToNodes e = new RoleToNodes();
+            e.setRoleName(roleName);
+            e.setNodes(roleToNodeExpressions.getProperty(roleName));
+            roleToNodesList.add(e);
+        }
+        return roleToNodesList;
+    }
+    
+    private List<String> splitServices(String services) {
+      if (services == null) { return null; }
+      String[] arr = services.split(";");
+      List<String> result = new ArrayList<String>(arr.length);
+      for (String x: arr) {
+          result.add(x.trim());
+      }
+      return result;
+    }
+    
+    public void run() throws Exception {
+        /* 
+         * Parse the command line to get the command line arguments
+         */
+        parseCommandLine();
+        
+        ClientConfig config = new DefaultClientConfig();
+        Client client = Client.create(config);
+        WebResource service = client.resource(getBaseURI());
+        
+        // Create Cluster Definition
+        ClusterDefinition clsDef = new ClusterDefinition();
+        clsDef.setName(line.getOptionValue("name"));
+        clsDef.setStackName(line.getOptionValue("stack"));
+        clsDef.setNodes(line.getOptionValue("nodes"));
+        
+        clsDef.setGoalState(line.getOptionValue("goalstate"));
+        String revision = line.getOptionValue("revision");
+        if (revision == null) {
+            revision = "";
+            ClientResponse response = service.path("stacks/"+line.getOptionValue("stack"))
+                    .accept(MediaType.APPLICATION_JSON).type(MediaType.APPLICATION_JSON).get(ClientResponse.class);
+            if (response.getStatus() != 404 && response.getStatus() != 200) { 
+                System.err.println("Stack list command failed. Reason [Code: <"+response.getStatus()+">, Message: <"+response.getHeaders().getFirst("ErrorMessage")+">]");
+                System.exit(-1);
+            }
+            if (response.getStatus() == 404) {
+                System.err.println("Stack name: " + line.getOptionValue("stack") + " does not exist.");
+                System.exit(-1);
+            }
+            /* 
+             * Retrieve the stack from the response
+             */
+            Stack stack = response.getEntity(Stack.class);
+            revision = stack.getRevision();
+        }
+        clsDef.setStackRevision(revision);
+        clsDef.setEnabledServices(splitServices(line.getOptionValue("services")));
+        clsDef.setDescription(line.getOptionValue("desc"));
+        clsDef.setRoleToNodesMap(getRoleToNodesList(line.getOptionProperties("role")));
+        
+        /*
+         * Create cluster
+         */
+        ClientResponse response = service.path("clusters/"+line.getOptionValue("name")).queryParam("dry_run", dry_run).accept(MediaType.APPLICATION_JSON).type(MediaType.APPLICATION_JSON).put(ClientResponse.class, clsDef);
+        if (response.getStatus() != 200) { 
+            System.err.println("Cluster create command failed. Reason [Code: <"+response.getStatus()+">, Message: <"+response.getHeaders().getFirst("ErrorMessage")+">]");
+            System.exit(-1);
+        }
+        
+        /* 
+         * Retrieve the cluster definition from the response
+         */
+        ClusterDefinition def = response.getEntity(ClusterDefinition.class);
+        
+        /*
+         * If dry_run, print the cluster definition and return
+         */
+        if (line.hasOption("dry_run")) {
+            System.out.println("Cluster: ["+def.getName()+"] created. Mode: dry_run.\n");
+            printClusterDefinition(def);
+            return;
+        }
+        
+        /*
+         * If no wait, then print the cluster definition and return
+         */
+        if (!line.hasOption("wait")) {
+           System.out.println("Cluster: ["+def.getName()+"] created.\n");
+           printClusterDefinition(def);
+           return; 
+        }
+        
+        /*
+         * If wait option is specified then wait for cluster state to reach the desired state 
+         */
+        ClusterState clusterState;
+        for (;;) {
+            response = service.path("clusters/"+def.getName()+"/state").accept(MediaType.APPLICATION_JSON).type(MediaType.APPLICATION_JSON).get(ClientResponse.class);
+            if (response.getStatus() != 200) { 
+                System.err.println("Failed to get the cluster state. Reason [Code: <"+response.getStatus()+">, Message: <"+response.getHeaders().getFirst("ErrorMessage")+">]");
+                System.exit(-1);
+            }
+            
+            clusterState = response.getEntity(ClusterState.class);
+            if (clusterState.getState().equals(def.getGoalState())) {
+                break;
+            }
+            System.out.println("Waiting for cluster ["+def.getName()+"] to get to desired goalstate of ["+def.getGoalState()+"]");
+            Thread.sleep(15 * 60000);
+        }  
+        
+        System.out.println("Cluster: ["+def.getName()+"] created. Cluster state: ["+clusterState.getState()+"]\n");
+        printClusterDefinition(def);
+    }
+}
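
As a usage sketch of the role mapping helper above: getRoleToNodesList converts the
rolename=expression pairs that commons-cli collects for the -role option into the
RoleToNodes entities sent to the controller. The role names and node expressions below
are made up for illustration, and the RoleToNodes getters are assumed to mirror the
setters used in this patch.

    import java.util.List;
    import java.util.Properties;

    import org.apache.ambari.client.ClusterCreate;
    import org.apache.ambari.common.rest.entities.RoleToNodes;

    public class RoleMappingExample {
        public static void main(String[] args) {
            // Hypothetical "-role name=expression" options as commons-cli
            // would accumulate them.
            Properties roleProps = new Properties();
            roleProps.setProperty("namenode", "node0");
            roleProps.setProperty("datanode", "node[1-10]");

            // Same conversion the create command performs before building
            // the ClusterDefinition it PUTs to /clusters/{name}.
            List<RoleToNodes> mapping =
                ClusterCreate.getRoleToNodesList(roleProps);
            for (RoleToNodes r : mapping) {
                System.out.println(r.getRoleName() + " -> " + r.getNodes());
            }
        }
    }
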
diff --git a/client/src/main/java/org/apache/ambari/client/ClusterDelete.java b/client/src/main/java/org/apache/ambari/client/ClusterDelete.java
new file mode 100644
index 0000000..961556d
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/client/ClusterDelete.java
@@ -0,0 +1,158 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.client;
+
+import java.net.URI;
+import java.net.URL;
+import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.UriBuilder;
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.GnuParser;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.OptionBuilder;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.ParseException;
+
+import com.sun.jersey.api.client.Client;
+import com.sun.jersey.api.client.ClientResponse;
+import com.sun.jersey.api.client.WebResource;
+import com.sun.jersey.api.client.config.ClientConfig;
+import com.sun.jersey.api.client.config.DefaultClientConfig;
+
+public class ClusterDelete extends Command {
+
+    String[] args = null;
+    Options options = null;
+    
+    String urlPath = "/clusters";
+    URL resourceURL = null;
+    CommandLine line;
+    
+    public ClusterDelete() {
+    }
+    
+    public ClusterDelete (String [] args) throws Exception {  
+        /*
+         * Build options for cluster delete
+         */
+        this.args = args;
+        addOptions();
+        this.resourceURL = new URL (this.baseURLString + this.urlPath);
+    }
+    
+    public void printUsage () {
+        HelpFormatter formatter = new HelpFormatter();
+        formatter.printHelp( "ambari cluster delete", this.options);
+    }
+    
+    public void addOptions () {
+             
+        Option wait = new Option( "wait", "Optionally wait for the cluster to be deleted" );
+        Option help = new Option( "help", "Help" );
+        
+        OptionBuilder.withArgName("cluster_name");
+        OptionBuilder.isRequired();
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription( "Name of the cluster to be deleted");
+        Option name = OptionBuilder.create( "name" );
+        
+        this.options = new Options();
+        options.addOption( wait );   
+        options.addOption( name );
+        options.addOption(help);
+    }
+    
+    public void parseCommandLine() {
+     
+        // create the parser
+        CommandLineParser parser = new GnuParser();
+        try {
+            // parse the command line arguments
+            line = parser.parse(this.options, this.args );
+            
+            if (line.hasOption("help")) {
+                printUsage();
+                System.exit(0);
+            }
+        }
+        catch( ParseException exp ) {
+            // oops, something went wrong
+            System.err.println( "Command parsing failed. Reason: <" + exp.getMessage()+">\n" );
+            printUsage();
+            System.exit(-1);
+        } 
+    }
+    
+    private static URI getBaseURI() {
+        return UriBuilder.fromUri(
+                "http://localhost:4080/rest/").build();
+    }
+    
+    
+    public void run() throws Exception {
+        /* 
+         * Parse the command line to get the command line arguments
+         */
+        parseCommandLine();
+        
+        ClientConfig config = new DefaultClientConfig();
+        Client client = Client.create(config);
+        WebResource service = client.resource(getBaseURI());
+          
+        /*
+         * Delete cluster
+         */
+        ClientResponse response = service.path("clusters/"+line.getOptionValue("name")).accept(MediaType.APPLICATION_JSON).type(MediaType.APPLICATION_JSON).delete(ClientResponse.class);
+        if (response.getStatus() != 204) { 
+            System.err.println("Cluster delete command failed. Reason [Code: <"+response.getStatus()+">, Message: <"+response.getHeaders().getFirst("ErrorMessage")+">]");
+            System.exit(-1);
+        }
+        
+        /*
+         * If no wait, then print the message and return
+         */
+        if (!line.hasOption("wait")) {
+           System.out.println("Cluster: ["+line.getOptionValue("name")+"] deleted.\n");
+           return; 
+        }
+        
+        /*
+         * If wait option is specified then wait for cluster to be deleted 
+         */
+        
+        for (;;) {
+            System.out.println("Waiting for cluster ["+line.getOptionValue("name")+"] to be deleted.\n");
+            response = service.path("clusters/"+line.getOptionValue("name")+"/state").accept(MediaType.APPLICATION_JSON).type(MediaType.APPLICATION_JSON).get(ClientResponse.class);
+            if (response.getStatus() != 200 && response.getStatus() != 404) { 
+                System.err.println("Failed to get the cluster state. Reason [Code: <"+response.getStatus()+">, Message: <"+response.getHeaders().getFirst("ErrorMessage")+">]");
+                System.exit(-1);
+            }
+            
+            /*
+             * If cluster does not exist, then break
+             */
+            if (response.getStatus() == 404) {
+                break;
+            }
+            Thread.sleep(60 * 60000);
+        }         
+        System.out.println("Cluster: ["+line.getOptionValue("name")+"] deleted.\n");
+    }
+}
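
A design note on the -wait path above: the loop polls the cluster state resource until
it returns 404 and sleeps a full hour between probes, with no upper bound on the wait.
If a bound is ever wanted, a sketch along the following lines would work with the same
Jersey 1.x client API; the attempt count and sleep interval are arbitrary choices.

    import javax.ws.rs.core.MediaType;

    import com.sun.jersey.api.client.ClientResponse;
    import com.sun.jersey.api.client.WebResource;

    /*
     * Sketch only: a bounded replacement for the indefinite wait loop above.
     */
    public final class DeletionWait {
        public static void awaitDeleted(WebResource service, String cluster)
                throws Exception {
            for (int attempt = 0; attempt < 60; attempt++) {
                ClientResponse response = service
                    .path("clusters/" + cluster + "/state")
                    .accept(MediaType.APPLICATION_JSON)
                    .get(ClientResponse.class);
                if (response.getStatus() == 404) {
                    return;                  // cluster is gone
                }
                Thread.sleep(60 * 1000L);    // poll once a minute
            }
            throw new Exception("Timed out waiting for cluster deletion");
        }
    }
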
diff --git a/client/src/main/java/org/apache/ambari/client/ClusterGet.java b/client/src/main/java/org/apache/ambari/client/ClusterGet.java
new file mode 100644
index 0000000..32b779a
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/client/ClusterGet.java
@@ -0,0 +1,145 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.client;
+
+import java.net.URI;
+import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.UriBuilder;
+
+import org.apache.ambari.common.rest.entities.ClusterInformation;
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.GnuParser;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.OptionBuilder;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.ParseException;
+
+import com.sun.jersey.api.client.Client;
+import com.sun.jersey.api.client.ClientResponse;
+import com.sun.jersey.api.client.WebResource;
+import com.sun.jersey.api.client.config.ClientConfig;
+import com.sun.jersey.api.client.config.DefaultClientConfig;
+
+public class ClusterGet extends Command {
+
+    String[] args = null;
+    Options options = null;
+   
+    CommandLine line;
+    
+    public ClusterGet() {
+    }
+    
+    public ClusterGet (String [] args) throws Exception {  
+        /*
+         * Build options for cluster get
+         */
+        this.args = args;
+        addOptions();
+    }
+    
+    public void printUsage () {
+        HelpFormatter formatter = new HelpFormatter();
+        formatter.printHelp( "ambari cluster get", this.options);
+    }
+    
+    public void addOptions () {
+             
+        Option help = new Option( "help", "Help" );
+        Option verbose = new Option( "verbose", "Verbose mode" );
+        
+        OptionBuilder.withArgName("cluster_name");
+        OptionBuilder.isRequired();
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription( "Name of the cluster");
+        Option name = OptionBuilder.create( "name" );
+        
+        this.options = new Options();
+        options.addOption(name);
+        options.addOption( verbose );   
+        options.addOption(help);
+    }
+    
+    public void parseCommandLine() {
+     
+        // create the parser
+        CommandLineParser parser = new GnuParser();
+        try {
+            // parse the command line arguments
+            line = parser.parse(this.options, this.args );
+            
+            if (line.hasOption("help")) {
+                printUsage();
+                System.exit(0);
+            }
+        }
+        catch( ParseException exp ) {
+            // oops, something went wrong
+            System.err.println( "Command parsing failed. Reason: <" + exp.getMessage()+">\n" );
+            printUsage();
+            System.exit(-1);
+        } 
+    }
+    
+    private static URI getBaseURI() {
+        return UriBuilder.fromUri(
+                "http://localhost:4080/rest/").build();
+    }
+    
+    
+    public void run() throws Exception {
+        /* 
+         * Parse the command line to get the command line arguments
+         */
+        parseCommandLine();
+        
+        ClientConfig config = new DefaultClientConfig();
+        Client client = Client.create(config);
+        WebResource service = client.resource(getBaseURI());
+        String clusterName = line.getOptionValue("name");
+        
+        /*
+         * Get cluster
+         */
+        ClientResponse response;
+        response = service.path("clusters/"+clusterName).accept(MediaType.APPLICATION_JSON).type(MediaType.APPLICATION_JSON).get(ClientResponse.class);
+        if (response.getStatus() != 200) { 
+            System.err.println("Cluster get command failed. Reason [Code: <"+response.getStatus()+">, Message: <"+response.getHeaders().getFirst("ErrorMessage")+">]");
+            System.exit(-1);
+        }
+        
+        /* 
+         * Retrieve the cluster Information from the response
+         */
+        ClusterInformation clsInfo = response.getEntity(ClusterInformation.class);
+        
+        if (!line.hasOption("verbose")) {
+            System.out.println("[NAME]\t[STATE]\t[DATE_CREATED]\t[ACTIVE_SERVICES]\n");
+            
+            System.out.println("["+clsInfo.getDefinition().getName()+"]\t"+
+                               "["+clsInfo.getState().getState()+"]\t"+
+                               "["+clsInfo.getState().getCreationTime()+"]\t"+
+                               "["+clsInfo.getDefinition().getEnabledServices()+"]\n");
+        } else {
+            System.out.println("Cluster Information document:\n");
+            printClusterInformation(clsInfo);
+        }
+    }
+}
diff --git a/client/src/main/java/org/apache/ambari/client/ClusterList.java b/client/src/main/java/org/apache/ambari/client/ClusterList.java
new file mode 100644
index 0000000..6e4338c
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/client/ClusterList.java
@@ -0,0 +1,160 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.client;
+
+import java.net.URI;
+import java.util.List;
+
+import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.UriBuilder;
+
+import org.apache.ambari.common.rest.entities.ClusterInformation;
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.GnuParser;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.OptionBuilder;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.ParseException;
+
+import com.sun.jersey.api.client.Client;
+import com.sun.jersey.api.client.ClientResponse;
+import com.sun.jersey.api.client.GenericType;
+import com.sun.jersey.api.client.WebResource;
+import com.sun.jersey.api.client.config.ClientConfig;
+import com.sun.jersey.api.client.config.DefaultClientConfig;
+
+public class ClusterList extends Command {
+
+    String[] args = null;
+    Options options = null;
+   
+    CommandLine line;
+    
+    public ClusterList() {
+    }
+    
+    public ClusterList (String [] args) throws Exception {  
+        /*
+         * Build options for cluster list
+         */
+        this.args = args;
+        addOptions();
+    }
+    
+    public void printUsage () {
+        HelpFormatter formatter = new HelpFormatter();
+        formatter.printHelp( "ambari cluster list", this.options);
+    }
+    
+    public void addOptions () {
+             
+        Option help = new Option( "help", "Help" );
+        Option verbose = new Option( "verbose", "Verbose mode" );
+        
+        OptionBuilder.withArgName("cluster_state");
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription( "State of the clusters to be listed");
+        Option state = OptionBuilder.create( "state" );
+        
+        this.options = new Options();
+        options.addOption(state);
+        options.addOption( verbose );   
+        options.addOption(help);
+    }
+    
+    public void parseCommandLine() {
+     
+        // create the parser
+        CommandLineParser parser = new GnuParser();
+        try {
+            // parse the command line arguments
+            line = parser.parse(this.options, this.args );
+            
+            if (line.hasOption("help")) {
+                printUsage();
+                System.exit(0);
+            }
+        }
+        catch( ParseException exp ) {
+            // oops, something went wrong
+            System.err.println( "Command parsing failed. Reason: <" + exp.getMessage()+">\n" );
+            printUsage();
+            System.exit(-1);
+        } 
+    }
+    
+    private static URI getBaseURI() {
+        return UriBuilder.fromUri(
+                "http://localhost:4080/rest/").build();
+    }
+    
+    
+    public void run() throws Exception {
+        /* 
+         * Parse the command line to get the command line arguments
+         */
+        parseCommandLine();
+        
+        ClientConfig config = new DefaultClientConfig();
+        Client client = Client.create(config);
+        WebResource service = client.resource(getBaseURI());
+        String clusterState;
+        if (!line.hasOption("state")) {
+            clusterState = "ALL";
+        } else {
+            clusterState = line.getOptionValue("state");
+        }
+        
+        /*
+         * list clusters
+         */
+        ClientResponse response;
+        response = service.path("clusters").queryParam("state", clusterState).accept(MediaType.APPLICATION_JSON).type(MediaType.APPLICATION_JSON).get(ClientResponse.class);
+        if (response.getStatus() == 204) {
+            System.exit(0);
+        }
+        
+        if (response.getStatus() != 200) { 
+            System.err.println("Cluster list command failed. Reason [Code: <"+response.getStatus()+">, Message: <"+response.getHeaders().getFirst("ErrorMessage")+">]");
+            System.exit(-1);
+        }
+        
+        /* 
+         * Retrieve the cluster Information from the response
+         */
+        List<ClusterInformation> clsInfos = response.getEntity(new GenericType<List<ClusterInformation>>(){});
+        
+        if (!line.hasOption("verbose")) {
+            System.out.println("[NAME]\t[STATE]\t[DATE_CREATED]\t[ACTIVE_SERVICES]\n");
+            for (ClusterInformation clsInfo : clsInfos ) {
+                System.out.println("["+clsInfo.getDefinition().getName()+"]\t"+
+                                   "["+clsInfo.getState().getState()+"]\t"+
+                                   "["+clsInfo.getState().getCreationTime()+"]\t"+
+                                   "["+clsInfo.getDefinition().getEnabledServices()+"]\n");
+            }
+        } else {
+            System.out.println("Cluster Information documents:\n");
+            for (ClusterInformation clsInfo : clsInfos ) {
+              printClusterInformation(clsInfo);
+              System.out.println("\n");
+            }
+        }
+    }
+}
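
A note on the GenericType usage above: because of Java type erasure, asking Jersey for
List.class would lose the element type, so the client passes an anonymous GenericType
literal to keep List<ClusterInformation> intact through deserialization. The same
pattern applies to any collection entity; in the sketch below the resource path and
element type are illustrative only.

    import java.util.List;

    import javax.ws.rs.core.MediaType;

    import com.sun.jersey.api.client.Client;
    import com.sun.jersey.api.client.GenericType;
    import com.sun.jersey.api.client.WebResource;

    public final class GenericTypeExample {
        public static void main(String[] args) {
            WebResource service = Client.create()
                .resource("http://localhost:4080/rest/");
            // The anonymous subclass captures List<String> at runtime,
            // which a plain Class literal cannot express.
            List<String> names = service.path("examples")
                .accept(MediaType.APPLICATION_JSON)
                .get(new GenericType<List<String>>() {});
            System.out.println(names);
        }
    }
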
diff --git a/client/src/main/java/org/apache/ambari/client/ClusterNodes.java b/client/src/main/java/org/apache/ambari/client/ClusterNodes.java
new file mode 100644
index 0000000..883d6d6
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/client/ClusterNodes.java
@@ -0,0 +1,168 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.client;
+
+import java.net.URI;
+import java.net.URL;
+import java.util.List;
+
+import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.UriBuilder;
+
+import org.apache.ambari.common.rest.entities.Node;
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.GnuParser;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.OptionBuilder;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.ParseException;
+
+import com.sun.jersey.api.client.Client;
+import com.sun.jersey.api.client.ClientResponse;
+import com.sun.jersey.api.client.GenericType;
+import com.sun.jersey.api.client.WebResource;
+import com.sun.jersey.api.client.config.ClientConfig;
+import com.sun.jersey.api.client.config.DefaultClientConfig;
+
+public class ClusterNodes extends Command {
+
+    String[] args = null;
+    Options options = null;
+    
+    String urlPath = "/clusters";
+    URL resourceURL = null;
+    CommandLine line;
+    
+    public ClusterNodes() {
+    }
+    
+    public ClusterNodes (String [] args) throws Exception {  
+        /*
+         * Build options for cluster nodes
+         */
+        this.args = args;
+        addOptions();
+        this.resourceURL = new URL (this.baseURLString + this.urlPath);
+    }
+    
+    public void printUsage () {
+        HelpFormatter formatter = new HelpFormatter();
+        formatter.printHelp( "ambari cluster create", this.options);
+    }
+    
+    public void addOptions () {
+             
+        Option help = new Option( "help", "Help" );
+        
+        OptionBuilder.withArgName("cluster_name");
+        OptionBuilder.isRequired();
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription( "Name of the cluster to be created");
+        Option name = OptionBuilder.create( "name" );
+        
+        OptionBuilder.withArgName("role_name");
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription( "Role name to get list of nodes associated with specified role");
+        Option role = OptionBuilder.create( "role");
+        
+        OptionBuilder.withArgName( "[true/false]" );
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription(  "Node state alive as true or false" );
+        Option alive = OptionBuilder.create( "alive" );
+  
+        this.options = new Options();
+
+        options.addOption( name );
+        options.addOption( role );   
+        options.addOption( alive );
+        options.addOption(help);
+    }
+    
+    public void parseCommandLine() {
+     
+        // create the parser
+        CommandLineParser parser = new GnuParser();
+        try {
+            // parse the command line arguments
+            line = parser.parse(this.options, this.args );
+            
+            if (line.hasOption("help")) {
+                printUsage();
+                System.exit(0);
+            }
+        }
+        catch( ParseException exp ) {
+            // oops, something went wrong
+            System.err.println( "Command parsing failed. Reason: <" + exp.getMessage()+">\n" );
+            printUsage();
+            System.exit(-1);
+        } 
+    }
+    
+    private static URI getBaseURI() {
+        return UriBuilder.fromUri(
+                "http://localhost:4080/rest/").build();
+    }
+    
+    public void run() throws Exception {
+        /* 
+         * Parse the command line to get the command line arguments
+         */
+        parseCommandLine();
+        
+        ClientConfig config = new DefaultClientConfig();
+        Client client = Client.create(config);
+        WebResource service = client.resource(getBaseURI());
+        String clusterName = line.getOptionValue("name");
+        String role = ""; 
+        String alive = "";
+        if (line.getOptionValue("alive") != null) { alive = line.getOptionValue("alive"); }
+        if (line.getOptionValue("role") != null) { role = line.getOptionValue("role"); }
+        
+        
+        /*
+         * Get Cluster node list
+         */
+        ClientResponse response = service.path("clusters/"+clusterName+"/nodes")
+                      .queryParam("alive", alive)
+                      .queryParam("role", role)
+                      .accept(MediaType.APPLICATION_JSON)
+                      .type(MediaType.APPLICATION_JSON).get(ClientResponse.class);
+        if (response.getStatus() == 204) {
+            System.out.println ("No nodes are associated.");
+            System.exit(0);
+        }
+        if (response.getStatus() != 200) { 
+            System.err.println("Cluster nodes command failed. Reason [Code: <"+response.getStatus()+">, Message: <"+response.getHeaders().getFirst("ErrorMessage")+">]");
+            System.exit(-1);
+        }
+        
+        /* 
+         * Retrieve the node list from response
+         */
+        List<Node> nodes = response.getEntity(new GenericType<List<Node>>(){});
+        
+        System.out.println("List of cluster nodes: \n");
+        for (Node node : nodes ) {
+            printNodeInformation(node);
+            System.out.println("\n");
+        }
+    }
+}
diff --git a/client/src/main/java/org/apache/ambari/client/ClusterRename.java b/client/src/main/java/org/apache/ambari/client/ClusterRename.java
new file mode 100644
index 0000000..7872968
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/client/ClusterRename.java
@@ -0,0 +1,138 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.client;
+
+import java.net.URI;
+import java.net.URL;
+import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.UriBuilder;
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.GnuParser;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.OptionBuilder;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.ParseException;
+
+import com.sun.jersey.api.client.Client;
+import com.sun.jersey.api.client.ClientResponse;
+import com.sun.jersey.api.client.WebResource;
+import com.sun.jersey.api.client.config.ClientConfig;
+import com.sun.jersey.api.client.config.DefaultClientConfig;
+
+public class ClusterRename extends Command {
+
+    String[] args = null;
+    Options options = null;
+    
+    String urlPath = "/clusters";
+    URL resourceURL = null;
+    CommandLine line;
+    
+    public ClusterRename() {
+    }
+    
+    public ClusterRename (String [] args) throws Exception {  
+        /*
+         * Build options for cluster rename
+         */
+        this.args = args;
+        addOptions();
+        this.resourceURL = new URL (this.baseURLString + this.urlPath);
+    }
+    
+    public void printUsage () {
+        HelpFormatter formatter = new HelpFormatter();
+        formatter.printHelp( "ambari cluster rename", this.options);
+    }
+    
+    public void addOptions () {
+             
+        Option help = new Option( "help", "Help" );
+        
+        OptionBuilder.withArgName("cluster_name");
+        OptionBuilder.isRequired();
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription( "Name of the cluster to be renamed");
+        Option name = OptionBuilder.create( "name" );
+        
+        OptionBuilder.withArgName("new_name");
+        OptionBuilder.isRequired();
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription( "New name of the cluster");
+        Option new_name = OptionBuilder.create( "new_name" );
+        
+        this.options = new Options();
+        options.addOption( new_name );   
+        options.addOption( name );
+        options.addOption(help);
+    }
+    
+    public void parseCommandLine() {
+     
+        // create the parser
+        CommandLineParser parser = new GnuParser();
+        try {
+            // parse the command line arguments
+            line = parser.parse(this.options, this.args );
+            
+            if (line.hasOption("help")) {
+                printUsage();
+                System.exit(0);
+            }
+        }
+        catch( ParseException exp ) {
+            // oops, something went wrong
+            System.err.println( "Command parsing failed. Reason: <" + exp.getMessage()+">\n" );
+            printUsage();
+            System.exit(-1);
+        } 
+    }
+    
+    private static URI getBaseURI() {
+        return UriBuilder.fromUri(
+                "http://localhost:4080/rest/").build();
+    }
+    
+    
+    public void run() throws Exception {
+        /* 
+         * Parse the command line to get the command line arguments
+         */
+        parseCommandLine();
+        
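+        // Set up a Jersey client pointed at the Ambari controller's REST endpoint.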
+        ClientConfig config = new DefaultClientConfig();
+        Client client = Client.create(config);
+        WebResource service = client.resource(getBaseURI());
+          
+        /*
+         * Rename cluster
+         */
+        String path = "clusters/" + line.getOptionValue("name") + "/rename";
+        ClientResponse response = service.path(path)
+                       .queryParam("new_name", line.getOptionValue("new_name"))
+                       .accept(MediaType.APPLICATION_JSON).type(MediaType.APPLICATION_JSON)
+                       .put(ClientResponse.class);
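+        // The rename resource responds with 204 (No Content) on success; anything else is an error.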
+        if (response.getStatus() != 204) { 
+            System.err.println("Cluster rename command failed. Reason [Code: <"+response.getStatus()+">, Message: <"+response.getHeaders().getFirst("ErrorMessage")+">]");
+            System.exit(-1);
+        }
+        System.out.println("Cluster: ["+line.getOptionValue("name")+"] renamed to ["+line.getOptionValue("new_name")+"].\n");
+    }
+}
diff --git a/client/src/main/java/org/apache/ambari/client/ClusterStack.java b/client/src/main/java/org/apache/ambari/client/ClusterStack.java
new file mode 100644
index 0000000..691916d
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/client/ClusterStack.java
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.client;
+
+import java.net.URI;
+import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.UriBuilder;
+
+import org.apache.ambari.common.rest.entities.Stack;
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.GnuParser;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.OptionBuilder;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.ParseException;
+
+import com.sun.jersey.api.client.Client;
+import com.sun.jersey.api.client.ClientResponse;
+import com.sun.jersey.api.client.WebResource;
+import com.sun.jersey.api.client.config.ClientConfig;
+import com.sun.jersey.api.client.config.DefaultClientConfig;
+
+public class ClusterStack extends Command {
+
+    String[] args = null;
+    Options options = null;
+    CommandLine line;
+    
+    public ClusterStack() {
+    }
+    
+    public ClusterStack (String [] args) throws Exception {  
+        /*
+         * Build options for cluster stack
+         */
+        this.args = args;
+        addOptions();
+    }
+    
+    public void printUsage () {
+        HelpFormatter formatter = new HelpFormatter();
+        formatter.printHelp( "ambari cluster stack", this.options);
+    }
+    
+    public void addOptions () {
+             
+        Option help = new Option( "help", "Help" );
+        Option expanded = new Option( "expanded", "Return the expanded stack with parent stacks inlined" );
+        
+        OptionBuilder.withArgName("cluster_name");
+        OptionBuilder.isRequired();
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription( "Name of the cluster");
+        Option name = OptionBuilder.create( "name" );
+        
+        OptionBuilder.withArgName("file_path");
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription( "File path to store the stack locally on client side");
+        Option file = OptionBuilder.create( "file" );
+        
+        this.options = new Options(); 
+        options.addOption( name );
+        options.addOption( file );
+        options.addOption( expanded );
+        options.addOption(help);
+    }
+    
+    public void parseCommandLine() {
+     
+        // create the parser
+        CommandLineParser parser = new GnuParser();
+        try {
+            // parse the command line arguments
+            line = parser.parse(this.options, this.args );
+            
+            if (line.hasOption("help")) {
+                printUsage();
+                System.exit(0);
+            }
+        }
+        catch( ParseException exp ) {
+            // oops, something went wrong
+            System.err.println( "Command parsing failed. Reason: <" + exp.getMessage()+">\n" );
+            printUsage();
+            System.exit(-1);
+        } 
+    }
+    
+    private static URI getBaseURI() {
+        return UriBuilder.fromUri(
+                "http://localhost:4080/rest/").build();
+    }
+    
+    
+    public void run() throws Exception {
+        /* 
+         * Parse the command line to get the command line arguments
+         */
+        parseCommandLine();
+        
+        ClientConfig config = new DefaultClientConfig();
+        Client client = Client.create(config);
+        WebResource service = client.resource(getBaseURI());
+        String clusterName = line.getOptionValue("name");
+        
+        /*
+         * Get the cluster stack, optionally expanded to inline the parent stacks
+         */
+        ClientResponse response = service
+                .path("clusters/"+clusterName+"/stack")
+                .queryParam("expanded", Boolean.toString(line.hasOption("expanded")))
+                .accept(MediaType.APPLICATION_JSON).type(MediaType.APPLICATION_JSON)
+                .get(ClientResponse.class);
+        if (response.getStatus() != 200) { 
+            System.err.println("Get cluster stack command failed. Reason [Code: <"+response.getStatus()+">, Message: <"+response.getHeaders().getFirst("ErrorMessage")+">]");
+            System.exit(-1);
+        }
+        
+        /* 
+         * Retrieve the cluster stack
+         */
+        Stack stack = response.getEntity(Stack.class);
+        printStack(stack, line.getOptionValue("file"));
+    }
+}
diff --git a/client/src/main/java/org/apache/ambari/client/ClusterUpdate.java b/client/src/main/java/org/apache/ambari/client/ClusterUpdate.java
new file mode 100644
index 0000000..37e4e04
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/client/ClusterUpdate.java
@@ -0,0 +1,280 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.client;
+
+import java.net.URI;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Properties;
+
+import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.UriBuilder;
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.Marshaller;
+
+import org.apache.ambari.common.rest.entities.ClusterDefinition;
+import org.apache.ambari.common.rest.entities.ClusterState;
+import org.apache.ambari.common.rest.entities.RoleToNodes;
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.GnuParser;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.OptionBuilder;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.ParseException;
+
+import com.sun.jersey.api.client.Client;
+import com.sun.jersey.api.client.ClientResponse;
+import com.sun.jersey.api.client.WebResource;
+import com.sun.jersey.api.client.config.ClientConfig;
+import com.sun.jersey.api.client.config.DefaultClientConfig;
+
+public class ClusterUpdate extends Command {
+
+    String[] args = null;
+    Options options = null;
+    
+    String urlPath = "/clusters";
+    URL resourceURL = null;
+    CommandLine line;
+    String dry_run = "false";
+    
+    Properties roleToNodeExpressions = null;
+    List<RoleToNodes> roleToNodeList = null;
+    
+    public ClusterUpdate() {
+    }
+    
+    public ClusterUpdate (String [] args) throws Exception {  
+        /*
+         * Build options for cluster update
+         */
+        this.args = args;
+        addOptions();
+        this.resourceURL = new URL (this.baseURLString + this.urlPath);
+    }
+    
+    public void printUsage () {
+        HelpFormatter formatter = new HelpFormatter();
+        formatter.printHelp( "ambari cluster update", this.options);
+    }
+    
+    public void addOptions () {
+             
+        Option wait = new Option( "wait", "Optionally wait for cluster to reach desired state" );
+        Option dry_run = new Option( "dry_run", "Dry run" );
+        Option help = new Option( "help", "Help" );
+        
+        OptionBuilder.withArgName("cluster_name");
+        OptionBuilder.isRequired();
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription( "Name of the cluster to be updated");
+        Option name = OptionBuilder.create( "name" );
+        
+        OptionBuilder.withArgName("stack_name");
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription( "Name of the cluster stack");
+        Option stack = OptionBuilder.create( "stack" );
+        
+        OptionBuilder.withArgName( "\"node_exp1; node_exp2; ...\"" );
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription(  "List of node range expressions separated by semicolon (;) and contained in double quotes (\"\")" );
+        Option nodes = OptionBuilder.create( "nodes" );
+        
+        OptionBuilder.withArgName( "stack_revision" );
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription(  "Stack revision" );
+        Option revision = OptionBuilder.create( "revision" );
+        
+        OptionBuilder.withArgName( "description" );
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription(  "Description associated with cluster" );
+        Option desc = OptionBuilder.create( "desc" );
+        
+        OptionBuilder.withArgName( "goalstate" );
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription(  "Desired goal state of the cluster" );
+        Option goalstate = OptionBuilder.create( "goalstate" );
+        
+        OptionBuilder.withArgName( "\"component-1; component-2; ...\"" );
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription(  "List of components to be active in the cluster. Components are separated by semicolon \";\"" );
+        Option services = OptionBuilder.create( "services" );
+        
+        OptionBuilder.withArgName( "rolename=\"node_exp1; node_exp2; ... \"" );
+        OptionBuilder.hasArgs(2);
+        OptionBuilder.withValueSeparator();
+        OptionBuilder.withDescription( "Node range expressions for a given rolename separated by semicolon (;) and contained in double quotes (\"\")" );
+        Option role = OptionBuilder.create( "role" );
+
+        this.options = new Options();
+        options.addOption( wait );   
+        options.addOption(dry_run);
+        options.addOption( name );
+        options.addOption( stack );   
+        options.addOption(revision);
+        options.addOption( desc );
+        options.addOption( role );
+        options.addOption( goalstate );
+        options.addOption( nodes );
+        options.addOption( services );
+        options.addOption(help);
+    }
+    
+    public void parseCommandLine() {
+     
+        // create the parser
+        CommandLineParser parser = new GnuParser();
+        try {
+            // parse the command line arguments
+            line = parser.parse(this.options, this.args );
+            
+            if (line.hasOption("help")) {
+                printUsage();
+                System.exit(0);
+            }
+            
+            if (line.hasOption("dry_run")) {
+                dry_run = "true";
+            }
+            
+        }
+        catch( ParseException exp ) {
+            // oops, something went wrong
+            System.err.println( "Command parsing failed. Reason: <" + exp.getMessage()+">\n" );
+            printUsage();
+            System.exit(-1);
+        } 
+    }
+    
+    private static URI getBaseURI() {
+        return UriBuilder.fromUri(
+                "http://localhost:4080/rest/").build();
+    }
+    
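+    /*
+     * Convert the rolename=node-expression pairs parsed from the -role options
+     * into the RoleToNodes list expected by the REST API.
+     */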
+    public static 
+    List<RoleToNodes> getRoleToNodesList (Properties roleToNodeExpressions) {
+        if (roleToNodeExpressions == null) { return null; }
+        
+        List<RoleToNodes> roleToNodesMap = new ArrayList<RoleToNodes>();
+        for (String roleName : roleToNodeExpressions.stringPropertyNames()) {
+            RoleToNodes e = new RoleToNodes();
+            e.setRoleName(roleName);
+            e.setNodes(roleToNodeExpressions.getProperty(roleName));
+            roleToNodesMap.add(e);
+        }
+        return roleToNodesMap;
+    }
+    
+    private List<String> splitServices(String services) {
+      if (services == null) { return null; }
+      // The -services option documents a semicolon-separated list, so split on ";".
+      String[] arr = services.split(";");
+      List<String> result = new ArrayList<String>(arr.length);
+      for (String x: arr) {
+          result.add(x.trim());
+      }
+      return result;
+    }
+
+    public void run() throws Exception {
+        /* 
+         * Parse the command line to get the command line arguments
+         */
+        parseCommandLine();
+        
+        ClientConfig config = new DefaultClientConfig();
+        Client client = Client.create(config);
+        WebResource service = client.resource(getBaseURI());
+        
+        // Create Cluster Definition 
+        ClusterDefinition clsDef = new ClusterDefinition();
+        clsDef.setName(line.getOptionValue("name"));
+        clsDef.setStackName(line.getOptionValue("stack"));
+        clsDef.setNodes(line.getOptionValue("nodes"));
+        
+        clsDef.setGoalState(line.getOptionValue("goalstate"));
+        clsDef.setStackRevision(line.getOptionValue("revision"));
+        clsDef.setEnabledServices(splitServices(line.getOptionValue("services")));
+        clsDef.setDescription(line.getOptionValue("desc"));
+        clsDef.setRoleToNodesMap(getRoleToNodesList(line.getOptionProperties("role")));
+        
+        /*
+         * Update cluster
+         */
+        ClientResponse response = service.path("clusters/"+clsDef.getName())
+                .queryParam("dry_run", dry_run)
+                .accept(MediaType.APPLICATION_JSON).type(MediaType.APPLICATION_JSON)
+                .put(ClientResponse.class, clsDef);
+        if (response.getStatus() != 200) { 
+            System.err.println("Cluster update command failed. Reason [Code: <"+response.getStatus()+">, Message: <"+response.getHeaders().getFirst("ErrorMessage")+">]");
+            System.exit(-1);
+        }
+        
+        /* 
+         * Retrieve the cluster definition from the response
+         */
+        ClusterDefinition def = response.getEntity(ClusterDefinition.class);
+        
+        /*
+         * If dry_run, print the cluster definition and return
+         */
+        if (line.hasOption("dry_run")) {
+            System.out.println("Cluster: ["+def.getName()+"] updated. Mode: dry_run.\n");
+            printClusterDefinition(def);
+            return;
+        }
+        
+        /*
+         * If not waiting (no -wait flag, or no goalstate to wait for), print the cluster definition and return
+         * TODO: 
+         */
+        if (!line.hasOption("wait") || !line.hasOption("goalstate")) {
+           System.out.println("Cluster: ["+def.getName()+"] updated.\n");
+           printClusterDefinition(def);
+           return; 
+        }
+        
+        /*
+         * If wait option is specified then wait for cluster state to reach the desired state 
+         */
+        ClusterState clusterState;
+        for (;;) {
+            response = service.path("clusters/"+def.getName()+"/state")
+                    .accept(MediaType.APPLICATION_JSON).type(MediaType.APPLICATION_JSON)
+                    .get(ClientResponse.class);
+            if (response.getStatus() != 200) { 
+                System.err.println("Failed to get the cluster state. Reason [Code: <"+response.getStatus()+">, Message: <"+response.getHeaders().getFirst("ErrorMessage")+">]");
+                System.exit(-1);
+            }
+            
+            clusterState = response.getEntity(ClusterState.class);
+            if (clusterState.getState().equals(def.getGoalState())) {
+                break;
+            }
+            System.out.println("Waiting for cluster ["+def.getName()+"] to get to desired goalstate of ["+def.getGoalState()+"]");
+            Thread.sleep(15 * 60000);
+        }  
+        
+        System.out.println("Cluster: ["+def.getName()+"] updated. Cluster state: ["+clusterState.getState()+"]\n");
+        printClusterDefinition(def);
+    }
+}
diff --git a/client/src/main/java/org/apache/ambari/client/Command.java b/client/src/main/java/org/apache/ambari/client/Command.java
new file mode 100644
index 0000000..49e7762
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/client/Command.java
@@ -0,0 +1,214 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.client;
+
+import java.io.File;
+
+import javax.ws.rs.core.MediaType;
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.Marshaller;
+import org.apache.ambari.common.rest.entities.Stack;
+import org.apache.ambari.common.rest.entities.StackInformation;
+import org.apache.ambari.common.rest.entities.ClusterDefinition;
+import org.apache.ambari.common.rest.entities.ClusterInformation;
+import org.apache.ambari.common.rest.entities.Node;
+import com.sun.jersey.api.client.ClientResponse;
+import com.sun.jersey.api.client.WebResource;
+import com.sun.jersey.api.json.JSONJAXBContext;
+import com.sun.jersey.api.json.JSONMarshaller;
+
+public abstract class Command {
+    
+    protected String baseURLString = "http://localhost:4080/rest";
+    
+    public Command() {
+    }
+    
+    public void printClusterDefinition(ClusterDefinition def) throws Exception {
+        JAXBContext jc = JAXBContext.newInstance(org.apache.ambari.common.rest.entities.ClusterDefinition.class);
+        Marshaller m = jc.createMarshaller();
+        m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
+        m.marshal(def, System.out);
+    }
+    
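+    /*
+     * Marshal the cluster definition as formatted JSON to stdout.
+     */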
+    public void printClusterDefinitionJSON(ClusterDefinition def) throws Exception {
+        JAXBContext jc = JSONJAXBContext.newInstance(org.apache.ambari.common.rest.entities.ClusterDefinition.class);
+        Marshaller m = jc.createMarshaller(); 
+        JSONMarshaller mx = JSONJAXBContext.getJSONMarshaller(m);
+        mx.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
+        mx.marshallToJSON(def, System.out);
+    }
+    
+    public void printClusterInformation(ClusterInformation clsInfo) throws Exception {
+        JAXBContext jc = JAXBContext.newInstance(org.apache.ambari.common.rest.entities.ClusterInformation.class);
+        Marshaller m = jc.createMarshaller();
+        m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
+        m.marshal(clsInfo, System.out);
+    }
+    
+    public void printNodeInformation(Node node) throws Exception {
+        JAXBContext jc = JAXBContext.newInstance(org.apache.ambari.common.rest.entities.Node.class);
+        Marshaller m = jc.createMarshaller();
+        m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
+        m.marshal(node, System.out);
+    }
+    
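+    /*
+     * Marshal the given stack as formatted XML, either to stdout or to file_path when one is given.
+     */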
+    public void printStack(Stack stack, String file_path) throws Exception {
+        JAXBContext jc = JAXBContext.newInstance(org.apache.ambari.common.rest.entities.Stack.class);
+        Marshaller m = jc.createMarshaller();
+        m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
+        if (file_path == null) {
+            m.marshal(stack, System.out);
+        } else {
+            m.marshal(stack, new File(file_path));
+        }
+    }
+    
+    /*
+     * TODO: Return Stack objects instead of StackInformation???
+     */
+    public void printStackInformation (WebResource service, StackInformation bpInfo, boolean tree) {
+        
+        System.out.println("\nName:["+bpInfo.getName()+"], Revision:["+bpInfo.getRevision()+"]");
+        
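+        // Walk the chain of parent stacks, printing each one with increasing indentation.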
+        if (tree) {
+            String tab = "    ";
+            while (bpInfo.getParentName() != null) {    
+                System.out.println(tab+":-> Name:["+bpInfo.getParentName()+"], Revision:["+bpInfo.getParentRevision()+"]");
+                ClientResponse response = service.path("stacks/"+bpInfo.getParentName())
+                        .accept(MediaType.APPLICATION_JSON).type(MediaType.APPLICATION_JSON).get(ClientResponse.class);
+                if (response.getStatus() != 404 && response.getStatus() != 200) { 
+                    System.err.println("Stack list command failed. Reason [Code: <"+response.getStatus()+">, Message: <"+response.getHeaders().getFirst("ErrorMessage")+">]");
+                    System.exit(-1);
+                }
+                if (response.getStatus() == 404) {
+                    System.exit(0);
+                }
+                /* 
+                 * Retrieve the stack from the response
+                 * TODO: 
+                 */
+                Stack bp = response.getEntity(Stack.class);
+                bpInfo.setName(bp.getName());
+                bpInfo.setParentName(bp.getParentName());
+                bpInfo.setRevision(bp.getRevision());
+                bpInfo.setParentRevision(bp.getParentRevision());
+                tab = tab+"        ";
+            }
+        } 
+    }
+    
+    public void printStackInformation (WebResource service, Stack bp, boolean tree) {
+        System.out.println("\nName:["+bp.getName()+"], Revision:["+bp.getRevision()+"]");
+
+        if (tree) {
+            String tab = "    ";
+            while (bp.getParentName() != null) {
+                
+                System.out.println(tab+":-> Name:["+bp.getParentName()+"], Revision:["+bp.getParentRevision()+"]");
+                ClientResponse response = service.path("stacks/"+bp.getParentName())
+                        .accept(MediaType.APPLICATION_JSON).type(MediaType.APPLICATION_JSON).get(ClientResponse.class);
+                if (response.getStatus() != 404 && response.getStatus() != 200) { 
+                    System.err.println("Stack list command failed. Reason [Code: <"+response.getStatus()+">, Message: <"+response.getHeaders().getFirst("ErrorMessage")+">]");
+                    System.exit(-1);
+                }
+                if (response.getStatus() == 404) {
+                    System.exit(0);
+                }
+                /* 
+                 * Retrieve the stack from the response
+                 */
+                bp = response.getEntity(Stack.class);
+                tab = tab+"        ";
+            }
+        }
+    }
+    
+    public abstract void run () throws Exception;
+}
diff --git a/client/src/main/java/org/apache/ambari/client/NodeGet.java b/client/src/main/java/org/apache/ambari/client/NodeGet.java
new file mode 100644
index 0000000..2323365
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/client/NodeGet.java
@@ -0,0 +1,134 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.client;
+
+import java.net.URI;
+
+import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.UriBuilder;
+
+import org.apache.ambari.common.rest.entities.Node;
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.GnuParser;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.OptionBuilder;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.ParseException;
+
+import com.sun.jersey.api.client.Client;
+import com.sun.jersey.api.client.ClientResponse;
+import com.sun.jersey.api.client.WebResource;
+import com.sun.jersey.api.client.config.ClientConfig;
+import com.sun.jersey.api.client.config.DefaultClientConfig;
+
+public class NodeGet extends Command {
+
+    String[] args = null;
+    Options options = null;
+   
+    CommandLine line;
+    
+    public NodeGet() {
+    }
+    
+    public NodeGet (String [] args) throws Exception {  
+        /*
+         * Build options for node get
+         */
+        this.args = args;
+        addOptions();
+    }
+    
+    public void printUsage () {
+        HelpFormatter formatter = new HelpFormatter();
+        formatter.printHelp( "ambari node get", this.options);
+    }
+    
+    public void addOptions () {
+             
+        Option help = new Option( "help", "Help" );
+        
+        OptionBuilder.withArgName("node_name");
+        OptionBuilder.isRequired();
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription( "Name of the node");
+        Option name = OptionBuilder.create( "name" );
+
+        this.options = new Options();
+        options.addOption( name );   
+        options.addOption(help);
+    }
+    
+    public void parseCommandLine() {
+     
+        // create the parser
+        CommandLineParser parser = new GnuParser();
+        try {
+            // parse the command line arguments
+            line = parser.parse(this.options, this.args );
+            
+            if (line.hasOption("help")) {
+                printUsage();
+                System.exit(0);
+            }
+        }
+        catch( ParseException exp ) {
+            // oops, something went wrong
+            System.err.println( "Command parsing failed. Reason: <" + exp.getMessage()+">\n" );
+            printUsage();
+            System.exit(-1);
+        } 
+    }
+    
+    private static URI getBaseURI() {
+        return UriBuilder.fromUri(
+                "http://localhost:4080/rest/").build();
+    }
+    
+    
+    public void run() throws Exception {
+        /* 
+         * Parse the command line to get the command line arguments
+         */
+        parseCommandLine();
+        
+        ClientConfig config = new DefaultClientConfig();
+        Client client = Client.create(config);
+        WebResource service = client.resource(getBaseURI());
+        String nodeName = line.getOptionValue("name");
+        
+        /*
+         * Get node
+         */
+        ClientResponse response;
+        response = service.path("nodes/"+nodeName)
+                   .accept(MediaType.APPLICATION_JSON).type(MediaType.APPLICATION_JSON)
+                   .get(ClientResponse.class);
+        if (response.getStatus() != 200) { 
+            System.err.println("Node get command failed. Reason [Code: <"+response.getStatus()+">, Message: <"+response.getHeaders().getFirst("ErrorMessage")+">]");
+            System.exit(-1);
+        }
+        
+        /* 
+         * Retrieve the node Information from the response
+         */
+        Node node = response.getEntity(Node.class);
+        System.out.println("Node Information:\n");
+        printNodeInformation(node);
+    }
+}
diff --git a/client/src/main/java/org/apache/ambari/client/NodeList.java b/client/src/main/java/org/apache/ambari/client/NodeList.java
new file mode 100644
index 0000000..0a99107
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/client/NodeList.java
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.client;
+
+import java.net.URI;
+import java.util.List;
+
+import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.UriBuilder;
+
+import org.apache.ambari.common.rest.entities.Node;
+import org.apache.ambari.common.rest.entities.NodeRole;
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.GnuParser;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.OptionBuilder;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.ParseException;
+
+import com.sun.jersey.api.client.Client;
+import com.sun.jersey.api.client.ClientResponse;
+import com.sun.jersey.api.client.GenericType;
+import com.sun.jersey.api.client.WebResource;
+import com.sun.jersey.api.client.config.ClientConfig;
+import com.sun.jersey.api.client.config.DefaultClientConfig;
+
+public class NodeList extends Command {
+
+    String[] args = null;
+    Options options = null;
+   
+    CommandLine line;
+    
+    public NodeList() {
+    }
+    
+    public NodeList (String [] args) throws Exception {  
+        /*
+         * Build options for node list
+         */
+        this.args = args;
+        addOptions();
+    }
+    
+    public void printUsage () {
+        HelpFormatter formatter = new HelpFormatter();
+        formatter.printHelp( "ambari node list", this.options);
+    }
+    
+    public void addOptions () {
+             
+        Option help = new Option( "help", "Help" );
+        Option verbose = new Option( "verbose", "Verbose mode" );
+        
+        OptionBuilder.withArgName("true/false");
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription( "State of the node indicating if node is allocated to some cluster. If not specified, implies both allocated and free nodes");
+        Option allocated = OptionBuilder.create( "allocated" );
+        
+        OptionBuilder.withArgName("true/false");
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription( "State of the node to be listed. If not specified, implies both alive and dead nodes");
+        Option alive = OptionBuilder.create( "alive" );
+        
+        this.options = new Options();
+        options.addOption( verbose );   
+        options.addOption(help);
+        options.addOption(allocated);
+        options.addOption(alive);
+    }
+    
+    public void parseCommandLine() {
+     
+        // create the parser
+        CommandLineParser parser = new GnuParser();
+        try {
+            // parse the command line arguments
+            line = parser.parse(this.options, this.args );
+            
+            if (line.hasOption("help")) {
+                printUsage();
+                System.exit(0);
+            }
+        }
+        catch( ParseException exp ) {
+            // oops, something went wrong
+            System.err.println( "Command parsing failed. Reason: <" + exp.getMessage()+">\n" );
+            printUsage();
+            System.exit(-1);
+        } 
+    }
+    
+    private static URI getBaseURI() {
+        return UriBuilder.fromUri(
+                "http://localhost:4080/rest/").build();
+    }
+    
+    
+    public void run() throws Exception {
+        /* 
+         * Parse the command line to get the command line arguments
+         */
+        parseCommandLine();
+        
+        ClientConfig config = new DefaultClientConfig();
+        Client client = Client.create(config);
+        WebResource service = client.resource(getBaseURI());
+        boolean verbose = line.hasOption("verbose");
+        String allocated = "";
+        if (line.hasOption("allocated")) {
+            allocated = line.getOptionValue("allocated");
+        }
+        String alive = "";
+        if (line.hasOption("alive")) {
+            alive = line.getOptionValue("alive");
+        }
+        
+        /*
+         * list nodes
+         */
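+        // Unspecified filters are sent as empty strings; per the option help, an empty value matches both states.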
+        ClientResponse response;
+        response = service.path("nodes")
+                   .queryParam("alive", alive)
+                   .queryParam("allocated", allocated)
+                   .accept(MediaType.APPLICATION_JSON).type(MediaType.APPLICATION_JSON).get(ClientResponse.class);
+        if (response.getStatus() == 204) {
+            System.exit(0);
+        }
+        
+        if (response.getStatus() != 200) { 
+            System.err.println("node list command failed. Reason [Code: <"+response.getStatus()+">, Message: <"+response.getHeaders().getFirst("ErrorMessage")+">]");
+            System.exit(-1);
+        }
+        
+        /* 
+         * Retrieve the node list from the response
+         */
+        List<Node> nodes = response.getEntity(new GenericType<List<Node>>(){});
+        
+        if (!verbose) {
+            System.out.println("[NAME]\t[LAST HEARTBEAT TIME]\t[ASSOCIATED_ROLES]\t[ACTIVE_ROLES]\t[CLUSTER_ID]\n");
+            for (Node node : nodes ) {
+                String clusterID = "";
+                if (node.getNodeState().getClusterName() != null) clusterID = node.getNodeState().getClusterName();
+                System.out.println("["+node.getName()+"]\t"+
+                                   "["+node.getNodeState().getLastHeartbeatTime()+"]\t"+
+                                   "["+node.getNodeState().getNodeRoleNames("")+"]\t"+
+                                   "["+node.getNodeState().getNodeRoleNames(NodeRole.NODE_SERVER_STATE_UP)+"]\t"+
+                                   "["+clusterID+"]\n");
+            }
+        } else {
+            System.out.println("Node List:\n");
+            for (Node node : nodes ) {
+              printNodeInformation(node);
+              System.out.println("\n");
+            }
+        }
+    }
+}
\ No newline at end of file
diff --git a/client/src/main/java/org/apache/ambari/client/StackAdd.java b/client/src/main/java/org/apache/ambari/client/StackAdd.java
new file mode 100644
index 0000000..ed96d80
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/client/StackAdd.java
@@ -0,0 +1,199 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.client;
+
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.MalformedURLException;
+import java.net.URI;
+import java.net.URL;
+
+import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.UriBuilder;
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.JAXBException;
+import javax.xml.bind.Unmarshaller;
+
+import org.apache.ambari.common.rest.entities.Stack;
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.GnuParser;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.OptionBuilder;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.ParseException;
+
+import com.sun.jersey.api.client.Client;
+import com.sun.jersey.api.client.ClientResponse;
+import com.sun.jersey.api.client.WebResource;
+import com.sun.jersey.api.client.config.ClientConfig;
+import com.sun.jersey.api.client.config.DefaultClientConfig;
+import com.sun.jersey.api.json.JSONJAXBContext;
+import com.sun.jersey.api.json.JSONUnmarshaller;
+
+public class StackAdd extends Command {
+
+    String[] args = null;
+    Options options = null;
+   
+    CommandLine line;
+    
+    public StackAdd() {
+    }
+    
+    public StackAdd (String [] args) throws Exception {  
+        /*
+         * Build options for stack add
+         */
+        this.args = args;
+        addOptions();
+    }
+    
+    public void printUsage () {
+        HelpFormatter formatter = new HelpFormatter();
+        formatter.printHelp( "ambari stack add", this.options);
+    }
+    
+    public void addOptions () {
+             
+        Option help = new Option( "help", "Help" );
+        
+        OptionBuilder.withArgName("name");
+        OptionBuilder.isRequired();
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription( "Name of the stack");
+        Option name = OptionBuilder.create( "name" );
+        
+        OptionBuilder.withArgName("location");
+        OptionBuilder.isRequired();
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription( "Either URL or local file path where stack in JSON format is available");
+        Option location = OptionBuilder.create( "location" );
+        
+        this.options = new Options();
+        options.addOption(location);
+        options.addOption(name);
+        options.addOption(help);
+    }
+    
+    public void parseCommandLine() {
+     
+        // create the parser
+        CommandLineParser parser = new GnuParser();
+        try {
+            // parse the command line arguments
+            line = parser.parse(this.options, this.args );
+            
+            if (line.hasOption("help")) {
+                printUsage();
+                System.exit(0);
+            }
+        }
+        catch( ParseException exp ) {
+            // oops, something went wrong
+            System.err.println( "Command parsing failed. Reason: <" + exp.getMessage()+">\n" );
+            printUsage();
+            System.exit(-1);
+        } 
+    }
+    
+    private static URI getBaseURI() {
+        return UriBuilder.fromUri(
+                "http://localhost:4080/rest/").build();
+    }
+    
+    
+    public void run() throws Exception {
+        /* 
+         * Parse the command line to get the command line arguments
+         */
+        parseCommandLine();
+        
+        ClientConfig config = new DefaultClientConfig();
+        Client client = Client.create(config);
+        WebResource service = client.resource(getBaseURI());
+        String location = line.getOptionValue("location");
+        String name = line.getOptionValue("name");
+        
+        /*
+         * Import stack 
+         */
+        File f = new File(location);
+        ClientResponse response = null;
+        if (!f.exists()) {
+            try {
+                // Validate that the location at least parses as a URL before handing it to the server.
+                new URL(location);
+            } catch (MalformedURLException x) {
+                System.out.println("Specified location is either a non-existing file path or a malformed URL");
+                System.exit(-1);
+            }
+            Stack bp = new Stack();
+            response = service.path("stacks/"+name)
+                    .queryParam("url", location)
+                    .accept(MediaType.APPLICATION_JSON).type(MediaType.APPLICATION_JSON).put(ClientResponse.class, bp);
+        } else {
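+            // Local file: pick the request media type from the file extension so the payload is parsed correctly.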
+            Stack bp = null;
+            if (f.getName().endsWith(".json")) {
+                bp = this.readStackFromJSONFile(f);
+                response = service.path("stacks/"+name)
+                        .accept(MediaType.APPLICATION_JSON).type(MediaType.APPLICATION_JSON).put(ClientResponse.class, bp);
+            } else if (f.getName().endsWith(".xml")) {
+                bp = this.readStackFromXMLFile(f);
+                response = service.path("stacks/"+name)
+                        .accept(MediaType.APPLICATION_XML).type(MediaType.APPLICATION_XML).put(ClientResponse.class, bp);
+            } else {
+                System.out.println("Specified stack file does not end with .json or .xml");
+                System.exit(-1);
+            }
+            
+        }     
+        
+        if (response.getStatus() != 200) { 
+            System.err.println("Stack add command failed. Reason [Code: <"+response.getStatus()+">, Message: <"+response.getHeaders().getFirst("ErrorMessage")+">]");
+            System.exit(-1);
+        }
+        
+        Stack bp_return = response.getEntity(Stack.class);
+        
+        System.out.println("Stack added.\n");
+        printStack(bp_return, null);
+    }
+    
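+    /*
+     * Unmarshal a stack definition from an XML file using JAXB.
+     */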
+    public Stack readStackFromXMLFile (File f) throws Exception {      
+        JAXBContext jc = JAXBContext.newInstance(org.apache.ambari.common.rest.entities.Stack.class);
+        Unmarshaller u = jc.createUnmarshaller();
+        Stack bp = (Stack)u.unmarshal(f);
+        return bp;
+    }
+    
+    public Stack readStackFromJSONFile (File f) throws Exception {   
+        JSONJAXBContext jsonContext = 
+                new JSONJAXBContext("org.apache.ambari.common.rest.entities");
+        InputStream in = new FileInputStream(f.getAbsoluteFile());
+        try {
+          JSONUnmarshaller um = jsonContext.createJSONUnmarshaller();
+          Stack stack = um.unmarshalFromJSON(in, Stack.class);
+          return stack;
+        } catch (JAXBException je) {
+          throw new IOException("Can't parse " + f.getAbsolutePath(), je);
+        } finally {
+          // Always close the input stream, even when unmarshalling fails.
+          in.close();
+        }
+    }
+}
\ No newline at end of file
diff --git a/client/src/main/java/org/apache/ambari/client/StackGet.java b/client/src/main/java/org/apache/ambari/client/StackGet.java
new file mode 100644
index 0000000..342554c
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/client/StackGet.java
@@ -0,0 +1,156 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.client;
+
+import java.net.URI;
+
+import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.UriBuilder;
+
+import org.apache.ambari.common.rest.entities.Stack;
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.GnuParser;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.OptionBuilder;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.ParseException;
+
+import com.sun.jersey.api.client.Client;
+import com.sun.jersey.api.client.ClientResponse;
+import com.sun.jersey.api.client.WebResource;
+import com.sun.jersey.api.client.config.ClientConfig;
+import com.sun.jersey.api.client.config.DefaultClientConfig;
+
+public class StackGet extends Command {
+
+    String[] args = null;
+    Options options = null;
+   
+    CommandLine line;
+    
+    public StackGet() {
+    }
+    
+    public StackGet (String [] args) throws Exception {  
+        /*
+         * Build options for stack get
+         */
+        this.args = args;
+        addOptions();
+    }
+    
+    public void printUsage () {
+        HelpFormatter formatter = new HelpFormatter();
+        formatter.printHelp( "ambari stack get", this.options);
+    }
+    
+    public void addOptions () {
+             
+        Option help = new Option( "help", "Help" );
+        
+        OptionBuilder.withArgName("stack_name");
+        OptionBuilder.isRequired();
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription( "Name of the stack");
+        Option name = OptionBuilder.create( "name" );
+        
+        OptionBuilder.withArgName("revision");
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription( "Revision of the stack; if not specified, the latest revision is used");
+        Option revision = OptionBuilder.create( "revision" );
+        
+        OptionBuilder.withArgName("file_path");
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription( "Local file path to store the retrieved stack");
+        Option file = OptionBuilder.create( "file" );
+        
+        
+        this.options = new Options();
+        options.addOption(name);
+        options.addOption(revision);
+        options.addOption(file);
+   
+        options.addOption(help);
+    }
+    
+    public void parseCommandLine() {
+     
+        // create the parser
+        CommandLineParser parser = new GnuParser();
+        try {
+            // parse the command line arguments
+            line = parser.parse(this.options, this.args );
+            
+            if (line.hasOption("help")) {
+                printUsage();
+                System.exit(0);
+            }
+        }
+        catch( ParseException exp ) {
+            // oops, something went wrong
+            System.err.println( "Command parsing failed. Reason: <" + exp.getMessage()+">\n" );
+            printUsage();
+            System.exit(-1);
+        } 
+    }
+    
+    private static URI getBaseURI() {
+        return UriBuilder.fromUri(
+                "http://localhost:4080/rest/").build();
+    }
+    
+    
+    public void run() throws Exception {
+        /* 
+         * Parse the command line to get the command line arguments
+         */
+        parseCommandLine();
+        
+        ClientConfig config = new DefaultClientConfig();
+        Client client = Client.create(config);
+        WebResource service = client.resource(getBaseURI());
+        String stackName = line.getOptionValue("name");
+        String file_path = line.getOptionValue("file");
+        
+        /*
+         * Get stack
+         */
+        ClientResponse response;
+        if (line.hasOption("revision")) {
+            response = service.path("stacks/"+stackName)
+                   .queryParam("revision", line.getOptionValue("revision"))
+                   .accept(MediaType.APPLICATION_JSON).type(MediaType.APPLICATION_JSON).get(ClientResponse.class);
+        } else {
+            response = service.path("stacks/"+stackName)
+                    .accept(MediaType.APPLICATION_JSON).type(MediaType.APPLICATION_JSON).get(ClientResponse.class);
+        }        
+        if (response.getStatus() != 200) { 
+            System.err.println("Stack get command failed. Reason [Code: <"+response.getStatus()+">, Message: <"+response.getHeaders().getFirst("ErrorMessage")+">]");
+            System.exit(-1);
+        }
+        
+        /* 
+         * Retrieve the stack from the response
+         */
+        Stack bp = response.getEntity(Stack.class);
+        
+        printStack(bp, file_path);
+    }
+}
diff --git a/client/src/main/java/org/apache/ambari/client/StackHistory.java b/client/src/main/java/org/apache/ambari/client/StackHistory.java
new file mode 100644
index 0000000..3944088
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/client/StackHistory.java
@@ -0,0 +1,152 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.client;
+
+import java.net.URI;
+import java.net.URL;
+import java.util.List;
+
+import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.UriBuilder;
+
+import org.apache.ambari.common.rest.entities.StackInformation;
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.GnuParser;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.OptionBuilder;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.ParseException;
+
+import com.sun.jersey.api.client.Client;
+import com.sun.jersey.api.client.ClientResponse;
+import com.sun.jersey.api.client.GenericType;
+import com.sun.jersey.api.client.WebResource;
+import com.sun.jersey.api.client.config.ClientConfig;
+import com.sun.jersey.api.client.config.DefaultClientConfig;
+
+public class StackHistory extends Command {
+
+    String[] args = null;
+    Options options = null;
+    
+    String urlPath = "/stacks";
+    URL resourceURL = null;
+    CommandLine line;
+    
+    public StackHistory() {
+    }
+    
+    public StackHistory (String [] args) throws Exception {  
+        /*
+         * Build options for stack history
+         */
+        this.args = args;
+        addOptions();
+        this.resourceURL = new URL(this.baseURLString + this.urlPath);
+    }
+    
+    public void printUsage () {
+        HelpFormatter formatter = new HelpFormatter();
+        formatter.printHelp( "ambari stack history", this.options);
+    }
+    
+    public void addOptions () {
+             
+        Option help = new Option( "help", "Help" );
+        Option tree = new Option( "tree", "tree representation" );
+        
+        OptionBuilder.withArgName("stack_name");
+        OptionBuilder.isRequired();
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription( "Name of the stack");
+        Option name = OptionBuilder.create( "name" );
+        
+        this.options = new Options();  
+        options.addOption( name );
+        options.addOption( tree );
+        options.addOption(help);
+    }
+    
+    public void parseCommandLine() {
+     
+        // create the parser
+        CommandLineParser parser = new GnuParser();
+        try {
+            // parse the command line arguments
+            line = parser.parse(this.options, this.args );
+            
+            if (line.hasOption("help")) {
+                printUsage();
+                System.exit(0);
+            }
+        }
+        catch( ParseException exp ) {
+            // oops, something went wrong
+            System.err.println( "Command parsing failed. Reason: <" + exp.getMessage()+">\n" );
+            printUsage();
+            System.exit(-1);
+        } 
+    }
+    
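+    // Base URI of the Ambari controller's REST API (currently hardcoded).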
+    private static URI getBaseURI() {
+        return UriBuilder.fromUri(
+                "http://localhost:4080/rest/").build();
+    }
+    
+    
+    public void run() throws Exception {
+        /* 
+         * Parse the command line to get the command line arguments
+         */
+        parseCommandLine();
+        
+        ClientConfig config = new DefaultClientConfig();
+        Client client = Client.create(config);
+        WebResource service = client.resource(getBaseURI());
+        String stackName = line.getOptionValue("name");
+        boolean tree = line.hasOption("tree");
+          
+        /*
+         * Get stack revisions
+         */
+        ClientResponse response = service.path("stacks/"+stackName+"/revisions")
+                .accept(MediaType.APPLICATION_JSON).type(MediaType.APPLICATION_JSON).get(ClientResponse.class);
+        if (response.getStatus() == 404) { 
+            System.out.println("Stack ["+stackName+"] does not exist");
+            System.exit(-1);
+        }
+        
+        if (response.getStatus() == 204) {
+            System.out.println("No revisions available for Stack ["+stackName+"]");
+            System.exit(0);
+        }
+        
+        if (response.getStatus() != 200) { 
+            System.err.println("Stack history command failed. Reason [Code: <"+response.getStatus()+">, Message: <"+response.getHeaders().getFirst("ErrorMessage")+">]");
+            System.exit(-1);
+        }
+        /* 
+         * Retrieve the stack Information list from the response
+         */
+        List<StackInformation> stackInfos = response.getEntity(new GenericType<List<StackInformation>>(){});
+        for (StackInformation stackInfo : stackInfos) {
+            printStackInformation(service, stackInfo, tree);
+        }
+    }
+}
\ No newline at end of file
diff --git a/client/src/main/java/org/apache/ambari/client/StackList.java b/client/src/main/java/org/apache/ambari/client/StackList.java
new file mode 100644
index 0000000..e46350c
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/client/StackList.java
@@ -0,0 +1,163 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.client;
+
+import java.net.URI;
+import java.util.List;
+
+import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.UriBuilder;
+
+import org.apache.ambari.common.rest.entities.Stack;
+import org.apache.ambari.common.rest.entities.StackInformation;
+import org.apache.commons.cli.CommandLine;
+import org.apache.commons.cli.CommandLineParser;
+import org.apache.commons.cli.GnuParser;
+import org.apache.commons.cli.HelpFormatter;
+import org.apache.commons.cli.Option;
+import org.apache.commons.cli.OptionBuilder;
+import org.apache.commons.cli.Options;
+import org.apache.commons.cli.ParseException;
+
+import com.sun.jersey.api.client.Client;
+import com.sun.jersey.api.client.ClientResponse;
+import com.sun.jersey.api.client.GenericType;
+import com.sun.jersey.api.client.WebResource;
+import com.sun.jersey.api.client.config.ClientConfig;
+import com.sun.jersey.api.client.config.DefaultClientConfig;
+
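+/**
+ * CLI command that lists all stacks, or a single named stack when the
+ * -name option is supplied.
+ */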
+public class StackList extends Command {
+
+    String[] args = null;
+    Options options = null;
+   
+    CommandLine line;
+    
+    public StackList() {
+    }
+    
+    public StackList (String [] args) throws Exception {  
+        /*
+         * Build options for stack list
+         */
+        this.args = args;
+        addOptions();
+    }
+    
+    public void printUsage () {
+        HelpFormatter formatter = new HelpFormatter();
+        formatter.printHelp( "ambari stack list", this.options);
+    }
+    
+    public void addOptions () {
+             
+        Option help = new Option( "help", "Help" );
+        Option tree = new Option( "tree", "tree representation" );
+        
+        OptionBuilder.withArgName("name");
+        OptionBuilder.hasArg();
+        OptionBuilder.withDescription( "Name of the stack");
+        Option name = OptionBuilder.create( "name" );
+        
+        this.options = new Options();
+        options.addOption(name);
+        options.addOption(tree);
+        options.addOption(help);
+    }
+    
+    public void parseCommandLine() {
+     
+        // create the parser
+        CommandLineParser parser = new GnuParser();
+        try {
+            // parse the command line arguments
+            line = parser.parse(this.options, this.args );
+            
+            if (line.hasOption("help")) {
+                printUsage();
+                System.exit(0);
+            }
+        }
+        catch( ParseException exp ) {
+            // oops, something went wrong
+            System.err.println( "Command parsing failed. Reason: <" + exp.getMessage()+">\n" );
+            printUsage();
+            System.exit(-1);
+        } 
+    }
+    
+    private static URI getBaseURI() {
+        return UriBuilder.fromUri(
+                "http://localhost:4080/rest/").build();
+    }
+    
+    
+    public void run() throws Exception {
+        /* 
+         * Parse the command line to get the command line arguments
+         */
+        parseCommandLine();
+        
+        ClientConfig config = new DefaultClientConfig();
+        Client client = Client.create(config);
+        WebResource service = client.resource(getBaseURI());
+        String name = line.getOptionValue("name");
+        boolean tree = line.hasOption("tree");
+        
+        /*
+         * Get stack 
+         * TODO: Should the stack-not-found case be silently ignored?
+         */
+        if (name != null) {
+            ClientResponse response = service.path("stacks/"+name)
+                    .accept(MediaType.APPLICATION_JSON).type(MediaType.APPLICATION_JSON).get(ClientResponse.class);
+            if (response.getStatus() != 404 && response.getStatus() != 200) { 
+                System.err.println("Stack list command failed. Reason [Code: <"+response.getStatus()+">, Message: <"+response.getHeaders().getFirst("ErrorMessage")+">]");
+                System.exit(-1);
+            }
+            if (response.getStatus() == 404) {
+                System.exit(0);
+            }
+            /* 
+             * Retrieve the stack from the response
+             */
+            Stack stack = response.getEntity(Stack.class);
+            printStackInformation (service, stack, tree);
+           
+        } else {
+            ClientResponse response = service.path("stacks")
+                    .accept(MediaType.APPLICATION_JSON).type(MediaType.APPLICATION_JSON).get(ClientResponse.class);
+            if (response.getStatus() != 200 && response.getStatus() != 204) { 
+                System.err.println("Stack list command failed. Reason [Code: <"+response.getStatus()+">, Message: <"+response.getHeaders().getFirst("ErrorMessage")+">]");
+                System.exit(-1);
+            }
+            if (response.getStatus() == 204) {
+                System.exit(0);
+            }
+            
+            /* 
+             * Retrieve the stack Information list from the response
+             */
+            List<StackInformation> stackInfos = response.getEntity(new GenericType<List<StackInformation>>(){});
+            for (StackInformation stackInfo : stackInfos) {
+                printStackInformation(service, stackInfo, tree);
+            }
+        }
+        
+    }
+}
\ No newline at end of file
diff --git a/client/src/main/java/org/apache/ambari/common/rest/agent/Action.java b/client/src/main/java/org/apache/ambari/common/rest/agent/Action.java
new file mode 100644
index 0000000..86c3b40
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/rest/agent/Action.java
@@ -0,0 +1,207 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.common.rest.agent;
+
+import java.util.concurrent.atomic.AtomicLong;
+
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.bind.annotation.XmlType;
+import javax.xml.bind.annotation.adapters.XmlAdapter;
+
+@XmlRootElement
+@XmlAccessorType(XmlAccessType.FIELD)
+@XmlType(name = "", propOrder = {})
+public class Action {
+  @XmlElement
+  public Kind kind;
+  @XmlElement
+  public String clusterId;
+  @XmlElement
+  public String user;
+  @XmlElement
+  public String id;
+  @XmlElement
+  public String component;
+  @XmlElement
+  public String role;
+  @XmlElement
+  public Signal signal;
+  @XmlElement
+  public Command command;
+  @XmlElement
+  public Command cleanUpCommand;
+  @XmlElement
+  public long clusterDefinitionRevision;
+  @XmlElement
+  public String workDirComponent;
+  @XmlElement
+  public ConfigFile file;
+  
+  private static AtomicLong globalId = new AtomicLong();
+  
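+  /**
+   * Every Action is stamped with a JVM-wide unique, monotonically increasing
+   * id so that results reported back by the agent can be correlated with it.
+   */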
+  public Action() {
+    this.id = Long.toString(globalId.incrementAndGet());
+  }
+  
+  public Kind getKind() {
+    return kind;
+  }
+  
+  public void setKind(Kind kind) {
+    this.kind = kind;
+  }  
+
+  public String getClusterId() {
+    return clusterId;
+  }
+  
+  public void setClusterId(String clusterId) {
+    this.clusterId = clusterId;
+  }
+  
+  public String getUser() {
+    return user;
+  }
+  
+  public void setUser(String user) {
+    this.user = user;
+  }
+  
+  public String getId() {
+    return id;
+  }
+  
+  public void setId(String id) {
+    this.id = id;
+  }
+  
+  public String getComponent() {
+    return component;
+  }
+  
+  public String getRole() {
+    return role;
+  }
+  
+  public void setComponent(String component) {
+    this.component = component;
+  }
+  
+  public void setRole(String role) {
+    this.role = role;
+  }
+  
+  public void setWorkDirectoryComponent(String workDirComponent) {
+    this.workDirComponent = workDirComponent;
+  }
+  
+  public String getWorkDirectoryComponent() {
+    return workDirComponent;
+  }
+  
+  public Signal getSignal() {
+    return signal;
+  }
+  
+  public void setSignal(Signal signal) {
+    this.signal = signal;
+  }
+  
+  public Command getCommand() {
+    return command;
+  }
+  
+  public void setCommand(Command command) {
+    this.command = command;
+  }
+  
+  public Command getCleanUpCommand() {
+    return cleanUpCommand;
+  }
+  
+  public void setCleanUpCommand(Command cleanUpCommand) {
+    this.cleanUpCommand = cleanUpCommand;  
+  }
+  
+  public long getClusterDefinitionRevision() {
+    return this.clusterDefinitionRevision;
+  }
+  
+  public void setClusterDefinitionRevision(long clusterDefinitionRevision) {
+    this.clusterDefinitionRevision = clusterDefinitionRevision;
+  }
+  
+  public ConfigFile getFile() {
+    return this.file;
+  }
+  
+  public void setFile(ConfigFile file) {
+    this.file = file;
+  }
+  
+  public static enum Kind {
+    RUN_ACTION, START_ACTION, STOP_ACTION, STATUS_ACTION, 
+    CREATE_STRUCTURE_ACTION, DELETE_STRUCTURE_ACTION, WRITE_FILE_ACTION,
+    INSTALL_AND_CONFIG_ACTION, NO_OP_ACTION;
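+    /** JAXB adapter that (un)marshals a Kind by its enum constant name. */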
+    public static class KindAdaptor extends XmlAdapter<String, Kind> {
+      @Override
+      public String marshal(Kind obj) throws Exception {
+        return obj.toString();
+      }
+
+      @Override
+      public Kind unmarshal(String str) throws Exception {
+        for (Kind j : Kind.class.getEnumConstants()) {
+          if (j.toString().equals(str)) {
+            return j;
+          }
+        }
+        throw new Exception("Can't convert " + str + " to "
+          + Kind.class.getName());
+      }
+
+    }
+  }
+
+  public static enum Signal {
+    TERM, KILL;
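+    /** JAXB adapter that (un)marshals a Signal by its enum constant name. */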
+    public static class SignalAdaptor extends XmlAdapter<String, Signal> {
+      @Override
+      public String marshal(Signal obj) throws Exception {
+        return obj.toString();
+      }
+
+      @Override
+      public Signal unmarshal(String str) throws Exception {
+        for (Signal j : Signal.class.getEnumConstants()) {
+          if (j.toString().equals(str)) {
+            return j;
+          }
+        }
+        throw new Exception("Can't convert " + str + " to "
+          + Signal.class.getName());
+      }
+
+    }
+  }
+}
diff --git a/client/src/main/java/org/apache/ambari/common/rest/agent/ActionResult.java b/client/src/main/java/org/apache/ambari/common/rest/agent/ActionResult.java
new file mode 100644
index 0000000..101724a
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/rest/agent/ActionResult.java
@@ -0,0 +1,121 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.common.rest.agent;
+
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.bind.annotation.XmlType;
+
+import org.apache.ambari.common.rest.agent.Action.Kind;
+
+/**
+ * 
+ * Data model for reporting server related actions.
+ * 
+ */
+@XmlRootElement
+@XmlAccessorType(XmlAccessType.FIELD)
+@XmlType(name = "", propOrder = {})
+public class ActionResult {
+  @XmlElement
+  private String clusterId;
+  @XmlElement
+  private String id;
+  @XmlElement
+  private Kind kind;
+  @XmlElement
+  private CommandResult commandResult;
+  @XmlElement
+  private CommandResult cleanUpCommandResult;
+  @XmlElement
+  private String component;
+  @XmlElement
+  private String role;
+  @XmlElement
+  private long clusterDefinitionRevision;
+  @XmlElement
+  private String stackRevision;
+
+  public String getClusterId() {
+    return clusterId;
+  }
+  
+  public void setClusterId(String clusterId) {
+    this.clusterId = clusterId;
+  }
+  
+  public String getId() {
+    return id;
+  }
+  
+  public void setId(String id) {
+    this.id = id;
+  }
+  
+  public Kind getKind() {
+    return kind;
+  }
+  
+  public void setKind(Kind kind) {
+    this.kind = kind;
+  }
+  
+  public CommandResult getCommandResult() {
+    return commandResult;
+  }
+  
+  public void setCommandResult(CommandResult commandResult) {
+    this.commandResult = commandResult;
+  }
+
+  public CommandResult getCleanUpCommandResult() {
+    return cleanUpCommandResult;  
+  }
+  
+  public void setCleanUpResult(CommandResult cleanUpResult) {
+    this.cleanUpCommandResult = cleanUpResult;
+  }
+  
+  public String getComponent() {
+    return this.component;
+  }
+  
+  public void setComponent(String component) {
+    this.component = component;
+  }
+
+  public String getRole() {
+    return role;
+  }  
+  
+  public void setRole(String role) {
+    this.role = role;
+  }
+  
+  public long getClusterDefinitionRevision() {
+    return clusterDefinitionRevision;
+  }
+  
+  public void setClusterDefinitionRevision(long clusterDefinitionRevision) {
+    this.clusterDefinitionRevision = clusterDefinitionRevision;
+  }
+}
diff --git a/common/src/main/java/org/apache/hms/common/entity/manifest/NodesManifest.java b/client/src/main/java/org/apache/ambari/common/rest/agent/ActionResults.java
old mode 100755
new mode 100644
similarity index 62%
rename from common/src/main/java/org/apache/hms/common/entity/manifest/NodesManifest.java
rename to client/src/main/java/org/apache/ambari/common/rest/agent/ActionResults.java
index 68c508d..3280b2d
--- a/common/src/main/java/org/apache/hms/common/entity/manifest/NodesManifest.java
+++ b/client/src/main/java/org/apache/ambari/common/rest/agent/ActionResults.java
@@ -16,30 +16,31 @@
  * limitations under the License.
  */
 
-package org.apache.hms.common.entity.manifest;
+package org.apache.ambari.common.rest.agent;
 
+import java.util.ArrayList;
 import java.util.List;
 
 import javax.xml.bind.annotation.XmlAccessType;
 import javax.xml.bind.annotation.XmlAccessorType;
-import javax.xml.bind.annotation.XmlElement;
-import javax.xml.bind.annotation.XmlRootElement;
-import javax.xml.bind.annotation.XmlType;
 
-import org.apache.hms.common.entity.manifest.Node;
-
-@XmlAccessorType(XmlAccessType.FIELD) 
-@XmlType(propOrder = { "roles" })
-@XmlRootElement
-public class NodesManifest extends Manifest {
-  @XmlElement
-  public List<Role> roles;
+@XmlAccessorType(XmlAccessType.FIELD)
+public class ActionResults {
+  public List<ActionResult> actionResults;
   
-  public List<Role> getRoles() {
-    return this.roles;
+  public List<ActionResult> getActionResults() {
+    return actionResults;
   }
   
-  public void setNodes(List<Role> roles) {
-    this.roles = roles;
+  public void setActionResults(List<ActionResult> actionResults) {
+    this.actionResults = actionResults;
   }
+  
+  public void add(ActionResult actionResult) {
+    if(this.actionResults == null) {
+      this.actionResults = new ArrayList<ActionResult>();
+    }
+    this.actionResults.add(actionResult);
+  }
+
 }
diff --git a/common/src/main/java/org/apache/hms/common/entity/RestSource.java b/client/src/main/java/org/apache/ambari/common/rest/agent/Actions.java
old mode 100755
new mode 100644
similarity index 64%
rename from common/src/main/java/org/apache/hms/common/entity/RestSource.java
rename to client/src/main/java/org/apache/ambari/common/rest/agent/Actions.java
index 8a5e80c..3227742
--- a/common/src/main/java/org/apache/hms/common/entity/RestSource.java
+++ b/client/src/main/java/org/apache/ambari/common/rest/agent/Actions.java
@@ -16,20 +16,30 @@
  * limitations under the License.
  */
 
-package org.apache.hms.common.entity;
+package org.apache.ambari.common.rest.agent;
+
+import java.util.ArrayList;
+import java.util.List;
 
 import javax.xml.bind.annotation.XmlAccessType;
 import javax.xml.bind.annotation.XmlAccessorType;
-import javax.xml.bind.annotation.XmlRootElement;
-import javax.xml.bind.annotation.XmlType;
 
-/**
- * Base class for HMS Rest API
- *
- */
-@XmlRootElement
-@XmlAccessorType(XmlAccessType.FIELD) 
-@XmlType(name="", propOrder = {})
-public abstract class RestSource {
-
+@XmlAccessorType(XmlAccessType.FIELD)
+public class Actions {
+  public List<Action> actions;
+  
+  public List<Action> getActions() {
+    return actions;
+  }
+  
+  public void setActions(List<Action> actions) {
+    this.actions = actions;
+  }
+  
+  public void add(Action action) {
+    if(this.actions == null) {
+      this.actions = new ArrayList<Action>();
+    }
+    this.actions.add(action);
+  }
 }
diff --git a/client/src/main/java/org/apache/ambari/common/rest/agent/AgentRoleState.java b/client/src/main/java/org/apache/ambari/common/rest/agent/AgentRoleState.java
new file mode 100644
index 0000000..bc57cfb
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/rest/agent/AgentRoleState.java
@@ -0,0 +1,114 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.common.rest.agent;
+
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.bind.annotation.XmlType;
+import javax.xml.bind.annotation.adapters.XmlAdapter;
+
+@XmlRootElement
+@XmlAccessorType(XmlAccessType.FIELD)
+@XmlType(name = "", propOrder = {"clusterId", "clusterDefinitionRevision", 
+    "componentName", "roleName", "serverStatus"})
+public class AgentRoleState {
+  @XmlElement
+  private long clusterDefinitionRevision;
+  @XmlElement
+  private String clusterId;
+  @XmlElement
+  private String componentName;
+  @XmlElement
+  private String roleName;
+  @XmlElement
+  private State serverStatus;
+  
+  public String getClusterId() {
+    return clusterId;
+  }
+  
+  public long getClusterDefinitionRevision() {
+    return clusterDefinitionRevision;
+  }
+  
+  public String getComponentName() {
+    return componentName;
+  }
+  
+  public String getRoleName() {
+    return roleName;
+  }
+  
+  public State getServerStatus() {
+    return serverStatus;
+  }
+  
+  public void setClusterId(String clusterId) {
+    this.clusterId = clusterId;
+  }
+  
+  public void setClusterDefinitionRevision(long clusterDefinitionRevision) {
+    this.clusterDefinitionRevision = clusterDefinitionRevision;    
+  }
+  
+  public void setComponentName(String componentName) {
+    this.componentName = componentName;
+  }
+  
+  public void setRoleName(String roleName) {
+    this.roleName = roleName;
+  }
+  
+  public void setServerStatus(State serverStatus) {
+    this.serverStatus = serverStatus;
+  }
+  
+  public static enum State {
+    START, STARTING, STARTED, STOP, STOPPING, STOPPED;
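+    /** JAXB adapter that (un)marshals a State by its enum constant name. */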
+    public static class ServerStateAdaptor extends XmlAdapter<String, State> {
+      @Override
+      public String marshal(State obj) throws Exception {
+        return obj.toString();
+      }
+
+      @Override
+      public State unmarshal(String str) throws Exception {
+        for (State j : State.class.getEnumConstants()) {
+          if (j.toString().equals(str)) {
+            return j;
+          }
+        }
+        throw new Exception("Can't convert " + str + " to "
+          + State.class.getName());
+      }
+
+    }
+  }
+  
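+  /**
+   * Returns true when the identifying attributes (cluster id, cluster
+   * definition revision, component name and role name) of the two role
+   * states match; the serverStatus field is not compared.
+   */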
+  public boolean roleAttributesEqual(Object obj) {
+    if (!(obj instanceof AgentRoleState)) {
+      return false;
+    }
+    AgentRoleState agentObj = (AgentRoleState)obj;
+    return (clusterDefinitionRevision == 
+        agentObj.clusterDefinitionRevision &&
+        agentObj.clusterId.equals(clusterId) && 
+        agentObj.componentName.equals(componentName) &&
+        agentObj.roleName.equals(roleName)); 
+  }
+}
diff --git a/client/src/main/java/org/apache/ambari/common/rest/agent/Command.java b/client/src/main/java/org/apache/ambari/common/rest/agent/Command.java
new file mode 100644
index 0000000..68fc1a3
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/rest/agent/Command.java
@@ -0,0 +1,75 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.common.rest.agent;
+
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.bind.annotation.XmlType;
+
+/**
+ * 
+ * Data model for Ambari Controller to issue command to Ambari Agent.
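+ * A command consists of the script to execute, the parameters to pass to it,
+ * and the user to run it as.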
+ *
+ */
+@XmlRootElement
+@XmlAccessorType(XmlAccessType.FIELD)
+@XmlType(name = "", propOrder = {})
+public class Command {
+  public Command() {
+  }
+  
+  public Command(String user, String script, String[] param) {
+    this.script = script;
+    this.user = user;
+    this.param = param;
+  }
+  
+  @XmlElement
+  private String script;
+  @XmlElement
+  private String[] param;
+  @XmlElement
+  private String user;
+
+  public String getScript() {
+    return script;
+  }
+  
+  public void setScript(String script) {
+    this.script = script;
+  }
+
+  public String getUser() {
+    return user;
+  }
+  
+  public void setUser(String user) {
+    this.user = user;
+  }
+  
+  public String[] getParam() {
+    return this.param;
+  }
+  
+  public void setParam(String[] param) {
+    this.param = param;
+  }
+}
diff --git a/client/src/main/java/org/apache/ambari/common/rest/agent/CommandResult.java b/client/src/main/java/org/apache/ambari/common/rest/agent/CommandResult.java
new file mode 100644
index 0000000..151ac0d
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/rest/agent/CommandResult.java
@@ -0,0 +1,76 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.common.rest.agent;
+
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.bind.annotation.XmlType;
+
+/**
+ * 
+ * Data model for Ambari Agent to report the execution result of the command
+ * to Ambari controller.
+ *
+ */
+@XmlRootElement
+@XmlAccessorType(XmlAccessType.FIELD)
+@XmlType(name = "", propOrder = {})
+public class CommandResult {
+  public CommandResult() {
+  }
+  
+  public CommandResult(int exitCode, String output, String error) {
+    this.exitCode = exitCode;
+    this.output = output;
+    this.error = error;
+  }
+  
+  @XmlElement
+  private int exitCode;
+  @XmlElement
+  private String output;
+  @XmlElement
+  private String error;
+
+  public int getExitCode() {
+    return exitCode;
+  }
+  
+  public String getOutput() {
+    return this.output;
+  }
+  
+  public String getError() {
+    return this.error;
+  }
+  
+  public void setExitCode(int exitCode) {
+    this.exitCode = exitCode;
+  }
+  
+  public void setOutput(String output) {
+    this.output = output;
+  }
+  
+  public void setError(String error) {
+    this.error = error;
+  }
+}
diff --git a/client/src/main/java/org/apache/ambari/common/rest/agent/ConfigFile.java b/client/src/main/java/org/apache/ambari/common/rest/agent/ConfigFile.java
new file mode 100644
index 0000000..940ef3e
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/rest/agent/ConfigFile.java
@@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.common.rest.agent;
+
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlRootElement;
+
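+/**
+ * Data model for a configuration file the agent is asked to write: the file
+ * contents together with its path, owner, group, permission and umask.
+ */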
+@XmlRootElement
+@XmlAccessorType(XmlAccessType.FIELD)
+public class ConfigFile {
+  
+  public ConfigFile() {  
+  }
+  
+  public ConfigFile(String owner, String group, String permission, 
+      String path, String umask, String data) {
+    this.owner = owner;
+    this.group = group;
+    this.permission = permission;
+    this.path = path;
+    this.umask = umask;
+    this.data = data;
+  }
+  
+  @XmlElement
+  private String data;
+  @XmlElement
+  private String umask;
+  @XmlElement
+  private String path;
+  @XmlElement
+  private String owner;
+  @XmlElement
+  private String group;
+  @XmlElement
+  private String permission;
+  
+  public String getData() {
+    return data;
+  }
+  
+  public String getUmask() {
+    return umask;
+  }
+  
+  public String getPath() {
+    return path;
+  }
+  
+  public String getOwner() {
+    return owner;
+  }
+  
+  public String getGroup() {
+    return group;
+  }
+  
+  public String getPermission() {
+    return permission;
+  }
+  
+  public void setData(String data) {
+    this.data = data;
+  }
+  
+  public void setUmask(String umask) {
+    this.umask = umask;
+  }
+  
+  public void setPath(String path) {
+    this.path = path;
+  }
+  
+  public void setOwner(String owner) {
+    this.owner = owner;
+  }
+  
+  public void setGroup(String group) {
+    this.group = group;
+  }
+  
+  public void setPermission(String permission) {
+    this.permission = permission;
+  }
+}
diff --git a/common/src/main/java/org/apache/hms/common/entity/manifest/SoftwareManifest.java b/client/src/main/java/org/apache/ambari/common/rest/agent/ControllerResponse.java
old mode 100755
new mode 100644
similarity index 60%
rename from common/src/main/java/org/apache/hms/common/entity/manifest/SoftwareManifest.java
rename to client/src/main/java/org/apache/ambari/common/rest/agent/ControllerResponse.java
index 59dd83d..031e198
--- a/common/src/main/java/org/apache/hms/common/entity/manifest/SoftwareManifest.java
+++ b/client/src/main/java/org/apache/ambari/common/rest/agent/ControllerResponse.java
@@ -16,49 +16,53 @@
  * limitations under the License.
  */
 
-package org.apache.hms.common.entity.manifest;
+package org.apache.ambari.common.rest.agent;
 
 import java.util.List;
 
 import javax.xml.bind.annotation.XmlAccessType;
 import javax.xml.bind.annotation.XmlAccessorType;
-import javax.xml.bind.annotation.XmlAttribute;
 import javax.xml.bind.annotation.XmlElement;
 import javax.xml.bind.annotation.XmlRootElement;
 import javax.xml.bind.annotation.XmlType;
 
-@XmlAccessorType(XmlAccessType.FIELD) 
-@XmlType(name="", propOrder = {})
+/**
+ * 
+ * Controller to Agent response data model.
+ *
+ */
 @XmlRootElement
-public class SoftwareManifest extends Manifest {
-  @XmlAttribute
-  private String name;
-  @XmlAttribute
-  private String version;
+@XmlAccessorType(XmlAccessType.FIELD)
+@XmlType(name = "", propOrder = {})
+public class ControllerResponse {
   @XmlElement
-  private List<Role> roles;
-  
-  public String getName() {
-    return this.name;
+  public short responseId;
+  @XmlElement
+  public String clusterId;
+  @XmlElement
+  public List<Action> actions;
+
+  public short getResponseId() {
+    return responseId;
   }
   
-  public String getVersion() {
-    return this.version;
+  public void setResponseId(short responseId) {
+    this.responseId=responseId;
   }
   
-  public List<Role> getRoles() {
-    return this.roles;
+  public String getClusterId() {
+    return clusterId;
   }
   
-  public void setName(String name) {
-    this.name = name;
+  public void setClusterId(String clusterId) {
+    this.clusterId = clusterId;
   }
   
-  public void setVersion(String version) {
-    this.version = version;
+  public List<Action> getActions() {
+    return actions;
   }
   
-  public void setRoles(List<Role> roles) {
-    this.roles = roles;
+  public void setActions(List<Action> actions) {
+    this.actions = actions;
   }
 }
diff --git a/client/src/main/java/org/apache/ambari/common/rest/agent/HardwareProfile.java b/client/src/main/java/org/apache/ambari/common/rest/agent/HardwareProfile.java
new file mode 100644
index 0000000..2aacabc
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/rest/agent/HardwareProfile.java
@@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.common.rest.agent;
+
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.bind.annotation.XmlType;
+
+/**
+ * 
+ * Data model for Ambari Agent to send hardware profile to Ambari Controller.
+ *
+ */
+@XmlRootElement
+@XmlAccessorType(XmlAccessType.FIELD)
+@XmlType(name = "", propOrder = {})
+public class HardwareProfile {
+  @XmlElement
+  private int coreCount;
+  @XmlElement
+  private int diskCount;
+  @XmlElement
+  private long ramSize;
+  @XmlElement
+  private int cpuSpeed;
+  @XmlElement
+  private long netSpeed;
+  @XmlElement
+  private String cpuFlags;
+  
+  public int getCoreCount() {
+    return coreCount;
+  }
+  
+  public int getDiskCount() {
+    return diskCount;
+  }
+  
+  public long getRamSize() {
+    return ramSize;
+  }
+  
+  public int getCpuSpeed() {
+    return cpuSpeed;
+  }
+  
+  public long getNetSpeed() {
+    return netSpeed;
+  }
+  
+  public String getCpuFlags() {
+    return cpuFlags;
+  }
+  
+  public void setCoreCount(int coreCount) {
+    this.coreCount = coreCount;
+  }
+  
+  public void setDiskCount(int diskCount) {
+    this.diskCount = diskCount;
+  }
+  
+  public void setRamSize(long ramSize) {
+    this.ramSize = ramSize;
+  }
+  
+  public void setCpuSpeed(int cpuSpeed) {
+    this.cpuSpeed = cpuSpeed;
+  }
+  
+  public void setNetSpeed(long netSpeed) {
+    this.netSpeed = netSpeed;
+  }
+  
+  public void setCpuFlags(String cpuFlags) {
+    this.cpuFlags = cpuFlags;
+  }
+}
diff --git a/client/src/main/java/org/apache/ambari/common/rest/agent/HeartBeat.java b/client/src/main/java/org/apache/ambari/common/rest/agent/HeartBeat.java
new file mode 100644
index 0000000..4c6781a
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/rest/agent/HeartBeat.java
@@ -0,0 +1,131 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.common.rest.agent;
+
+import java.util.List;
+
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.bind.annotation.XmlType;
+
+
+/**
+ * 
+ * Data model for Ambari Agent to send heartbeat to Ambari Controller.
+ *
+ */
+@XmlRootElement
+@XmlAccessorType(XmlAccessType.FIELD)
+@XmlType(name = "", propOrder = {"responseId","timestamp", 
+    "hostname", "hardwareProfile", "installedRoleStates", "installScriptHash",
+    "actionResults", "firstContact", "idle"})
+public class HeartBeat {
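+  // responseId defaults to -1, marking a heartbeat sent before any
+  // controller response has been processed.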
+  @XmlElement
+  private short responseId = -1;
+  @XmlElement
+  private long timestamp;
+  @XmlElement
+  private String hostname;
+  @XmlElement
+  private HardwareProfile hardwareProfile;
+  @XmlElement
+  private List<AgentRoleState> installedRoleStates;
+  @XmlElement
+  private int installScriptHash;
+  @XmlElement
+  private List<ActionResult> actionResults;
+  @XmlElement
+  private boolean firstContact;
+  @XmlElement
+  private boolean idle;
+  
+  public short getResponseId() {
+    return responseId;
+  }
+  
+  public void setResponseId(short responseId) {
+    this.responseId=responseId;
+  }
+  
+  public long getTimestamp() {
+    return timestamp;
+  }
+  
+  public String getHostname() {
+    return hostname;
+  }
+  
+  public boolean getFirstContact() {
+    return firstContact;
+  }
+  
+  public boolean getIdle() {
+    return idle;
+  }
+  
+  public HardwareProfile getHardwareProfile() {
+    return hardwareProfile;
+  }
+  
+  public List<ActionResult> getActionResults() {
+    return actionResults;
+  }
+  
+  public List<AgentRoleState> getInstalledRoleStates() {
+    return installedRoleStates;
+  }
+  
+  public int getInstallScriptHash() {
+    return installScriptHash;
+  }
+  
+  public void setTimestamp(long timestamp) {
+    this.timestamp = timestamp;
+  }
+  
+  public void setHostname(String hostname) {
+    this.hostname = hostname;
+  }
+    
+  public void setActionResults(List<ActionResult> actionResults) {
+    this.actionResults = actionResults;
+  }
+
+  public void setHardwareProfile(HardwareProfile hardwareProfile) {
+    this.hardwareProfile = hardwareProfile;    
+  }
+  
+  public void setInstalledRoleStates(List<AgentRoleState> installedRoleStates) {
+    this.installedRoleStates = installedRoleStates;
+  }
+  
+  public void setFirstContact(boolean firstContact) {
+    this.firstContact = firstContact;
+  }
+  
+  public void setIdle(boolean idle) {
+    this.idle = idle;
+  }
+  
+  public void setInstallScriptHash(int hash) {
+    this.installScriptHash = hash;
+  }
+}
diff --git a/common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java b/client/src/main/java/org/apache/ambari/common/rest/agent/package.html
old mode 100755
new mode 100644
similarity index 68%
copy from common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java
copy to client/src/main/java/org/apache/ambari/common/rest/agent/package.html
index 5f23e2b..8fc8b9f
--- a/common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java
+++ b/client/src/main/java/org/apache/ambari/common/rest/agent/package.html
@@ -1,4 +1,4 @@
-/*
+<!-- 
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
  * distributed with this work for additional information
@@ -14,19 +14,17 @@
  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  * See the License for the specific language governing permissions and
  * limitations under the License.
- */
-
-package org.apache.hms.common.util;
-
-import java.io.PrintWriter;
-import java.io.StringWriter;
-
-public class ExceptionUtil {
-  public static String getStackTrace(Throwable t) {
-    StringWriter sw = new StringWriter();
-    PrintWriter pw = new PrintWriter(sw);
-    t.printStackTrace(pw);
-    pw.flush();
-    return sw.toString();
-  }
-}
+ -->
+ <html>
+ <head><title>Ambari Agent-Controller REST entities</title></head>
+ <body>
+ <p>
+ These are the entities that are used for the internal Ambari Agent to 
+ Controller communication.
+ </p>
+ <p>
+ Any new classes that are added must be manually registered in 
+ org.apache.ambari.controller.rest.agent.AgentJAXBContextResolver.
+ </p>
+ </body>
+ </html>
\ No newline at end of file
diff --git a/client/src/main/java/org/apache/ambari/common/rest/entities/ClusterDefinition.java b/client/src/main/java/org/apache/ambari/common/rest/entities/ClusterDefinition.java
new file mode 100644
index 0000000..9e21810
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/rest/entities/ClusterDefinition.java
@@ -0,0 +1,237 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.common.rest.entities;
+
+import java.util.List;
+
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.bind.annotation.XmlType;
+
+
+
+/**
+ * Definition of a cluster.
+ * 
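+ * <p>
+ * An illustrative (not normative) serialized form, assuming hypothetical
+ * stack and node names:
+ * <pre>
+ * &lt;cluster name="grid0" stackName="hadoop-stack" stackRevision="0"
+ *          goalState="ACTIVE" nodes="node001-node100"&gt;
+ *   &lt;enabledServices&gt;hdfs&lt;/enabledServices&gt;
+ * &lt;/cluster&gt;
+ * </pre>
+ * </p>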
+ */
+@XmlAccessorType(XmlAccessType.FIELD)
+@XmlType(name = "ClusterDefinition", propOrder = {
+    "enabledServices",
+    "roleToNodesMap"
+})
+@XmlRootElement(name = "cluster")
+public class ClusterDefinition {
+        
+    public static final String GOAL_STATE_ACTIVE = "ACTIVE";
+    public static final String GOAL_STATE_INACTIVE = "INACTIVE";
+    public static final String GOAL_STATE_ATTIC = "ATTIC";
+   
+    /**
+     * The name of the cluster.
+     */
+    @XmlAttribute
+    protected String name = null;
+    
+    /**
+     * Every cluster update creates a new revision, which is returned through
+     * this field. It can optionally be set during an update to the latest
+     * (currently checked out) revision of the cluster being updated; if it is,
+     * Ambari will reject the update when the latest revision of the cluster
+     * has changed in the background. If it is not specified, the update will
+     * overwrite the current latest revision.
+     */
+    @XmlAttribute
+    protected String revision = null;
+  
+    /**
+     * A user-facing comment describing what the cluster is intended for.
+     */
+    @XmlAttribute
+    protected String description = null;
+    
+    /**
+     * The name of the stack that defines the cluster.
+     */
+    @XmlAttribute
+    protected String stackName = null;
+    
+    /**
+     * The revision of the stack that this cluster is based on.
+     */
+    @XmlAttribute
+    protected String stackRevision = null;
+    
+    /**
+     * The goal state of the cluster. Valid states are:
+     * ACTIVE - deploy and start the cluster
+     * INACTIVE - the cluster should be stopped, but the nodes reserved
+     * ATTIC - the cluster's nodes should be released
+     */
+    @XmlAttribute
+    protected String goalState = null;
+    
+    /**
+     * The list of components that should be running if the cluster is ACTIVE.
+     */
+    @XmlElement
+    protected List<String> enabledServices = null;
+    
+    /**
+     * A node expression giving the entire set of nodes for this cluster.
+     */
+    @XmlAttribute
+    protected String nodes = null;
+
+    /**
+     * A map from roles to the nodes associated with each role.
+     */
+    @XmlElement
+    protected List<RoleToNodes> roleToNodesMap = null;
+    
+
+    /**
+     * @return the roleToNodesMap
+     */
+    public List<RoleToNodes> getRoleToNodesMap() {
+        return roleToNodesMap;
+    }
+
+    /**
+     * @param roleToNodesMap the roleToNodesMap to set
+     */
+    public void setRoleToNodesMap(List<RoleToNodes> roleToNodesMap) {
+        this.roleToNodesMap = roleToNodesMap;
+    }
+
+    /**
+     * @return the stackRevision
+     */
+    public String getStackRevision() {
+        return stackRevision;
+    }
+
+    /**
+     * @param stackRevision the stackRevision to set
+     */
+    public void setStackRevision(String stackRevision) {
+        this.stackRevision = stackRevision;
+    }
+
+    /**
+     * @return the name
+     */
+    public String getName() {
+            return name;
+    }
+
+    /**
+     * @param name the name to set
+     */
+    public void setName(String name) {
+            this.name = name;
+    }
+
+    /**
+     * @return the description
+     */
+    public String getDescription() {
+            return description;
+    }
+
+    /**
+     * @param description the description to set
+     */
+    public void setDescription(String description) {
+            this.description = description;
+    }
+
+    /**
+     * @return the stackName
+     */
+    public String getStackName() {
+            return stackName;
+    }
+
+    /**
+     * @param stackName the stackName to set
+     */
+    public void setStackName(String stackName) {
+            this.stackName = stackName;
+    }
+
+    /**
+     * @return the goalState
+     */
+    public String getGoalState() {
+            return goalState;
+    }
+
+    /**
+     * @param goalState the goalState to set
+     */
+    public void setGoalState(String goalState) {
+            this.goalState = goalState;
+    }
+
+    /**
+     * @return the enabledServices
+     */
+    public List<String> getEnabledServices() {
+            return enabledServices;
+    }
+
+    /**
+     * @param enabledServices the enabledServices to set
+     */
+    public void setEnabledServices(List<String> enabledServices) {
+            this.enabledServices = enabledServices;
+    }
+
+    /**
+     * @return the node range expression covering the cluster's nodes
+     */
+    public String getNodes() {
+            return nodes;
+    }
+
+    /**
+     * @param nodes the node range expression to set
+     */
+    public void setNodes(String nodes) {
+            this.nodes = nodes;
+    }
+    
+    
+    /**
+     * @return the revision
+     */
+    public String getRevision() {
+        return revision;
+    }
+
+    /**
+     * @param revision the revision to set
+     */
+    public void setRevision(String revision) {
+        this.revision = revision;
+    }
+
+}
diff --git a/client/src/main/java/org/apache/ambari/common/rest/entities/ClusterInformation.java b/client/src/main/java/org/apache/ambari/common/rest/entities/ClusterInformation.java
new file mode 100644
index 0000000..9cdbd51
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/rest/entities/ClusterInformation.java
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.common.rest.entities;
+
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.bind.annotation.XmlType;
+
+/**
+ * Combination of the cluster definition and state.
+ */
+@XmlAccessorType(XmlAccessType.FIELD)
+@XmlType(name = "ClusterInformation", propOrder = {
+    "definition",
+    "state"
+})
+@XmlRootElement(name = "ClusterInformation")
+public class ClusterInformation {
+
+   
+    @XmlElement
+    protected ClusterDefinition definition = null;
+    
+    @XmlElement
+    protected ClusterState state = null;
+
+    /**
+     * @return the definition
+     */
+    public ClusterDefinition getDefinition() {
+        return definition;
+    }
+
+    /**
+     * @param definition the definition to set
+     */
+    public void setDefinition(ClusterDefinition definition) {
+        this.definition = definition;
+    }
+
+    /**
+     * @return the state
+     */
+    public ClusterState getState() {
+        return state;
+    }
+
+    /**
+     * @param state the state to set
+     */
+    public void setState(ClusterState state) {
+        this.state = state;
+    }
+    
+}
diff --git a/client/src/main/java/org/apache/ambari/common/rest/entities/ClusterState.java b/client/src/main/java/org/apache/ambari/common/rest/entities/ClusterState.java
new file mode 100644
index 0000000..55e8306
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/rest/entities/ClusterState.java
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.common.rest.entities;
+
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.bind.annotation.XmlSchemaType;
+import javax.xml.datatype.XMLGregorianCalendar;
+
+/**
+ * The state of a cluster.
+ * 
+ * <p>
+ * The schema looks like:
+ * <pre>
+ * element ClusterState {
+ *   attribute state { text }
+ *   attribute creationTime { text }
+ *   attribute deployTime { text }
+ *   attribute lastUpdateTime { text }
+ *   attribute markForDeletionWhenInAttic { boolean }
+ * }
+ * </pre>
+ * </p>
+ */
+@XmlAccessorType(XmlAccessType.FIELD)
+@XmlRootElement
+public class ClusterState {
+        
+    /*
+     *  Cluster is deployed w/ Hadoop stack and required services are up
+     */
+    public static final String CLUSTER_STATE_ACTIVE = "ACTIVE";
+    
+    /* 
+     * Cluster nodes are reserved but may not be deployed w/ stack. If deployed w/ stack
+     * then cluster services are down
+     */
+    public static final String CLUSTER_STATE_INACTIVE = "INACTIVE";
+    
+    /*
+     * No nodes are reserved for the cluster
+     */
+    public static final String CLUSTER_STATE_ATTIC = "ATTIC";
+    
+    @XmlAttribute(required = true)
+    protected String state;
+    @XmlAttribute(required = true)
+    @XmlSchemaType(name = "dateTime")
+    protected XMLGregorianCalendar creationTime;
+    @XmlAttribute
+    @XmlSchemaType(name = "dateTime")
+    protected XMLGregorianCalendar deployTime;
+    @XmlAttribute
+    @XmlSchemaType(name = "dateTime")
+    protected XMLGregorianCalendar lastUpdateTime;
+    @XmlAttribute(required = true)
+    protected boolean markForDeletionWhenInAttic = false;
+    
+    /**
+     * @return the markForDeletionWhenInAttic
+     */
+    public boolean isMarkForDeletionWhenInAttic() {
+        return markForDeletionWhenInAttic;
+    }
+
+    /**
+     * @param markForDeletionWhenInAttic the markForDeletionWhenInAttic to set
+     */
+    public void setMarkForDeletionWhenInAttic(boolean markForDeletionWhenInAttic) {
+        this.markForDeletionWhenInAttic = markForDeletionWhenInAttic;
+    }
+
+    /**
+     * @return the creationTime
+     */
+    public XMLGregorianCalendar getCreationTime() {
+        return creationTime;
+    }
+
+    /**
+     * @param creationTime the creationTime to set
+     */
+    public void setCreationTime(XMLGregorianCalendar creationTime) {
+        this.creationTime = creationTime;
+    }
+
+    /**
+     * @return the deployTime
+     */
+    public XMLGregorianCalendar getDeployTime() {
+        return deployTime;
+    }
+
+    /**
+     * @param deployTime the deployTime to set
+     */
+    public void setDeployTime(XMLGregorianCalendar deployTime) {
+        this.deployTime = deployTime;
+    }
+
+    /**
+     * @return the lastUpdateTime
+     */
+    public XMLGregorianCalendar getLastUpdateTime() {
+        return lastUpdateTime;
+    }
+
+    /**
+     * @param lastUpdateTime the lastUpdateTime to set
+     */
+    public void setLastUpdateTime(XMLGregorianCalendar lastUpdateTime) {
+        this.lastUpdateTime = lastUpdateTime;
+    }
+
+    /**
+     * @return the state
+     */
+    public String getState() {
+        return state;
+    }
+
+    /**
+     * @param state the state to set
+     */
+    public void setState(String state) {
+        this.state = state;
+    }
+
+}
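
A minimal usage sketch of this entity, assuming only the standard javax.xml.bind and javax.xml.datatype runtime; the ClusterStateExample wrapper class is an illustrative assumption, not part of the patch:

    import java.util.GregorianCalendar;
    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.Marshaller;
    import javax.xml.datatype.DatatypeFactory;
    import org.apache.ambari.common.rest.entities.ClusterState;

    public class ClusterStateExample {
        public static void main(String[] args) throws Exception {
            ClusterState state = new ClusterState();
            state.setState(ClusterState.CLUSTER_STATE_ACTIVE);
            // creationTime is a required dateTime attribute in the schema above
            state.setCreationTime(DatatypeFactory.newInstance()
                    .newXMLGregorianCalendar(new GregorianCalendar()));
            // Marshal to the XML form described in the class javadoc
            Marshaller m = JAXBContext.newInstance(ClusterState.class).createMarshaller();
            m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
            m.marshal(state, System.out);
        }
    }
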
diff --git a/client/src/main/java/org/apache/ambari/common/rest/entities/Component.java b/client/src/main/java/org/apache/ambari/common/rest/entities/Component.java
new file mode 100644
index 0000000..2f9e867
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/rest/entities/Component.java
@@ -0,0 +1,272 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.common.rest.entities;
+
+import java.util.List;
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.bind.annotation.XmlType;
+
+
+/**
+ * Metadata information about a given component.
+ */
+@XmlAccessorType(XmlAccessType.FIELD)
+@XmlType(name = "Component", propOrder = {
+    "definition",
+    "user_group",
+    "configuration",
+    "roles"
+})
+@XmlRootElement
+public class Component {
+
+    /**
+     * The name of the component.
+     */
+    @XmlAttribute(required = true)
+    private String name;
+    
+    /**
+     * The architecture of the tarball/rpm to install.
+     */
+    @XmlAttribute
+    private String architecture;
+    
+    /**
+     * The version of the tarball/rpm to install.
+     */
+    @XmlAttribute
+    private String version;
+    
+    /**
+     * The provider of the tarball/rpm to install.
+     */
+    @XmlAttribute
+    private String provider;
+    
+    /**
+     * The definition of the component including how to configure and run
+     * the component.
+     */
+    @XmlElement
+    private ComponentDefinition definition;
+    
+    /**
+     * Component user/group information
+     */
+    @XmlElement
+    private UserGroup user_group;
+    
+    /**
+     * @return the user_group
+     */
+    public UserGroup getUser_group() {
+        return user_group;
+    }
+
+    /**
+     * @param user_group the user_group to set
+     */
+    public void setUser_group(UserGroup user_group) {
+        this.user_group = user_group;
+    }
+
+    /**
+     * The configuration shared between the active roles of the component.
+     */
+    @XmlElement
+    private Configuration configuration;
+    
+    /**
+     * Specific configuration for each of the roles.
+     */
+    @XmlElement
+    private List<Role> roles;
+
+    public Component() {
+      // PASS
+    }
+
+    public Component(String name, String version, String architecture,
+                     String provider, ComponentDefinition definition,
+                     Configuration configuration, List<Role> roles, UserGroup user_group) {
+      this.name = name;
+      this.version = version;
+      this.architecture = architecture;
+      this.provider = provider;
+      this.definition = definition;
+      this.configuration = configuration;
+      this.roles = roles;
+      this.user_group = user_group;
+    }
+
+    /**
+     * Override this component's attributes with the non-null attributes of another.
+     * @param other the overriding component
+     */
+    public void mergeInto(Component other) {
+      if (other.architecture != null) {
+        this.architecture = other.architecture;
+      }
+      if (other.configuration != null) {
+        this.configuration = other.configuration;
+      }
+      if (other.definition != null) {
+        // Guard against an NPE when this component has no definition of its own
+        if (this.definition == null) {
+          this.definition = other.definition;
+        } else {
+          this.definition.mergeInto(other.definition);
+        }
+      }
+      if (other.name != null) {
+        this.name = other.name;
+      }
+      if (other.provider != null) {
+        this.provider = other.provider;
+      }
+      if (other.roles != null) {
+        this.roles = other.roles;
+      }
+      if (other.version != null) {
+        this.version = other.version;
+      }
+      if (other.user_group != null) {
+        this.user_group = other.user_group;
+      }
+    }
+    
+    /**
+     * Gets the value of the name property.
+     * 
+     * @return
+     *     possible object is
+     *     {@link String }
+     *     
+     */
+    public String getName() {
+        return name;
+    }
+
+    /**
+     * Sets the value of the name property.
+     * 
+     * @param value
+     *     allowed object is
+     *     {@link String }
+     *     
+     */
+    public void setName(String value) {
+        this.name = value;
+    }
+
+    /**
+     * Get the roles property.
+     * @return the roles of the component
+     */
+    public List<Role> getRoles() {
+      return roles;
+    }
+    
+    /**
+     * Set the roles property.
+     * @param roles the roles to set
+     */
+    public void setRoles(List<Role> roles) {
+      this.roles = roles;
+    }
+
+    /**
+     * Get the architecture of the package to install.
+     * @return the name of the architecture.
+     */
+    public String getArchitecture() {
+      return architecture;
+    }
+    
+    /**
+     * Set the architecture of the package to install.
+     * @param value the new architecture
+     */
+    public void setArchitecture(String value) {
+      architecture = value;
+    }
+    
+    /**
+     * Get the name of the component definition
+     * @return the component definition name
+     */
+    public ComponentDefinition getDefinition() {
+      return definition;
+    }
+    
+    /**
+     * Set the name of the component definition
+     * @param value the new name
+     */
+    public void setDefinition(ComponentDefinition value) {
+      definition = value;
+    }
+
+    /**
+     * Get the version of the package to install
+     * @return the version string
+     */
+    public String getVersion() {
+      return version;
+    }
+    
+    /**
+     * Set the version of the package to install
+     * @param version the new version
+     */
+    public void setVersion(String version) {
+      this.version = version;
+    }
+    
+    /**
+     * Get the provider of the package to install
+     * @return the provider name
+     */
+    public String getProvider() {
+      return provider;
+    }
+    
+    /**
+     * Set the provider of the package to install
+     * @param value the new provider
+     */
+    public void setProvider(String value) {
+      provider = value;
+    }
+    
+    /**
+     * Get the configuration for all of the active roles.
+     * @return the configuration
+     */
+    public Configuration getConfiguration() {
+      return configuration;
+    }
+    
+    /**
+     * Set the configuration for all of the active roles
+     * @param conf the configuration
+     */
+    public void setConfiguration(Configuration conf) {
+      configuration = conf;
+    }
+}
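
A hedged sketch of the mergeInto semantics above: a child stack's partial Component overrides only the fields it actually sets. The component name and version values here are invented for illustration:

    import org.apache.ambari.common.rest.entities.Component;

    public class ComponentMergeExample {
        public static void main(String[] args) {
            // Base component as a parent stack might define it (values are illustrative)
            Component base = new Component();
            base.setName("hdfs");
            base.setVersion("0.20.205");
            base.setArchitecture("x86_64");

            // Override that sets only the version; every other field stays null
            Component override = new Component();
            override.setVersion("0.20.206");

            base.mergeInto(override);
            // Prints hdfs@0.20.206 (x86_64): version replaced, architecture kept
            System.out.println(base.getName() + "@" + base.getVersion()
                    + " (" + base.getArchitecture() + ")");
        }
    }
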
diff --git a/client/src/main/java/org/apache/ambari/common/rest/entities/ComponentDefinition.java b/client/src/main/java/org/apache/ambari/common/rest/entities/ComponentDefinition.java
new file mode 100644
index 0000000..2d7c110
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/rest/entities/ComponentDefinition.java
@@ -0,0 +1,157 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.common.rest.entities;
+
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlRootElement;
+
+
+/**
+ * Define the name, group, and version of a component definition.
+ */
+@XmlAccessorType(XmlAccessType.FIELD)
+@XmlRootElement
+public class ComponentDefinition {
+
+  @XmlAttribute
+  private String provider;
+  @XmlAttribute
+  private String name; 
+  @XmlAttribute
+  private String version;
+  
+  public ComponentDefinition() {
+    // PASS
+  }
+
+  public ComponentDefinition(String name, String provider, String version) {
+    this.name = name;
+    this.provider = provider;
+    this.version = version;
+  }
+
+  @Override
+  public boolean equals(Object other) {
+    if (other == null || other.getClass() != getClass()) {
+      return false;
+    } else if (other == this) {
+      return true;
+    } else {
+      ComponentDefinition otherDefn = (ComponentDefinition) other;
+      return isStringEqual(name, otherDefn.name) && 
+             isStringEqual(provider, otherDefn.provider) &&
+             isStringEqual(version, otherDefn.version);
+    }
+  }
+
+  @Override
+  public int hashCode() {
+    return stringHash(name) + stringHash(version);
+  }
+
+  static int stringHash(String str) {
+    return str != null ? str.hashCode() : 0;
+  }
+
+  static boolean isStringEqual(String left, String right) {
+    if (left == right) {
+      return true;
+    } else if (left == null || right == null) {
+      return false;
+    } else {
+      return left.equals(right);
+    }
+  }
+
+  /**
+   * Override this configuration's properties with any corresponding ones
+   * that are set in the other component.
+   * @param other the overriding component
+   */
+  public void mergeInto(ComponentDefinition other) {
+    if (other.provider != null) {
+      this.provider = other.provider;
+    }
+    if (other.name != null) {
+      this.name = other.name;
+    }
+    if (other.version != null) {
+      this.version = other.version;
+    }
+  }
+  
+  /**
+   * Get the provider that published the component definition
+   * @return the provider name
+   */
+  public String getProvider() {
+    return provider;
+  }
+  
+  /**
+   * Get the name of the component definition
+   * @return the component definition name
+   */
+  public String getName() {
+    return name;
+  }
+  
+  /**
+   * Get the version of the component definition
+   * @return the version string
+   */
+  public String getVersion() {
+    return version;
+  }
+  
+  /**
+   * Set the provider that published the component definition
+   * @param provider the new provider name
+   */
+  public void setProvider(String provider) {
+    this.provider = provider;
+  }
+  
+  /**
+   * Set the component definition name.
+   * @param name the new name
+   */
+  public void setName(String name) {
+    this.name = name;
+  }
+  
+  /**
+   * Set the version of the component definition.
+   * @param version the new version
+   */
+  public void setVersion(String version) {
+    this.version = version;
+  }
+  
+  public String toString() {
+    StringBuilder buffer = new StringBuilder();
+    buffer.append(provider);
+    buffer.append('.');
+    buffer.append(name);
+    buffer.append('@');
+    buffer.append(version);
+    return buffer.toString();
+  }
+}
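
Since equals and hashCode compare by value, equivalent definitions collapse in hash-based collections, and toString prints provider.name@version. A small sketch with invented coordinates:

    import java.util.HashSet;
    import java.util.Set;
    import org.apache.ambari.common.rest.entities.ComponentDefinition;

    public class ComponentDefinitionExample {
        public static void main(String[] args) {
            // Two definitions with identical coordinates (values are illustrative)
            ComponentDefinition a = new ComponentDefinition("hdfs", "apache", "0.1.0");
            ComponentDefinition b = new ComponentDefinition("hdfs", "apache", "0.1.0");

            Set<ComponentDefinition> defs = new HashSet<ComponentDefinition>();
            defs.add(a);
            defs.add(b);
            System.out.println(defs.size()); // 1: equal by value, so deduplicated
            System.out.println(a);           // apache.hdfs@0.1.0
        }
    }
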
diff --git a/client/src/main/java/org/apache/ambari/common/rest/entities/Configuration.java b/client/src/main/java/org/apache/ambari/common/rest/entities/Configuration.java
new file mode 100644
index 0000000..23c8c0b
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/rest/entities/Configuration.java
@@ -0,0 +1,101 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.common.rest.entities;
+
+import java.util.ArrayList;
+import java.util.List;
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlElements;
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.bind.annotation.XmlType;
+
+
+/**
+ * The configuration included in a Stack. Configurations are a set of categories
+ * that correspond to the different configuration files necessary for running 
+ * Hadoop. The categories other than Ambari come from the components. The
+ * categories for Hadoop, HDFS, MapReduce and Pig are:
+ * <ul>
+ * <li> <b>ambari</b> - the generic properties that affect multiple components
+ * <li> Categories for Hadoop:
+ *    <ul>
+ *    <li> hadoop-env
+ *    <li> common-site
+ *    <li> log4j
+ *    <li> metrics2
+ *    </ul>
+ * <li> Categories for HDFS:
+ *    <ul>
+ *    <li> hdfs-site
+ *    </ul>
+ * <li> Categories for MapReduce:
+ *    <ul>
+ *    <li> mapred-site
+ *    <li> mapred-queue-acl
+ *    <li> task-controller
+ *    <li> capacity-scheduler
+ *    </ul>
+ * <li> Categories for Pig:
+ *    <ul>
+ *    <li> pig-env
+ *    <li> pig-site
+ *    </ul>
+ * </ul>
+ */
+@XmlAccessorType(XmlAccessType.FIELD)
+@XmlType(name = "Configuration", propOrder = {
+    "category"
+})
+@XmlRootElement
+public class Configuration {
+
+    @XmlElements({@XmlElement})
+    protected List<ConfigurationCategory> category;
+
+    /**
+     * Gets the value of the category property.
+     * 
+     * <p>
+     * This accessor method returns a reference to the live list,
+     * not a snapshot. Therefore any modification you make to the
+     * returned list will be present inside the JAXB object.
+     * This is why there is not a <CODE>set</CODE> method for the category property.
+     * 
+     * <p>
+     * For example, to add a new item, do as follows:
+     * <pre>
+     *    getCategory().add(newItem);
+     * </pre>
+     * 
+     * 
+     * <p>
+     * Objects of the following type(s) are allowed in the list
+     * {@link ConfigurationCategory }
+     * 
+     * 
+     */
+    public List<ConfigurationCategory> getCategory() {
+        if (category == null) {
+            category = new ArrayList<ConfigurationCategory>();
+        }
+        return this.category;
+    }
+
+}
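
The live-list accessor is used exactly as the javadoc describes: mutate the returned list in place. A brief sketch, with illustrative category and property names:

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.ambari.common.rest.entities.Configuration;
    import org.apache.ambari.common.rest.entities.ConfigurationCategory;
    import org.apache.ambari.common.rest.entities.Property;

    public class ConfigurationExample {
        public static void main(String[] args) {
            Configuration conf = new Configuration();

            List<Property> props = new ArrayList<Property>();
            props.add(new Property("dfs.replication", "3"));

            // getCategory() returns the live backing list; there is no setter by design
            conf.getCategory().add(new ConfigurationCategory("hdfs-site", props));
            System.out.println(conf.getCategory().get(0).getName()); // hdfs-site
        }
    }
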
diff --git a/client/src/main/java/org/apache/ambari/common/rest/entities/ConfigurationCategory.java b/client/src/main/java/org/apache/ambari/common/rest/entities/ConfigurationCategory.java
new file mode 100644
index 0000000..b67044b
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/rest/entities/ConfigurationCategory.java
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.common.rest.entities;
+
+import java.util.ArrayList;
+import java.util.List;
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlElements;
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.bind.annotation.XmlType;
+
+
+/**
+ * A category in a Configuration.
+ */
+@XmlAccessorType(XmlAccessType.FIELD)
+@XmlType(name = "ConfigurationCategory", propOrder = {
+    "property"
+})
+@XmlRootElement
+public class ConfigurationCategory {
+
+    @XmlAttribute(required = true)
+    private String name;
+    @XmlElements({@XmlElement})
+    private List<Property> property;
+
+    public ConfigurationCategory() {
+      // PASS
+    }
+    
+    public ConfigurationCategory(String name, List<Property> property) {
+      this.name = name;
+      this.property = property;
+    }
+
+    /**
+     * Gets the value of the name property.
+     * 
+     * @return
+     *     possible object is
+     *     {@link String }
+     *     
+     */
+    public String getName() {
+        return name;
+    }
+
+    /**
+     * Sets the value of the name property.
+     * 
+     * @param value
+     *     allowed object is
+     *     {@link String }
+     *     
+     */
+    public void setName(String value) {
+        this.name = value;
+    }
+
+    /**
+     * Gets the value of the property property.
+     * 
+     * <p>
+     * This accessor method returns a reference to the live list,
+     * not a snapshot. Therefore any modification you make to the
+     * returned list will be present inside the JAXB object.
+     * This is why there is not a <CODE>set</CODE> method for the property property.
+     * 
+     * <p>
+     * For example, to add a new item, do as follows:
+     * <pre>
+     *    getProperty().add(newItem);
+     * </pre>
+     * 
+     * 
+     * <p>
+     * Objects of the following type(s) are allowed in the list
+     * {@link Property }
+     * 
+     * 
+     */
+    public List<Property> getProperty() {
+        if (property == null) {
+            property = new ArrayList<Property>();
+        }
+        return this.property;
+    }
+    
+}
diff --git a/client/src/main/java/org/apache/ambari/common/rest/entities/KeyValuePair.java b/client/src/main/java/org/apache/ambari/common/rest/entities/KeyValuePair.java
new file mode 100644
index 0000000..e628ddd
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/rest/entities/KeyValuePair.java
@@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.common.rest.entities;
+
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.bind.annotation.XmlType;
+
+/**
+ * A single key/value pair inside a Configuration.
+ */
+@XmlAccessorType(XmlAccessType.FIELD)
+@XmlType(name = "KeyValuePair", propOrder = {
+    "name",
+    "value"
+})
+@XmlRootElement(name = "KeyValuePair")
+public class KeyValuePair {
+
+    @XmlAttribute(required = true)
+    protected String name;
+    @XmlAttribute(required = true)
+    protected String value;
+
+    public KeyValuePair() {
+      // PASS
+    }
+    
+    public KeyValuePair(String key, String value) {
+      this.name = key;
+      this.value = value;
+    }
+
+    /**
+     * Gets the value of the name property.
+     * 
+     * @return
+     *     possible object is
+     *     {@link String }
+     *     
+     */
+    public String getName() {
+        return name;
+    }
+
+    /**
+     * Sets the value of the name property.
+     * 
+     * @param value
+     *     allowed object is
+     *     {@link String }
+     *     
+     */
+    public void setName(String value) {
+        this.name = value;
+    }
+
+    /**
+     * Gets the value of the value property.
+     * 
+     * @return
+     *     possible object is
+     *     {@link String }
+     *     
+     */
+    public String getValue() {
+        return value;
+    }
+
+    /**
+     * Sets the value of the value property.
+     * 
+     * @param value
+     *     allowed object is
+     *     {@link String }
+     *     
+     */
+    public void setValue(String value) {
+        this.value = value;
+    }
+}
diff --git a/client/src/main/java/org/apache/ambari/common/rest/entities/Node.java b/client/src/main/java/org/apache/ambari/common/rest/entities/Node.java
new file mode 100644
index 0000000..5cafb69
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/rest/entities/Node.java
@@ -0,0 +1,144 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.common.rest.entities;
+
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.bind.annotation.XmlType;
+
+/**
+ * The information about each node.
+ * 
+ * <p>
+ * The schema is
+ * <pre>
+ * element Node {
+ *   element nodeAttributes {
+ *     attribute ramInGB { text }
+ *     element cpu {
+ *       attribute type { text }
+ *       attribute core { int }
+ *     }
+ *     element network {
+ *       attribute speed { int }
+ *     }
+ *     element disk {
+ *       attribute capacity { long }
+ *     } *
+ *   }
+ *   element nodeState {
+ *     attribute lastHeartbeat { text }?
+ *     attribute clusterName { text }?
+ *     attribute agentInstalled { boolean }?
+ *     attribute allocatedToCluster { boolean }?
+ *     element nodeRoleNames { text }*
+ *     element nodeServers {
+ *       attribute name { text }
+ *       attribute state { text }
+ *       attribute lastStateUpdateTime { text }
+ *     }*
+ *   }
+ * }
+ * </pre>
+ * </p>
+ */
+@XmlAccessorType(XmlAccessType.FIELD)
+@XmlType(name = "Node", propOrder = {
+    "nodeAttributes",
+    "nodeState"
+})
+@XmlRootElement(name = "Node")
+public class Node {
+    
+    @XmlAttribute(required = true)
+    protected String name;
+    @XmlElement(name = "NodeAttributes")
+    protected NodeAttributes nodeAttributes;
+    @XmlElement(name = "NodeState", required = true)
+    protected NodeState nodeState;
+   
+    public Node () {}
+    
+    public Node (String name) {
+        this.name = name;
+        this.nodeState = new NodeState();
+    }
+	
+    /*
+     * Marks the node's cluster association for release.
+     */
+    public void releaseNodeFromCluster() {
+        /*
+         * The cluster name, node servers, and node-to-role mappings are reset
+         * as part of the heartbeat once the node stops its services and cleans up.
+         */
+        this.nodeState.setAllocatedToCluster(false);
+        this.getNodeState().setNodeRoles(null);
+    }
+
+    /*
+     * Reserving a node for a cluster is done by associating the cluster name with the node.
+     */
+    public void reserveNodeForCluster(String clusterName, Boolean agentInstalled) {
+        this.getNodeState().setClusterName(clusterName);
+        this.getNodeState().setAgentInstalled(agentInstalled);
+        this.getNodeState().setAllocatedToCluster(true);
+    }
+        
+    /**
+     * @return the name
+     */
+    public String getName() {
+        return name;
+    }
+
+    /**
+     * @param name the name to set
+     */
+    public void setName(String name) {
+        this.name = name;
+    }
+
+    /**
+     * @return the nodeAttributes
+     */
+    public NodeAttributes getNodeAttributes() {
+        return nodeAttributes;
+    }
+
+    /**
+     * @param nodeAttributes the nodeAttributes to set
+     */
+    public void setNodeAttributes(NodeAttributes nodeAttributes) {
+        this.nodeAttributes = nodeAttributes;
+    }
+
+    /**
+     * @return the nodeState
+     */
+    public NodeState getNodeState() {
+        return nodeState;
+    }
+
+    /**
+     * @param nodeState the nodeState to set
+     */
+    public void setNodeState(NodeState nodeState) {
+        this.nodeState = nodeState;
+    }
+}
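
A short sketch of the reserve/release lifecycle the two methods above implement; the host and cluster names are invented:

    import org.apache.ambari.common.rest.entities.Node;

    public class NodeLifecycleExample {
        public static void main(String[] args) {
            Node node = new Node("host1.example.com");

            // Reserving associates the cluster name and marks the node allocated
            node.reserveNodeForCluster("blue", true);
            System.out.println(node.getNodeState().getAllocatedToCluster()); // true

            // Releasing clears the allocation flag and the role list; the cluster
            // name itself is reset later via the heartbeat path, per the comment above
            node.releaseNodeFromCluster();
            System.out.println(node.getNodeState().getAllocatedToCluster()); // false
        }
    }
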
diff --git a/client/src/main/java/org/apache/ambari/common/rest/entities/NodeAttributes.java b/client/src/main/java/org/apache/ambari/common/rest/entities/NodeAttributes.java
new file mode 100644
index 0000000..492ed9f
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/rest/entities/NodeAttributes.java
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.common.rest.entities;
+
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlAttribute;
+
+/**
+ * The attributes for each machine included in Node.
+ */
+@XmlAccessorType(XmlAccessType.FIELD)
+public class NodeAttributes {
+
+    @XmlAttribute
+    protected String cpuType;
+    @XmlAttribute
+    protected short cpuUnits;
+    @XmlAttribute
+    protected short cpuCores;
+    @XmlAttribute
+    protected long ramInGB;
+    @XmlAttribute
+    protected long diskSizeInGB;
+    @XmlAttribute
+    protected short diskUnits;
+
+    /**
+     * Gets the value of the cpuType property.
+     * 
+     * @return
+     *     possible object is
+     *     {@link String }
+     *     
+     */
+    public String getCPUType() {
+        return cpuType;
+    }
+
+    /**
+     * Sets the value of the cpuType property.
+     * 
+     * @param value
+     *     allowed object is
+     *     {@link String }
+     *     
+     */
+    public void setCPUType(String value) {
+        this.cpuType = value;
+    }
+
+    /**
+     * Gets the value of the cpuUnits property.
+     * 
+     */
+    public short getCPUUnits() {
+        return cpuUnits;
+    }
+
+    /**
+     * Sets the value of the cpuUnits property.
+     * 
+     */
+    public void setCPUUnits(short value) {
+        this.cpuUnits = value;
+    }
+
+    /**
+     * Gets the value of the cpuCores property.
+     * 
+     */
+    public short getCPUCores() {
+        return cpuCores;
+    }
+
+    /**
+     * Sets the value of the cpuCores property.
+     * 
+     */
+    public void setCPUCores(short value) {
+        this.cpuCores = value;
+    }
+
+    /**
+     * Gets the value of the ramInGB property.
+     * 
+     */
+    public long getRAMInGB() {
+        return ramInGB;
+    }
+
+    /**
+     * Sets the value of the ramInGB property.
+     * 
+     */
+    public void setRAMInGB(long value) {
+        this.ramInGB = value;
+    }
+
+    /**
+     * Gets the value of the diskSizeInGB property.
+     * 
+     */
+    public long getDISKSizeInGB() {
+        return diskSizeInGB;
+    }
+
+    /**
+     * Sets the value of the diskSizeInGB property.
+     * 
+     */
+    public void setDISKSizeInGB(long value) {
+        this.diskSizeInGB = value;
+    }
+
+    /**
+     * Gets the value of the diskUnits property.
+     * 
+     */
+    public short getDISKUnits() {
+        return diskUnits;
+    }
+
+    /**
+     * Sets the value of the diskUnits property.
+     * 
+     */
+    public void setDISKUnits(short value) {
+        this.diskUnits = value;
+    }
+
+}
diff --git a/client/src/main/java/org/apache/ambari/common/rest/entities/NodeRole.java b/client/src/main/java/org/apache/ambari/common/rest/entities/NodeRole.java
new file mode 100644
index 0000000..d7b2a61
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/rest/entities/NodeRole.java
@@ -0,0 +1,118 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.common.rest.entities;
+
+import java.util.Date;
+import java.util.GregorianCalendar;
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlSchemaType;
+import javax.xml.bind.annotation.XmlType;
+import javax.xml.datatype.DatatypeFactory;
+import javax.xml.datatype.XMLGregorianCalendar;
+
+/**
+ * The information about a server running on a node, which is included in 
+ * Node.
+ */
+@XmlAccessorType(XmlAccessType.FIELD)
+@XmlType
+public class NodeRole {
+
+    public static final String NODE_SERVER_STATE_UP = "UP";
+    public static final String NODE_SERVER_STATE_DOWN = "DOWN";
+    
+    /*
+     * The name should take the form "component name:role name".
+     * TODO: Maybe component and role should be two separate attributes instead of a single name.
+     */
+    @XmlAttribute(required = true)
+    protected String name;
+    
+    @XmlAttribute(required = true)
+    protected String state;  // UP/DOWN
+    
+    @XmlAttribute(required = true)
+    @XmlSchemaType(name = "dateTime")
+    protected XMLGregorianCalendar lastStateUpdateTime;
+    
+    public NodeRole() {}
+
+    /**
+     * Create a role entry with its name, server state, and last state update time.
+     */
+    public NodeRole(String name, String state, XMLGregorianCalendar lastStateUpdateTime) {
+        this.name = name;
+        this.state = state;
+        this.lastStateUpdateTime = lastStateUpdateTime;
+    }
+    
+    /**
+     * @return the name
+     */
+    public String getName() {
+        return name;
+    }
+
+    /**
+     * @param name the name to set
+     */
+    public void setName(String name) {
+        this.name = name;
+    }
+
+    /**
+     * @return the state
+     */
+    public String getState() {
+        return state;
+    }
+
+    /**
+     * @param state the state to set
+     */
+    public void setState(String state) {
+        this.state = state;
+    }
+
+    /**
+     * @return the lastStateUpdateTime
+     */
+    public XMLGregorianCalendar getLastStateUpdateTime() {
+        return lastStateUpdateTime;
+    }
+
+    /**
+     * @param lastStateUpdateTime the lastStateUpdateTime to set
+     */
+    public void setLastStateUpdateTime(XMLGregorianCalendar lastStateUpdateTime) {
+        this.lastStateUpdateTime = lastStateUpdateTime;
+    }
+
+    /**
+     * @param lastStateUpdateTime the lastStateUpdateTime to set, as a Date
+     */
+    protected void setLastUpdateTime(Date lastStateUpdateTime) throws Exception {
+        GregorianCalendar cal = new GregorianCalendar();
+        cal.setTime(lastStateUpdateTime);
+        this.lastStateUpdateTime = DatatypeFactory.newInstance().newXMLGregorianCalendar(cal);
+    }
+    
+}
diff --git a/client/src/main/java/org/apache/ambari/common/rest/entities/NodeState.java b/client/src/main/java/org/apache/ambari/common/rest/entities/NodeState.java
new file mode 100644
index 0000000..73cc5a5
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/rest/entities/NodeState.java
@@ -0,0 +1,260 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.common.rest.entities;
+
+import java.util.ArrayList;
+import java.util.Date;
+import java.util.GregorianCalendar;
+import java.util.List;
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.bind.annotation.XmlSchemaType;
+import javax.xml.bind.annotation.XmlType;
+import javax.xml.datatype.DatatypeFactory;
+import javax.xml.datatype.XMLGregorianCalendar;
+
+import org.apache.ambari.common.rest.agent.CommandResult;
+
+/**
+ * The state of a node, including its assigned roles and health.
+ */
+@XmlAccessorType(XmlAccessType.FIELD)
+@XmlType(name = "NodeState", propOrder = {
+    "nodeRoles",
+    "failedCommandStdouts",
+    "failedCommandStderrs"
+})
+@XmlRootElement
+public class NodeState {
+
+    @XmlAttribute
+    @XmlSchemaType(name = "dateTime")
+    protected XMLGregorianCalendar lastHeartbeatTime;
+        
+    /*
+     * Associating a cluster name reserves the node for that cluster.
+     */
+    @XmlAttribute
+    protected String clusterName;
+    
+    @XmlAttribute
+    protected Boolean agentInstalled = true;
+
+    @XmlAttribute
+    protected Boolean allocatedToCluster = false;
+    
+    @XmlAttribute
+    protected Boolean health = NodeState.HEALTHY;
+        
+    /*
+     * null indicates no roles associated with this node.
+     */
+    @XmlElement
+    protected List<NodeRole> nodeRoles = null;
+        
+    @XmlElement
+    protected List<String> failedCommandStdouts = null;
+    
+    @XmlElement
+    protected List<String> failedCommandStderrs = null;
+    
+    public static final boolean HEALTHY = true;
+    public static final boolean UNHEALTHY = false;
+
+    /**
+     * Get the names of the node's roles, optionally filtered by server state.
+     * A null or empty activeState returns every role name.
+     */
+    public List<String> getNodeRoleNames(String activeState) {
+        if (this.getNodeRoles() == null) {
+            return null;
+        }
+        List<String> rolenames = new ArrayList<String>();
+        for (NodeRole x : this.getNodeRoles()) {
+            if (activeState == null || activeState.equals("")) {
+                rolenames.add(x.getName());
+            } else if (activeState.equals(NodeRole.NODE_SERVER_STATE_DOWN)
+                    && x.getState().equals(NodeRole.NODE_SERVER_STATE_DOWN)) {
+                rolenames.add(x.getName());
+            } else if (activeState.equals(NodeRole.NODE_SERVER_STATE_UP)
+                    && x.getState().equals(NodeRole.NODE_SERVER_STATE_UP)) {
+                rolenames.add(x.getName());
+            }
+        }
+        return rolenames;
+    }
+    
+    /**
+     * Update the state of a role, adding it if it is not already in the list.
+     */
+    public void updateRoleState(NodeRole role) {
+        if (this.getNodeRoles() == null) {
+            this.setNodeRoles(new ArrayList<NodeRole>());
+        }
+        for (int i = 0; i < this.getNodeRoles().size(); i++) {
+            if (this.getNodeRoles().get(i).getName().equals(role.getName())) {
+                this.getNodeRoles().set(i, role);
+                return;
+            }
+        }
+        this.getNodeRoles().add(role);
+    }
+    
+    /**
+     * @return the nodeRoles
+     */
+    public List<NodeRole> getNodeRoles() {
+        return nodeRoles;
+    }
+
+    /**
+     * @param nodeRoles the nodeRoles to set
+     */
+    public void setNodeRoles(List<NodeRole> nodeRoles) {
+        this.nodeRoles = nodeRoles;
+    }
+    
+    /**
+     * @return the clusterName
+     */
+    public String getClusterName() {
+        return clusterName;
+    }
+
+    /**
+     * @param clusterName the clusterName to set
+     */
+    public void setClusterName(String clusterName) {
+        this.clusterName = clusterName;
+    }
+
+    /**
+     * @return the allocatedToCluster
+     */
+    public Boolean getAllocatedToCluster() {
+        return allocatedToCluster;
+    }
+
+    /**
+     * @param allocatedToCluster the allocatedToCluster to set
+     */
+    public void setAllocatedToCluster(Boolean allocatedToCluster) {
+        this.allocatedToCluster = allocatedToCluster;
+    }
+
+    /**
+     * @return the agentInstalled
+     */
+    public Boolean getAgentInstalled() {
+        return agentInstalled;
+    }
+
+    /**
+     * @param agentInstalled the agentInstalled to set
+     */
+    public void setAgentInstalled(Boolean agentInstalled) {
+        this.agentInstalled = agentInstalled;
+    }
+
+    /**
+     * @return the health
+     */
+    public Boolean getHealth() {
+        return health;
+    }
+
+    /**
+     * @param health (true for healthy)
+     */
+    public void setHealth(Boolean health) {
+        this.health = health;
+    }
+    
+    /**
+     * @param results list of results that failed
+     */
+    public void setFailedCommandResults(List<CommandResult> results) {
+      if (results == null || results.size() == 0) {
+        this.failedCommandStderrs = null;
+        this.failedCommandStdouts = null;
+        return;
+      }
+      for (CommandResult r : results) {
+        if (r.getError() != null) {
+          if (this.failedCommandStderrs == null) {
+            this.failedCommandStderrs = new ArrayList<String>();
+          }
+          this.failedCommandStderrs.add(r.getError());
+        }
+        if (r.getOutput() != null) {
+          if (this.failedCommandStdouts == null) {
+            this.failedCommandStdouts = new ArrayList<String>();
+          }
+          this.failedCommandStdouts.add(r.getOutput());
+        }
+      }
+    }
+    
+    /**
+     * @return the stdouts of failed commands
+     */
+    public List<String> getFailedCommandStdouts() {
+      return failedCommandStdouts;
+    }
+    
+    /**
+     * @return the stderrs of failed commands
+     */
+    public List<String> getFailedCommandStderrs() {
+      return failedCommandStderrs;
+    }
+    
+    /**
+     * @return the lastHeartbeatTime
+     */
+    public XMLGregorianCalendar getLastHeartbeatTime() {
+        return lastHeartbeatTime;
+    }
+
+    /**
+     * @param lastHeartbeatTime the lastHeartbeatTime to set
+     */
+    public void setLastHeartbeatTime(XMLGregorianCalendar lastHeartbeatTime) {
+        this.lastHeartbeatTime = lastHeartbeatTime;
+    }
+
+    /**
+     * @param lastHeartbeatTime the lastHeartbeatTime to set
+     */
+    public void setLastHeartbeatTime(Date lastHeartbeatTime) throws Exception {
+        if (lastHeartbeatTime == null) {
+            this.lastHeartbeatTime = null;
+        } else {
+            GregorianCalendar cal = new GregorianCalendar();
+            cal.setTime(lastHeartbeatTime);
+            this.lastHeartbeatTime = DatatypeFactory.newInstance().newXMLGregorianCalendar(cal);
+        }
+    }
+
+}
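
A sketch of how updateRoleState and getNodeRoleNames work together: a second update with the same role name replaces the entry in place rather than appending. The role name is illustrative:

    import org.apache.ambari.common.rest.entities.NodeRole;
    import org.apache.ambari.common.rest.entities.NodeState;

    public class NodeStateExample {
        public static void main(String[] args) {
            NodeState state = new NodeState();

            // First call inserts the role; the second replaces it at the same index
            state.updateRoleState(new NodeRole("hdfs:datanode",
                    NodeRole.NODE_SERVER_STATE_UP, null));
            state.updateRoleState(new NodeRole("hdfs:datanode",
                    NodeRole.NODE_SERVER_STATE_DOWN, null));

            // Filter by server state; null (or "") returns every role name
            System.out.println(state.getNodeRoleNames(NodeRole.NODE_SERVER_STATE_DOWN));
            System.out.println(state.getNodeRoleNames(null));
        }
    }
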
diff --git a/client/src/main/java/org/apache/ambari/common/rest/entities/Property.java b/client/src/main/java/org/apache/ambari/common/rest/entities/Property.java
new file mode 100644
index 0000000..4818685
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/rest/entities/Property.java
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.common.rest.entities;
+
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.bind.annotation.XmlType;
+
+/**
+ * A single key/value pair inside a Configuration, with an optional force flag.
+ */
+@XmlAccessorType(XmlAccessType.FIELD)
+@XmlType(name = "Property", propOrder = {
+    "name",
+    "value",
+    "force"
+})
+@XmlRootElement(name = "Property")
+public class Property {
+
+    @XmlAttribute(required = true)
+    protected String name;
+    @XmlAttribute(required = true)
+    protected String value;
+    @XmlAttribute(required = false)
+    protected boolean force = false;
+
+    public Property() {
+      // PASS
+    }
+    
+    public Property(String key, String value) {
+      this.name = key;
+      this.value = value;
+    }
+
+    /**
+     * Gets the value of the name property.
+     * 
+     * @return
+     *     possible object is
+     *     {@link String }
+     *     
+     */
+    public String getName() {
+        return name;
+    }
+
+    /**
+     * Sets the value of the name property.
+     * 
+     * @param value
+     *     allowed object is
+     *     {@link String }
+     *     
+     */
+    public void setName(String value) {
+        this.name = value;
+    }
+
+    /**
+     * Gets the value of the value property.
+     * 
+     * @return
+     *     possible object is
+     *     {@link String }
+     *     
+     */
+    public String getValue() {
+        return value;
+    }
+
+    /**
+     * Sets the value of the value property.
+     * 
+     * @param value
+     *     allowed object is
+     *     {@link String }
+     *     
+     */
+    public void setValue(String value) {
+        this.value = value;
+    }
+    
+    /**
+     * Get whether this property is forced.
+     * @return true if it is forced
+     */
+    public boolean getForce() {
+      return force;
+    }
+    
+    /**
+     * Set whether this property is forced
+     * @param force mark it as forced
+     */
+    public void setForce(boolean force) {
+      this.force = force;
+    }
+}
diff --git a/client/src/main/java/org/apache/ambari/common/rest/entities/RepositoryKind.java b/client/src/main/java/org/apache/ambari/common/rest/entities/RepositoryKind.java
new file mode 100644
index 0000000..9d410b5
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/rest/entities/RepositoryKind.java
@@ -0,0 +1,130 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.common.rest.entities;
+
+import java.util.Arrays;
+import java.util.List;
+
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.bind.annotation.XmlType;
+
+
+/**
+ * Entity for defining a list of repositories inside a Stack.
+ */
+@XmlAccessorType(XmlAccessType.FIELD)
+@XmlType(name = "PackageType", propOrder = {
+    "urls"
+})
+@XmlRootElement
+public class RepositoryKind {
+
+    @XmlAttribute(required = true)
+    private String kind;
+    
+    @XmlElement(required = true)
+    private List<String> urls;
+
+    public RepositoryKind() {
+      // PASS
+    }
+
+    public RepositoryKind(String name, String... urls) {
+      this.kind = name;
+      this.urls = Arrays.asList(urls);
+    }
+
+    @Override
+    public boolean equals(Object other) {
+      if (this == other) {
+        return true;
+      } else if (other == null || other.getClass() != getClass()) {
+        return false;
+      } else {
+        RepositoryKind repoKind = (RepositoryKind) other;
+        if (!kind.equals(repoKind.kind) || urls.size() != repoKind.urls.size()) {
+          return false;
+        } else {
+          for(int i = 0; i < urls.size(); i++) {
+            if (!urls.get(i).equals(repoKind.urls.get(i))) {
+              return false;
+            }
+          }
+          return true;
+        }
+      }
+    }
+
+    @Override
+    public int hashCode() {
+      return kind.hashCode();
+    }
+
+    /**
+     * Gets the value of the urls property.
+     * 
+     * @return
+     *     possible object is
+     *     {@link List }
+     *     
+     */
+    public List<String> getUrls() {
+        return urls;
+    }
+
+    /**
+     * Sets the value of the urls property.
+     * 
+     * @param value
+     *     allowed object is
+     *     {@link String }
+     *     
+     */
+    public void setUrls(List<String> value) {
+        this.urls = value;
+    }
+
+    /**
+     * Gets the value of the kind property.
+     * 
+     * @return
+     *     possible object is
+     *     {@link String }
+     *     
+     */
+    public String getKind() {
+        return kind;
+    }
+
+    /**
+     * Sets the value of the kind property.
+     * 
+     * @param value
+     *     allowed object is
+     *     {@link String }
+     *     
+     */
+    public void setKind(String value) {
+        this.kind = value;
+    }
+
+}
diff --git a/client/src/main/java/org/apache/ambari/common/rest/entities/Role.java b/client/src/main/java/org/apache/ambari/common/rest/entities/Role.java
new file mode 100644
index 0000000..177c400
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/rest/entities/Role.java
@@ -0,0 +1,100 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.common.rest.entities;
+
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.bind.annotation.XmlType;
+
+/**
+ * Details of the configuration for a role inside of a Stack.
+ */
+@XmlAccessorType(XmlAccessType.FIELD)
+@XmlType(name = "RoleType", propOrder = {
+    "name",
+    "configuration"
+})
+@XmlRootElement
+public class Role {
+
+    @XmlAttribute(required = true)
+    protected String name;
+    @XmlElement(required = true)
+    protected Configuration configuration;
+
+    public Role() {
+      // PASS
+    }
+    
+    public Role(String name, Configuration conf) {
+      this.name = name;
+      this.configuration = conf;
+    }
+
+    /**
+     * Gets the value of the name property.
+     * 
+     * @return
+     *     possible object is
+     *     {@link String }
+     *     
+     */
+    public String getName() {
+        return name;
+    }
+
+    /**
+     * Sets the value of the name property.
+     * 
+     * @param value
+     *     allowed object is
+     *     {@link String }
+     *     
+     */
+    public void setName(String value) {
+        this.name = value;
+    }
+
+    /**
+     * Gets the value of the configuration property.
+     * 
+     * @return
+     *     possible object is
+     *     {@link Configuration }
+     *     
+     */
+    public Configuration getConfiguration() {
+        return configuration;
+    }
+
+    /**
+     * Sets the value of the configuration property.
+     * 
+     * @param value
+     *     allowed object is
+     *     {@link Configuration }
+     *     
+     */
+    public void setConfiguration(Configuration value) {
+        this.configuration = value;
+    }
+
+}
diff --git a/client/src/main/java/org/apache/ambari/common/rest/entities/RoleToNodes.java b/client/src/main/java/org/apache/ambari/common/rest/entities/RoleToNodes.java
new file mode 100644
index 0000000..45ca69e
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/rest/entities/RoleToNodes.java
@@ -0,0 +1,60 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.common.rest.entities;
+
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlAttribute;
+
+
+/**
+ * The nodes associated with a role.
+ */
+@XmlAccessorType(XmlAccessType.FIELD)
+public class RoleToNodes {
+
+    @XmlAttribute(required = true)
+    protected String roleName;
+    @XmlAttribute
+    protected String nodes;
+    
+    /**
+     * @return the roleName
+     */
+    public String getRoleName() {
+        return roleName;
+    }
+
+    /**
+     * @param roleName the roleName to set
+     */
+    public void setRoleName(String roleName) {
+        this.roleName = roleName;
+    }
+
+    /**
+     * @return the nodes
+     */
+    public String getNodes() {
+        return nodes;
+    }
+
+    /**
+     * @param nodes the node range expressions to set
+     */
+    public void setNodes(String nodes) {
+        this.nodes = nodes;
+    }
+}
diff --git a/client/src/main/java/org/apache/ambari/common/rest/entities/Stack.java b/client/src/main/java/org/apache/ambari/common/rest/entities/Stack.java
new file mode 100644
index 0000000..be81a10
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/rest/entities/Stack.java
@@ -0,0 +1,351 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.common.rest.entities;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlElements;
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.bind.annotation.XmlSchemaType;
+import javax.xml.bind.annotation.XmlType;
+import javax.xml.datatype.XMLGregorianCalendar;
+
+
+/**
+ * Stack definition.
+ * Stacks include the components to deploy and the configuration for the
+ * cluster.
+ * 
+ * <p>
+ * The schema is:
+ * <pre>
+ * grammar {
+ *   start = element stack {
+ *     attribute name { text }?
+ *     attribute revision { text }?
+ *     attribute parentName { text }?
+ *     attribute parentRevision { text }?
+ *     attribute creationTime { text }?
+ *     element repositories {
+ *       attribute name { text }
+ *       element urls { text }*
+ *     }*
+ *     element globals {
+ *       element property {
+ *         attribute name { text }
+ *         attribute value { text }
+ *       }*
+ *     } 
+ *     element configuration { Configuration }?
+ *     element components {
+ *       attribute name { text }
+ *       attribute version { text }?
+ *       attribute architecture { text }?
+ *       attribute provider { text }?
+ *       element definition {
+ *         attribute provider { text }?
+ *         attribute name { text }?
+ *         attribute version { text }?
+ *       }
+ *       element configuration { Configuration }?
+ *       element roles {
+ *         attribute name { text }
+ *         element configuration { Configuration }?
+ *       }*
+ *     }*
+ *   }
+ *   Configuration = element configuration {
+ *     element category {
+ *       attribute name { text }
+ *       element property {
+ *         attribute name { text }
+ *         attribute value { text }
+ *         attribute force { boolean }?
+ *       }*
+ *     }
+ *   }
+ * }
+ * </pre>
+ * </p>
+ * 
+ */
+@XmlAccessorType(XmlAccessType.FIELD)
+@XmlType(name = "stack", propOrder = {
+    "name",
+    "revision",
+    "parentName",
+    "parentRevision",
+    "creationTime",
+    "repositories",
+    "default_user_group",
+    "globals",
+    "configuration",
+    "components"
+})
+@XmlRootElement(name="stack")
+public class Stack {
+
+    /**
+     * The name of the stack.
+     * This is a read-only attribute.
+     */
+    @XmlAttribute
+    protected String name;
+    /**
+     * The revision of this stack.
+     * This is a read-only attribute.
+     */
+    @XmlAttribute
+    protected String revision;
+    
+    /**
+     * The name of the parent stack. Attributes that aren't defined by this
+     * stack are inherited from the parent.
+     */
+    @XmlAttribute
+    protected String parentName;
+    
+    /**
+     * The revision number of the parent stack.
+     */
+    @XmlAttribute
+    protected int parentRevision = -1;
+    
+    /**
+     * When this revision of the stack was created.
+     * This is a read-only attribute.
+     */
+    @XmlAttribute
+    @XmlSchemaType(name = "dateTime")
+    protected XMLGregorianCalendar creationTime;
+
+    /**
+     * Information about where to pick up the tarballs/rpms for the components.
+     */
+    @XmlElement
+    protected List<RepositoryKind> repositories;
+    
+    /**
+     * Default user group information
+     */
+    @XmlElement
+    protected UserGroup default_user_group;
+    
+    /**
+     * @return the default_user_group
+     */
+    public UserGroup getDefault_user_group() {
+        return default_user_group;
+    }
+
+    /**
+     * @param default_user_group the default_user_group to set
+     */
+    public void setDefault_user_group(UserGroup default_user_group) {
+        this.default_user_group = default_user_group;
+    }
+
+    /**
+     * Stack global variables.
+     */
+    @XmlElements({@XmlElement})
+    protected List<KeyValuePair> globals;
+    
+    /**
+     * @return the globals
+     */
+    public List<KeyValuePair> getGlobals() {
+        return globals;
+    }
+
+    /**
+     * @param globals the globals to set
+     */
+    public void setGlobals(List<KeyValuePair> globals) {
+        this.globals = globals;
+    }
+
+    /**
+     * The client configuration.
+     */
+    @XmlElement
+    protected Configuration configuration;
+    
+    /**
+     * The list of components that are included in this stack. This includes
+     * the version of each component and the associated configuration.
+     */
+    @XmlElement
+    protected List<Component> components = new ArrayList<Component>();
+
+    /**
+     * Create an empty stack
+     */
+    public Stack() {
+    }
+
+    /**
+     * Do a shallow copy of the stack
+     * @param orig the source stack
+     */
+    public Stack(Stack orig) {
+      this.components = orig.components;
+      this.configuration = orig.configuration;
+      this.creationTime = orig.creationTime;
+      this.name = orig.name;
+      this.parentName = orig.parentName;
+      this.parentRevision = orig.parentRevision;
+      this.repositories = orig.repositories;
+      this.globals = orig.globals;
+      this.revision = orig.revision;
+      this.default_user_group = orig.default_user_group;
+    }
+
+    /**
+     * Get the name of the stack.
+     * @return the name
+     */
+    public String getName() {
+        return name;
+    }
+
+    /**
+     * @param name the name to set
+     */
+    public void setName(String name) {
+        this.name = name;
+    }
+    /**
+     * @return the revision
+     */
+    public String getRevision() {
+        return revision;
+    }
+    /**
+     * @param revision the revision to set
+     */
+    public void setRevision(String revision) {
+        this.revision = revision;
+    }
+
+    /**
+     * Get the name of the parent stack. Attributes that aren't defined by this
+     * stack are inherited from the parent.
+     * @return the parentName
+     */
+    public String getParentName() {
+        return parentName;
+    }
+    /**
+     * @param parentName the parentName to set
+     */
+    public void setParentName(String parentName) {
+        this.parentName = parentName;
+    }
+    /**
+     * @return the parentRevision
+     */
+    public int getParentRevision() {
+        return parentRevision;
+    }
+    /**
+     * @param parentRevision the parentRevision to set
+     */
+    public void setParentRevision(int parentRevision) {
+        this.parentRevision = parentRevision;
+    }
+
+    /**
+     * Get the list of package repositories that store the rpms and tarballs.
+     * @return the packageRepositories
+     */
+    public List<RepositoryKind> getPackageRepositories() {
+        return repositories;
+    }
+
+    /**
+     * @param value the packageRepositories to set
+     */
+    public void setPackageRepositories(List<RepositoryKind> value) {
+        this.repositories = value;
+    }
+
+    /**
+     * Get the client configuration.
+     * @return the configuration
+     */
+    public Configuration getConfiguration() {
+        return configuration;
+    }
+
+    /**
+     * Set the client configuration.
+     * @param configuration the configuration to set
+     */
+    public void setConfiguration(Configuration configuration) {
+        this.configuration = configuration;
+    }
+
+    /**
+     * Get the list of components for this stack.
+     * @return the components
+     */
+    public List<Component> getComponents() {
+        return components;
+    }
+
+    /**
+     * @param components the components to set
+     */
+    public void setComponents(List<Component> components) {
+        this.components = components;
+    }
+    
+    /**
+     * Get the creation time of this stack revision.
+     * @return the creation time
+     */
+    public XMLGregorianCalendar getCreationTime() {
+        return creationTime;
+    }
+
+    /**
+     * @param creationTime the creationTime to set
+     */
+    public void setCreationTime(XMLGregorianCalendar creationTime) {
+        this.creationTime = creationTime;
+    }
+    
+    /**
+     * Find a component by name.
+     * @param name the component name to look up
+     * @return the matching component, or null if no component matches
+     */
+    public Component getComponentByName(String name) {
+        for (Component c : this.components) {
+            if (c.getName().equals(name)) {
+                return c;
+            }
+        }
+        return null;
+    }
+}
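As a rough sketch of how the bean composes (Component is added elsewhere in this patch; the setter used here is an assumption based on getComponentByName's use of getName()):

    // Hypothetical sketch: assemble a stack and look up a component.
    Stack stack = new Stack();
    stack.setName("hadoop-stack");
    stack.setRevision("1");

    Component hdfs = new Component();
    hdfs.setName("hdfs");            // assumed setter on Component
    stack.getComponents().add(hdfs); // components list is pre-initialized

    Component found = stack.getComponentByName("hdfs"); // null if absent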
diff --git a/client/src/main/java/org/apache/ambari/common/rest/entities/StackInformation.java b/client/src/main/java/org/apache/ambari/common/rest/entities/StackInformation.java
new file mode 100644
index 0000000..f57c4e3
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/rest/entities/StackInformation.java
@@ -0,0 +1,147 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.common.rest.entities;
+
+import java.util.List;
+
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.bind.annotation.XmlSchemaType;
+import javax.xml.bind.annotation.XmlType;
+import javax.xml.datatype.XMLGregorianCalendar;
+
+/**
+ * Stack metadata information, which is returned when listing stacks.
+ * 
+ * <p>
+ * The schema is:
+ * <pre>
+ * element StackInformation {
+ *   attribute name { text }
+ *   attribute revision { text }
+ *   attribute parentName { text }
+ *   attribute parentRevision { text }
+ *   attribute creationTime { text }
+ *   element component { text }*
+ * }
+ * </pre>
+ * </p>
+ */
+@XmlAccessorType(XmlAccessType.FIELD)
+@XmlType(name = "StackInformation", propOrder = {
+    "name",
+    "revision",
+    "parentName",
+    "parentRevision",
+    "creationTime",
+    "component"
+})
+@XmlRootElement
+public class StackInformation {
+
+    @XmlAttribute
+    protected String name;
+    @XmlAttribute
+    protected String revision;
+    @XmlAttribute
+    protected String parentName;
+    @XmlAttribute
+    protected int parentRevision;
+    @XmlElement
+    protected List<String> component;
+    @XmlAttribute
+    @XmlSchemaType(name = "dateTime")
+    protected XMLGregorianCalendar creationTime;
+
+    /**
+     * @return the component
+     */
+    public List<String> getComponent() {
+        return component;
+    }
+    /**
+     * @param component the component to set
+     */
+    public void setComponent(List<String> component) {
+        this.component = component;
+    }
+    /**
+     * @return the name
+     */
+    public String getName() {
+        return name;
+    }
+    /**
+     * @param name the name to set
+     */
+    public void setName(String name) {
+        this.name = name;
+    }
+    /**
+     * @return the revision
+     */
+    public String getRevision() {
+        return revision;
+    }
+    /**
+     * @param revision the revision to set
+     */
+    public void setRevision(String revision) {
+        this.revision = revision;
+    }
+    /**
+     * @return the parentName
+     */
+    public String getParentName() {
+        return parentName;
+    }
+    /**
+     * @param parentName the parentName to set
+     */
+    public void setParentName(String parentName) {
+        this.parentName = parentName;
+    }
+    /**
+     * @return the parentRevision
+     */
+    public int getParentRevision() {
+        return parentRevision;
+    }
+    /**
+     * @param parentRevision the parentRevision to set
+     */
+    public void setParentRevision(int parentRevision) {
+        this.parentRevision = parentRevision;
+    }
+    
+    /**
+     * @return the creationTime
+     */
+    public XMLGregorianCalendar getCreationTime() {
+        return creationTime;
+    }
+    /**
+     * @param creationTime the creationTime to set
+     */
+    public void setCreationTime(XMLGregorianCalendar creationTime) {
+        this.creationTime = creationTime;
+    }
+}
diff --git a/client/src/main/java/org/apache/ambari/common/rest/entities/UserGroup.java b/client/src/main/java/org/apache/ambari/common/rest/entities/UserGroup.java
new file mode 100644
index 0000000..9679d49
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/rest/entities/UserGroup.java
@@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.common.rest.entities;
+
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.bind.annotation.XmlType;
+
+/**
+ * Default user and group information for the components of a stack.
+ */
+@XmlAccessorType(XmlAccessType.FIELD)
+@XmlType(name = "UserGroup", propOrder = {
+})
+@XmlRootElement
+public class UserGroup {
+
+    @XmlAttribute(required = true)
+    protected String user;
+    @XmlAttribute(required = true)
+    protected String userid;
+    @XmlAttribute(required = true)
+    protected String group;
+    @XmlAttribute(required = true)
+    protected String groupid;
+
+    /**
+     * @return the user
+     */
+    public String getUser() {
+        return user;
+    }
+    /**
+     * @param user the user to set
+     */
+    public void setUser(String user) {
+        this.user = user;
+    }
+    /**
+     * @return the userid
+     */
+    public String getUserid() {
+        return userid;
+    }
+    /**
+     * @param userid the userid to set
+     */
+    public void setUserid(String userid) {
+        this.userid = userid;
+    }
+    /**
+     * @return the group
+     */
+    public String getGroup() {
+        return group;
+    }
+    /**
+     * @param group the group to set
+     */
+    public void setGroup(String group) {
+        this.group = group;
+    }
+    /**
+     * @return the groupid
+     */
+    public String getGroupid() {
+        return groupid;
+    }
+    /**
+     * @param groupid the groupid to set
+     */
+    public void setGroupid(String groupid) {
+        this.groupid = groupid;
+    }
+}
diff --git a/client/src/main/java/org/apache/ambari/common/rest/entities/package.html b/client/src/main/java/org/apache/ambari/common/rest/entities/package.html
new file mode 100644
index 0000000..1c56a1c
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/rest/entities/package.html
@@ -0,0 +1,43 @@
+<html>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<body>
+<p>
+This package defines the entities (messages) for Ambari's REST
+interface. The primary two are <a href="Stack.html">Stack</a>, which
+defines what needs to be deployed and how it should be configured,
+and <a href="ClusterDefinition.html">ClusterDefinition</a>, which
+binds a set of nodes to a given stack.
+</p>
+
+<p>
+These entities are defined using JAXB and thus can be represented as
+either XML or JSON with a very simple mapping. Each of the top-level
+entity classes in this package carries the
+<a href="http://relaxng.org/compact-tutorial-20030326.html">compact RelaxNG</a>
+schema for its type in its javadoc.
+</p>
+
+<p>
+ Any new classes that are added must be manually registered in
+ org.apache.ambari.controller.rest.resources.ContextProvider.
+</p>
+</body>
+</html>
diff --git a/client/src/main/java/org/apache/ambari/common/state/InvalidStateTransitonException.java b/client/src/main/java/org/apache/ambari/common/state/InvalidStateTransitonException.java
new file mode 100644
index 0000000..123c573
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/state/InvalidStateTransitonException.java
@@ -0,0 +1,41 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.ambari.common.state;
+
+
+/**
+ * Thrown when an event is not a valid transition out of the current state.
+ */
+public class InvalidStateTransitonException extends RuntimeException {
+
+  private static final long serialVersionUID = 1L;
+
+  private final Enum<?> currentState;
+  private final Enum<?> event;
+
+  public InvalidStateTransitonException(Enum<?> currentState, Enum<?> event) {
+    super("Invalid event: " + event + " at " + currentState);
+    this.currentState = currentState;
+    this.event = event;
+  }
+
+  public Enum<?> getCurrentState() {
+    return currentState;
+  }
+  
+  public Enum<?> getEvent() {
+    return event;
+  }
+
+}
diff --git a/client/src/main/java/org/apache/ambari/common/state/MultipleArcTransition.java b/client/src/main/java/org/apache/ambari/common/state/MultipleArcTransition.java
new file mode 100644
index 0000000..21bea2f
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/state/MultipleArcTransition.java
@@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.common.state;
+
+/**
+ * Hook for a transition that decides its post state at run time.
+ * The returned post state must be one of the valid post states
+ * registered in the StateMachine.
+ */
+public interface MultipleArcTransition
+        <OPERAND, EVENT, STATE extends Enum<STATE>> {
+
+  /**
+   * Transition hook.
+   * @return the postState. Post state must be one of the 
+   *                      valid post states registered in StateMachine.
+   * @param operand the entity attached to the FSM, whose internal 
+   *                state may change.
+   * @param event causal event
+   */
+  public STATE transition(OPERAND operand, EVENT event);
+
+}
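For illustration, a hook that picks its post state at run time might look like this sketch (Job, JobEvent, JobState, and isFailure() are invented names); the returned state must be in the set registered via addTransition, or doTransition throws InvalidStateTransitonException:

    // Hypothetical sketch of a multiple-arc transition hook.
    class CompletionTransition
        implements MultipleArcTransition<Job, JobEvent, JobState> {
      @Override
      public JobState transition(Job job, JobEvent event) {
        // Choose the post state based on the event's outcome.
        return event.isFailure() ? JobState.FAILED : JobState.DONE;
      }
    }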
diff --git a/common/src/main/java/org/apache/hms/common/entity/RestSource.java b/client/src/main/java/org/apache/ambari/common/state/SingleArcTransition.java
old mode 100755
new mode 100644
similarity index 64%
copy from common/src/main/java/org/apache/hms/common/entity/RestSource.java
copy to client/src/main/java/org/apache/ambari/common/state/SingleArcTransition.java
index 8a5e80c..d9eec8c
--- a/common/src/main/java/org/apache/hms/common/entity/RestSource.java
+++ b/client/src/main/java/org/apache/ambari/common/state/SingleArcTransition.java
@@ -15,21 +15,21 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
+package org.apache.ambari.common.state;
 
-package org.apache.hms.common.entity;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-import javax.xml.bind.annotation.XmlRootElement;
-import javax.xml.bind.annotation.XmlType;
 
 /**
- * Base class for HMS Rest API
- *
+ * Hook for a transition with a single, statically registered
+ * post state; the state machine moves to that post state.
  */
-@XmlRootElement
-@XmlAccessorType(XmlAccessType.FIELD) 
-@XmlType(name="", propOrder = {})
-public abstract class RestSource {
+public interface SingleArcTransition<OPERAND, EVENT> {
+  /**
+   * Transition hook.
+   * 
+   * @param operand the entity attached to the FSM, whose internal 
+   *                state may change.
+   * @param event causal event
+   */
+  public void transition(OPERAND operand, EVENT event);
 
 }
diff --git a/common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java b/client/src/main/java/org/apache/ambari/common/state/StateMachine.java
old mode 100755
new mode 100644
similarity index 68%
copy from common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java
copy to client/src/main/java/org/apache/ambari/common/state/StateMachine.java
index 5f23e2b..8eaca0c
--- a/common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java
+++ b/client/src/main/java/org/apache/ambari/common/state/StateMachine.java
@@ -15,18 +15,13 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
+package org.apache.ambari.common.state;
 
-package org.apache.hms.common.util;
-
-import java.io.PrintWriter;
-import java.io.StringWriter;
-
-public class ExceptionUtil {
-  public static String getStackTrace(Throwable t) {
-    StringWriter sw = new StringWriter();
-    PrintWriter pw = new PrintWriter(sw);
-    t.printStackTrace(pw);
-    pw.flush();
-    return sw.toString();
-  }
+public interface StateMachine
+                 <STATE extends Enum<STATE>,
+                  EVENTTYPE extends Enum<EVENTTYPE>, EVENT> {
+  public STATE getCurrentState();
+  public STATE doTransition(EVENTTYPE eventType, EVENT event)
+        throws InvalidStateTransitonException;
+  public void setCurrentState(STATE state);
 }
diff --git a/client/src/main/java/org/apache/ambari/common/state/StateMachineFactory.java b/client/src/main/java/org/apache/ambari/common/state/StateMachineFactory.java
new file mode 100644
index 0000000..3b4b9c9
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/common/state/StateMachineFactory.java
@@ -0,0 +1,448 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.common.state;
+
+import java.util.EnumMap;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Set;
+import java.util.Stack;
+
+/**
+ * State machine topology.
+ * This object is semantically immutable.  If you have a
+ * StateMachineFactory there's no operation in the API that changes
+ * its semantic properties.
+ *
+ * @param <OPERAND> The object type on which this state machine operates.
+ * @param <STATE> The state of the entity.
+ * @param <EVENTTYPE> The external eventType to be handled.
+ * @param <EVENT> The event object.
+ *
+ */
+final public class StateMachineFactory
+             <OPERAND, STATE extends Enum<STATE>,
+              EVENTTYPE extends Enum<EVENTTYPE>, EVENT> {
+
+  private final TransitionsListNode transitionsListNode;
+
+  private Map<STATE, Map<EVENTTYPE,
+    Transition<OPERAND, STATE, EVENTTYPE, EVENT>>> stateMachineTable;
+
+  private STATE defaultInitialState;
+
+  private final boolean optimized;
+
+  /**
+   * Constructor
+   *
+   * This is the only constructor in the API.
+   *
+   */
+  public StateMachineFactory(STATE defaultInitialState) {
+    this.transitionsListNode = null;
+    this.defaultInitialState = defaultInitialState;
+    this.optimized = false;
+    this.stateMachineTable = null;
+  }
+  
+  private StateMachineFactory
+      (StateMachineFactory<OPERAND, STATE, EVENTTYPE, EVENT> that,
+       ApplicableTransition t) {
+    this.defaultInitialState = that.defaultInitialState;
+    this.transitionsListNode 
+        = new TransitionsListNode(t, that.transitionsListNode);
+    this.optimized = false;
+    this.stateMachineTable = null;
+  }
+
+  private StateMachineFactory
+      (StateMachineFactory<OPERAND, STATE, EVENTTYPE, EVENT> that,
+       boolean optimized) {
+    this.defaultInitialState = that.defaultInitialState;
+    this.transitionsListNode = that.transitionsListNode;
+    this.optimized = optimized;
+    if (optimized) {
+      makeStateMachineTable();
+    } else {
+      stateMachineTable = null;
+    }
+  }
+
+  private interface ApplicableTransition
+             <OPERAND, STATE extends Enum<STATE>,
+              EVENTTYPE extends Enum<EVENTTYPE>, EVENT> {
+    void apply(StateMachineFactory<OPERAND, STATE, EVENTTYPE, EVENT> subject);
+  }
+
+  private class TransitionsListNode {
+    final ApplicableTransition transition;
+    final TransitionsListNode next;
+
+    TransitionsListNode
+        (ApplicableTransition transition, TransitionsListNode next) {
+      this.transition = transition;
+      this.next = next;
+    }
+  }
+
+  static private class ApplicableSingleOrMultipleTransition
+             <OPERAND, STATE extends Enum<STATE>,
+              EVENTTYPE extends Enum<EVENTTYPE>, EVENT>
+          implements ApplicableTransition<OPERAND, STATE, EVENTTYPE, EVENT> {
+    final STATE preState;
+    final EVENTTYPE eventType;
+    final Transition<OPERAND, STATE, EVENTTYPE, EVENT> transition;
+
+    ApplicableSingleOrMultipleTransition
+        (STATE preState, EVENTTYPE eventType,
+         Transition<OPERAND, STATE, EVENTTYPE, EVENT> transition) {
+      this.preState = preState;
+      this.eventType = eventType;
+      this.transition = transition;
+    }
+
+    @Override
+    public void apply
+             (StateMachineFactory<OPERAND, STATE, EVENTTYPE, EVENT> subject) {
+      Map<EVENTTYPE, Transition<OPERAND, STATE, EVENTTYPE, EVENT>> transitionMap
+        = subject.stateMachineTable.get(preState);
+      if (transitionMap == null) {
+        // I use HashMap here because I would expect most EVENTTYPE's to not
+        //  apply out of a particular state, so FSM sizes would be 
+        //  quadratic if I use EnumMap's here as I do at the top level.
+        transitionMap = new HashMap<EVENTTYPE,
+          Transition<OPERAND, STATE, EVENTTYPE, EVENT>>();
+        subject.stateMachineTable.put(preState, transitionMap);
+      }
+      transitionMap.put(eventType, transition);
+    }
+  }
+
+  /**
+   * @return a NEW StateMachineFactory just like {@code this} with the current
+   *          transition added as a new legal transition.  This overload
+   *          has no hook object.
+   *
+   *         Note that the returned StateMachineFactory is a distinct
+   *         object.
+   *
+   *         This method is part of the API.
+   *
+   * @param preState pre-transition state
+   * @param postState post-transition state
+   * @param eventType stimulus for the transition
+   */
+  public StateMachineFactory
+             <OPERAND, STATE, EVENTTYPE, EVENT>
+          addTransition(STATE preState, STATE postState, EVENTTYPE eventType) {
+    return addTransition(preState, postState, eventType, null);
+  }
+
+  /**
+   * @return a NEW StateMachineFactory just like {@code this} with the current
+   *          transition added as a new legal transition.  This overload
+   *          has no hook object.
+   *
+   *         Note that the returned StateMachineFactory is a distinct
+   *         object.
+   *
+   *         This method is part of the API.
+   *
+   * @param preState pre-transition state
+   * @param postState post-transition state
+   * @param eventTypes List of stimuli for the transitions
+   */
+  public StateMachineFactory<OPERAND, STATE, EVENTTYPE, EVENT> addTransition(
+      STATE preState, STATE postState, Set<EVENTTYPE> eventTypes) {
+    return addTransition(preState, postState, eventTypes, null);
+  }
+
+  /**
+   * @return a NEW StateMachineFactory just like {@code this} with the current
+   *          transition added as a new legal transition
+   *
+   *         Note that the returned StateMachineFactory is a distinct
+   *         object.
+   *
+   *         This method is part of the API.
+   *
+   * @param preState pre-transition state
+   * @param postState post-transition state
+   * @param eventTypes List of stimuli for the transitions
+   * @param hook transition hook
+   */
+  public StateMachineFactory<OPERAND, STATE, EVENTTYPE, EVENT> addTransition(
+      STATE preState, STATE postState, Set<EVENTTYPE> eventTypes,
+      SingleArcTransition<OPERAND, EVENT> hook) {
+    // Start from this factory so that an empty set of event types yields
+    // this factory unchanged rather than null.
+    StateMachineFactory<OPERAND, STATE, EVENTTYPE, EVENT> factory = this;
+    for (EVENTTYPE event : eventTypes) {
+      factory = factory.addTransition(preState, postState, event, hook);
+    }
+    return factory;
+  }
+
+  /**
+   * @return a NEW StateMachineFactory just like {@code this} with the current
+   *          transition added as a new legal transition
+   *
+   *         Note that the returned StateMachineFactory is a distinct object.
+   *
+   *         This method is part of the API.
+   *
+   * @param preState pre-transition state
+   * @param postState post-transition state
+   * @param eventType stimulus for the transition
+   * @param hook transition hook
+   */
+  public StateMachineFactory
+             <OPERAND, STATE, EVENTTYPE, EVENT>
+          addTransition(STATE preState, STATE postState,
+                        EVENTTYPE eventType,
+                        SingleArcTransition<OPERAND, EVENT> hook){
+    return new StateMachineFactory
+        (this, new ApplicableSingleOrMultipleTransition
+           (preState, eventType, new SingleInternalArc(postState, hook)));
+  }
+
+  /**
+   * @return a NEW StateMachineFactory just like {@code this} with the current
+   *          transition added as a new legal transition
+   *
+   *         Note that the returned StateMachineFactory is a distinct object.
+   *
+   *         This method is part of the API.
+   *
+   * @param preState pre-transition state
+   * @param postStates valid post-transition states
+   * @param eventType stimulus for the transition
+   * @param hook transition hook
+   */
+  public StateMachineFactory
+             <OPERAND, STATE, EVENTTYPE, EVENT>
+          addTransition(STATE preState, Set<STATE> postStates,
+                        EVENTTYPE eventType,
+                        MultipleArcTransition<OPERAND, EVENT, STATE> hook){
+    return new StateMachineFactory
+        (this,
+         new ApplicableSingleOrMultipleTransition
+           (preState, eventType, new MultipleInternalArc(postStates, hook)));
+  }
+
+  /**
+   * @return a StateMachineFactory just like {@code this}, except that the
+   *         transition table is built eagerly, so no synchronization is
+   *         needed when state machines are later made from it.
+   *
+   *         Note that the returned StateMachineFactory is a distinct object.
+   *
+   *         This method is part of the API.
+   *
+   *         The only way you could distinguish the returned
+   *         StateMachineFactory from {@code this} would be by
+   *         measuring the performance of the derived
+   *         {@code StateMachine} you can get from it.
+   *
+   * Calling this is optional.  It doesn't change the semantics of the
+   * factory; if you call it, no synchronization happens when you later
+   * use the factory.
+   */
+  public StateMachineFactory
+             <OPERAND, STATE, EVENTTYPE, EVENT>
+          installTopology() {
+    return new StateMachineFactory(this, true);
+  }
+
+  /**
+   * Effect a transition due to the effecting stimulus.
+   * @param operand the entity attached to the FSM
+   * @param oldState the current state
+   * @param eventType trigger to initiate the transition
+   * @param event causal event context
+   * @return transitioned state
+   */
+  private STATE doTransition
+           (OPERAND operand, STATE oldState, EVENTTYPE eventType, EVENT event)
+      throws InvalidStateTransitonException {
+    // We can assume that stateMachineTable is non-null because we call
+    //  maybeMakeStateMachineTable() when we build an InternalStateMachine,
+    //  and this code only gets called from inside a working InternalStateMachine.
+    Map<EVENTTYPE, Transition<OPERAND, STATE, EVENTTYPE, EVENT>> transitionMap
+      = stateMachineTable.get(oldState);
+    if (transitionMap != null) {
+      Transition<OPERAND, STATE, EVENTTYPE, EVENT> transition
+          = transitionMap.get(eventType);
+      if (transition != null) {
+        return transition.doTransition(operand, oldState, event, eventType);
+      }
+    }
+    throw new InvalidStateTransitonException(oldState, eventType);
+  }
+
+  private synchronized void maybeMakeStateMachineTable() {
+    if (stateMachineTable == null) {
+      makeStateMachineTable();
+    }
+  }
+
+  private void makeStateMachineTable() {
+    Stack<ApplicableTransition> stack = new Stack<ApplicableTransition>();
+
+    Map<STATE, Map<EVENTTYPE, Transition<OPERAND, STATE, EVENTTYPE, EVENT>>>
+      prototype = new HashMap<STATE, Map<EVENTTYPE, Transition<OPERAND, STATE, EVENTTYPE, EVENT>>>();
+
+    prototype.put(defaultInitialState, null);
+
+    // I use EnumMap here because it'll be faster and denser.  I would
+    //  expect most of the states to have at least one transition.
+    stateMachineTable
+       = new EnumMap<STATE, Map<EVENTTYPE,
+                           Transition<OPERAND, STATE, EVENTTYPE, EVENT>>>(prototype);
+
+    for (TransitionsListNode cursor = transitionsListNode;
+         cursor != null;
+         cursor = cursor.next) {
+      stack.push(cursor.transition);
+    }
+
+    while (!stack.isEmpty()) {
+      stack.pop().apply(this);
+    }
+  }
+
+  private interface Transition<OPERAND, STATE extends Enum<STATE>,
+          EVENTTYPE extends Enum<EVENTTYPE>, EVENT> {
+    STATE doTransition(OPERAND operand, STATE oldState,
+                       EVENT event, EVENTTYPE eventType);
+  }
+
+  private class SingleInternalArc
+                    implements Transition<OPERAND, STATE, EVENTTYPE, EVENT> {
+
+    private STATE postState;
+    private SingleArcTransition<OPERAND, EVENT> hook; // transition hook
+
+    SingleInternalArc(STATE postState,
+        SingleArcTransition<OPERAND, EVENT> hook) {
+      this.postState = postState;
+      this.hook = hook;
+    }
+
+    @Override
+    public STATE doTransition(OPERAND operand, STATE oldState,
+                              EVENT event, EVENTTYPE eventType) {
+      if (hook != null) {
+        hook.transition(operand, event);
+      }
+      return postState;
+    }
+  }
+
+  private class MultipleInternalArc
+              implements Transition<OPERAND, STATE, EVENTTYPE, EVENT>{
+
+    // Fields
+    private Set<STATE> validPostStates;
+    private MultipleArcTransition<OPERAND, EVENT, STATE> hook;  // transition hook
+
+    MultipleInternalArc(Set<STATE> postStates,
+                   MultipleArcTransition<OPERAND, EVENT, STATE> hook) {
+      this.validPostStates = postStates;
+      this.hook = hook;
+    }
+
+    @Override
+    public STATE doTransition(OPERAND operand, STATE oldState,
+                              EVENT event, EVENTTYPE eventType)
+        throws InvalidStateTransitonException {
+      STATE postState = hook.transition(operand, event);
+
+      if (!validPostStates.contains(postState)) {
+        throw new InvalidStateTransitonException(oldState, eventType);
+      }
+      return postState;
+    }
+  }
+
+  /**
+   * @return a {@link StateMachine} that starts in 
+   *         {@code initialState} and whose {@link Transition} s are
+   *         applied to {@code operand} .
+   *
+   *         This is part of the API.
+   *
+   * @param operand the object upon which the returned 
+   *                {@link StateMachine} will operate.
+   * @param initialState the state in which the returned 
+   *                {@link StateMachine} will start.
+   *                
+   */
+  public StateMachine<STATE, EVENTTYPE, EVENT>
+        make(OPERAND operand, STATE initialState) {
+    return new InternalStateMachine(operand, initialState);
+  }
+
+  /**
+   * @return a {@link StateMachine} that starts in the default initial
+   *          state and whose {@link Transition} s are applied to
+   *          {@code operand} . 
+   *
+   *         This is part of the API.
+   *
+   * @param operand the object upon which the returned 
+   *                {@link StateMachine} will operate.
+   *                
+   */
+  public StateMachine<STATE, EVENTTYPE, EVENT> make(OPERAND operand) {
+    return new InternalStateMachine(operand, defaultInitialState);
+  }
+
+  private class InternalStateMachine
+        implements StateMachine<STATE, EVENTTYPE, EVENT> {
+    private final OPERAND operand;
+    private STATE currentState;
+
+    InternalStateMachine(OPERAND operand, STATE initialState) {
+      this.operand = operand;
+      this.currentState = initialState;
+      if (!optimized) {
+        maybeMakeStateMachineTable();
+      }
+    }
+
+    @Override
+    public synchronized STATE getCurrentState() {
+      return currentState;
+    }
+
+    @Override
+    public synchronized STATE doTransition(EVENTTYPE eventType, EVENT event)
+         throws InvalidStateTransitonException  {
+      currentState = StateMachineFactory.this.doTransition
+          (operand, currentState, eventType, event);
+      return currentState;
+    }
+
+    @Override
+    public synchronized void setCurrentState(STATE state) {
+      currentState = state;
+    }
+  }
+}
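A minimal end-to-end sketch of the factory API (all domain names are invented for illustration): each addTransition returns a new factory, installTopology() builds the transition table once, and make() binds a StateMachine to an operand.

    // Hypothetical sketch; Job, JobEvent, JobState, JobEventType are invented.
    enum JobState { NEW, RUNNING, DONE, FAILED }
    enum JobEventType { START, FINISH }

    StateMachineFactory<Job, JobState, JobEventType, JobEvent> factory =
        new StateMachineFactory<Job, JobState, JobEventType, JobEvent>(JobState.NEW)
            .addTransition(JobState.NEW, JobState.RUNNING, JobEventType.START)
            .addTransition(JobState.RUNNING, JobState.DONE, JobEventType.FINISH,
                new SingleArcTransition<Job, JobEvent>() {
                  @Override
                  public void transition(Job job, JobEvent event) {
                    // side effects on the operand belong here
                  }
                })
            .installTopology(); // optional eager build; avoids later locking

    StateMachine<JobState, JobEventType, JobEvent> sm = factory.make(new Job());
    sm.doTransition(JobEventType.START, new JobEvent()); // NEW -> RUNNING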
diff --git a/common/src/main/java/org/apache/hms/common/util/DaemonWatcher.java b/client/src/main/java/org/apache/ambari/common/util/DaemonWatcher.java
similarity index 96%
rename from common/src/main/java/org/apache/hms/common/util/DaemonWatcher.java
rename to client/src/main/java/org/apache/ambari/common/util/DaemonWatcher.java
index 9c64589..4e4af17 100755
--- a/common/src/main/java/org/apache/hms/common/util/DaemonWatcher.java
+++ b/client/src/main/java/org/apache/ambari/common/util/DaemonWatcher.java
@@ -15,7 +15,7 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-package org.apache.hms.common.util;
+package org.apache.ambari.common.util;
 
 public class DaemonWatcher extends PidFile {
   private static DaemonWatcher instance = null;
diff --git a/common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java b/client/src/main/java/org/apache/ambari/common/util/ExceptionUtil.java
similarity index 96%
rename from common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java
rename to client/src/main/java/org/apache/ambari/common/util/ExceptionUtil.java
index 5f23e2b..0a21cd5 100755
--- a/common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java
+++ b/client/src/main/java/org/apache/ambari/common/util/ExceptionUtil.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 
-package org.apache.hms.common.util;
+package org.apache.ambari.common.util;
 
 import java.io.PrintWriter;
 import java.io.StringWriter;
diff --git a/common/src/main/java/org/apache/hms/common/util/FileUtil.java b/client/src/main/java/org/apache/ambari/common/util/FileUtil.java
similarity index 96%
rename from common/src/main/java/org/apache/hms/common/util/FileUtil.java
rename to client/src/main/java/org/apache/ambari/common/util/FileUtil.java
index b345d6b..3703d16 100755
--- a/common/src/main/java/org/apache/hms/common/util/FileUtil.java
+++ b/client/src/main/java/org/apache/ambari/common/util/FileUtil.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 
-package org.apache.hms.common.util;
+package org.apache.ambari.common.util;
 
 import java.io.File;
 
diff --git a/common/src/main/java/org/apache/hms/common/util/HostUtil.java b/client/src/main/java/org/apache/ambari/common/util/HostUtil.java
similarity index 98%
rename from common/src/main/java/org/apache/hms/common/util/HostUtil.java
rename to client/src/main/java/org/apache/ambari/common/util/HostUtil.java
index 7f9b2a2..016b788 100755
--- a/common/src/main/java/org/apache/hms/common/util/HostUtil.java
+++ b/client/src/main/java/org/apache/ambari/common/util/HostUtil.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 
-package org.apache.hms.common.util;
+package org.apache.ambari.common.util;
 
 import java.util.ArrayList;
 import java.util.List;
diff --git a/common/src/main/java/org/apache/hms/common/util/JAXBUtil.java b/client/src/main/java/org/apache/ambari/common/util/JAXBUtil.java
similarity index 80%
rename from common/src/main/java/org/apache/hms/common/util/JAXBUtil.java
rename to client/src/main/java/org/apache/ambari/common/util/JAXBUtil.java
index 98ae237..7b91d98 100755
--- a/common/src/main/java/org/apache/hms/common/util/JAXBUtil.java
+++ b/client/src/main/java/org/apache/ambari/common/util/JAXBUtil.java
@@ -16,12 +16,11 @@
  * limitations under the License.
  */
 
-package org.apache.hms.common.util;
+package org.apache.ambari.common.util;
 
 import java.io.IOException;
 import java.io.StringWriter;
 
-import org.apache.hms.common.entity.RestSource;
 import org.codehaus.jackson.JsonFactory;
 import org.codehaus.jackson.JsonGenerator;
 import org.codehaus.jackson.map.AnnotationIntrospector;
@@ -31,14 +30,16 @@
 public class JAXBUtil {
 
   private static ObjectMapper mapper = new ObjectMapper();
-  private static AnnotationIntrospector introspector = new JaxbAnnotationIntrospector();
+  private static AnnotationIntrospector introspector = 
+      new JaxbAnnotationIntrospector();
   
   public JAXBUtil() {
     mapper.getDeserializationConfig().setAnnotationIntrospector(introspector);
     mapper.getSerializationConfig().setAnnotationIntrospector(introspector);    
   }
   
-  public static byte[] write(RestSource x) throws IOException {
+  public static byte[] write(Object x) throws IOException {
     try {
       return mapper.writeValueAsBytes(x);
     } catch (Throwable e) {
@@ -46,11 +47,12 @@
     }
   }
   
-  public static <T> T read(byte[] buffer, java.lang.Class<T> c) throws IOException {
+  public static <T> T read(byte[] buffer, Class<T> c) throws IOException {
     return (T) mapper.readValue(buffer, 0, buffer.length, c);
   }
 
-  public static String print(RestSource x) throws IOException {
+  public static String print(Object x) throws IOException {
     try {
       JsonFactory jf = new JsonFactory();
       StringWriter sw = new StringWriter();
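A rough round-trip sketch for this utility. One observation, hedged: the JAXB annotation introspector is installed on the shared mapper only in the instance constructor, so constructing one JAXBUtil before using the static methods appears to be required.

    // Hypothetical sketch: round-trip a JAXB-annotated bean through JSON.
    new JAXBUtil(); // installs the JAXB annotation introspector (see note above)

    StackInformation info = new StackInformation();
    info.setName("hadoop-stack");

    byte[] json = JAXBUtil.write(info);
    StackInformation copy = JAXBUtil.read(json, StackInformation.class);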
diff --git a/common/src/main/java/org/apache/hms/common/util/PidFile.java b/client/src/main/java/org/apache/ambari/common/util/PidFile.java
similarity index 92%
rename from common/src/main/java/org/apache/hms/common/util/PidFile.java
rename to client/src/main/java/org/apache/ambari/common/util/PidFile.java
index f861930..0a5ebe3 100755
--- a/common/src/main/java/org/apache/hms/common/util/PidFile.java
+++ b/client/src/main/java/org/apache/ambari/common/util/PidFile.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 
-package org.apache.hms.common.util;
+package org.apache.ambari.common.util;
 
 
 import java.io.*;
@@ -48,9 +48,9 @@
     String pidLong = ManagementFactory.getRuntimeMXBean().getName();
     String[] items = pidLong.split("@");
     String pid = items[0];
-    String chukwaPath = System.getProperty("HMS_HOME");
+    String chukwaPath = System.getProperty("AMBARI_HOME");
     StringBuffer pidFilesb = new StringBuffer();
-    String pidDir = System.getenv("HMS_PID_DIR");
+    String pidDir = System.getenv("AMBARI_PID_DIR");
     if (pidDir == null) {
       pidDir = chukwaPath + File.separator + "var" + File.separator + "run";
     }
@@ -82,9 +82,9 @@
   }
 
   public void clean() {
-    String chukwaPath = System.getenv("HMS_HOME");
+    String chukwaPath = System.getenv("AMBARI_HOME");
     StringBuffer pidFilesb = new StringBuffer();
-    String pidDir = System.getenv("HMS_PID_DIR");
+    String pidDir = System.getenv("AMBARI_PID_DIR");
     if (pidDir == null) {
       pidDir = chukwaPath + File.separator + "var" + File.separator + "run";
     }
diff --git a/common/src/main/java/org/apache/hms/common/util/ServiceDiscovery.java b/client/src/main/java/org/apache/ambari/common/util/ServiceDiscovery.java
similarity index 99%
rename from common/src/main/java/org/apache/hms/common/util/ServiceDiscovery.java
rename to client/src/main/java/org/apache/ambari/common/util/ServiceDiscovery.java
index 72e277e..50f964b 100755
--- a/common/src/main/java/org/apache/hms/common/util/ServiceDiscovery.java
+++ b/client/src/main/java/org/apache/ambari/common/util/ServiceDiscovery.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 
-package org.apache.hms.common.util;
+package org.apache.ambari.common.util;
 
   import java.awt.BorderLayout;
   import java.awt.Color;
diff --git a/common/src/main/java/org/apache/hms/common/util/ServiceDiscoveryUtil.java b/client/src/main/java/org/apache/ambari/common/util/ServiceDiscoveryUtil.java
similarity index 98%
rename from common/src/main/java/org/apache/hms/common/util/ServiceDiscoveryUtil.java
rename to client/src/main/java/org/apache/ambari/common/util/ServiceDiscoveryUtil.java
index edc8f59..d57b509 100755
--- a/common/src/main/java/org/apache/hms/common/util/ServiceDiscoveryUtil.java
+++ b/client/src/main/java/org/apache/ambari/common/util/ServiceDiscoveryUtil.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 
-package org.apache.hms.common.util;
+package org.apache.ambari.common.util;
 
 import java.io.IOException;
 import java.net.InetAddress;
diff --git a/client/src/main/java/org/apache/ambari/event/AbstractEvent.java b/client/src/main/java/org/apache/ambari/event/AbstractEvent.java
new file mode 100644
index 0000000..ca89b8a
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/event/AbstractEvent.java
@@ -0,0 +1,57 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.ambari.event;
+
+/**
+ * Parent class of all events; concrete events extend this class.
+ */
+public abstract class AbstractEvent<TYPE extends Enum<TYPE>> 
+    implements Event<TYPE> {
+
+  private final TYPE type;
+  private final long timestamp;
+
+  // use this if you DON'T care about the timestamp
+  public AbstractEvent(TYPE type) {
+    this.type = type;
+    // We're not generating a real timestamp here.  It's too expensive.
+    timestamp = -1L;
+  }
+
+  // use this if you care about the timestamp
+  public AbstractEvent(TYPE type, long timestamp) {
+    this.type = type;
+    this.timestamp = timestamp;
+  }
+
+  @Override
+  public long getTimestamp() {
+    return timestamp;
+  }
+
+  @Override
+  public TYPE getType() {
+    return type;
+  }
+
+  @Override
+  public String toString() {
+    return "EventType: " + getType();
+  }
+}
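A concrete event is typically a small subclass paired with an enum of event types; this sketch uses invented names:

    // Hypothetical sketch: a concrete event built on AbstractEvent.
    enum NodeEventType { HEARTBEAT, LOST }

    class NodeEvent extends AbstractEvent<NodeEventType> {
      private final String hostname;

      NodeEvent(NodeEventType type, String hostname) {
        super(type); // timestamp stays -1; use the two-arg ctor to record one
        this.hostname = hostname;
      }

      String getHostname() {
        return hostname;
      }
    }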
diff --git a/client/src/main/java/org/apache/ambari/event/AsyncDispatcher.java b/client/src/main/java/org/apache/ambari/event/AsyncDispatcher.java
new file mode 100644
index 0000000..eb19876
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/event/AsyncDispatcher.java
@@ -0,0 +1,189 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.event;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.LinkedBlockingQueue;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+/**
+ * Dispatches events in a separate thread. Currently a single thread does
+ * all of the dispatching. Potentially there could be one channel per event
+ * type class, with a thread pool dispatching the events.
+ */
+@SuppressWarnings("rawtypes")
+public class AsyncDispatcher implements Dispatcher {
+
+  private static final Log LOG = LogFactory.getLog(AsyncDispatcher.class);
+
+  private final BlockingQueue<Event> eventQueue;
+  private volatile boolean stopped = false;
+
+  private Thread eventHandlingThread;
+  protected final Map<Class<? extends Enum>, EventHandler> eventDispatchers;
+
+  public AsyncDispatcher() {
+    this(new HashMap<Class<? extends Enum>, EventHandler>(),
+         new LinkedBlockingQueue<Event>());
+  }
+
+  AsyncDispatcher(
+      Map<Class<? extends Enum>, EventHandler> eventDispatchers,
+      BlockingQueue<Event> eventQueue) {
+    this.eventQueue = eventQueue;
+    this.eventDispatchers = eventDispatchers;
+  }
+
+  Runnable createThread() {
+    return new Runnable() {
+      @Override
+      public void run() {
+        while (!stopped && !Thread.currentThread().isInterrupted()) {
+          Event event;
+          try {
+            event = eventQueue.take();
+          } catch(InterruptedException ie) {
+            LOG.info("AsyncDispatcher thread interrupted", ie);
+            return;
+          }
+          if (event != null) {
+            dispatch(event);
+          }
+        }
+      }
+    };
+  }
+
+  @Override
+  public void start() {
+    eventHandlingThread = new Thread(createThread());
+    eventHandlingThread.start();
+  }
+
+  public void stop() {
+    stopped = true;
+    eventHandlingThread.interrupt();
+    try {
+      eventHandlingThread.join();
+    } catch (InterruptedException ie) {
+      LOG.debug("Interruped Exception while stopping", ie);
+    }
+
+  }
+
+  @SuppressWarnings("unchecked")
+  protected void dispatch(Event event) {
+    // all events flow through this method
+    LOG.debug("Dispatching the event " + event.getClass().getName() + "."
+        + event.toString());
+
+    Class<? extends Enum> type = event.getType().getDeclaringClass();
+
+    try{
+      eventDispatchers.get(type).handle(event);
+    }
+    catch (Throwable t) {
+      //TODO Maybe log the state of the queue
+      LOG.fatal("Error in dispatcher thread. Exiting..", t);
+      System.exit(-1);
+    }
+  }
+
+  @Override
+  @SuppressWarnings("rawtypes")
+  public void register(Class<? extends Enum> eventType,
+      EventHandler handler) {
+    /* check to see if we have a listener registered */
+    @SuppressWarnings("unchecked")
+    EventHandler<Event> registeredHandler =
+        (EventHandler<Event>) eventDispatchers.get(eventType);
+    LOG.info("Registering " + eventType + " for " + handler.getClass());
+    if (registeredHandler == null) {
+      eventDispatchers.put(eventType, handler);
+    } else if (!(registeredHandler instanceof MultiListenerHandler)){
+      /* for multiple listeners of an event add the multiple listener handler */
+      MultiListenerHandler multiHandler = new MultiListenerHandler();
+      multiHandler.addHandler(registeredHandler);
+      multiHandler.addHandler(handler);
+      eventDispatchers.put(eventType, multiHandler);
+    } else {
+      /* already a multilistener, just add to it */
+      MultiListenerHandler multiHandler =
+          (MultiListenerHandler) registeredHandler;
+      multiHandler.addHandler(handler);
+    }
+  }
+
+  @Override
+  public EventHandler getEventHandler() {
+    return new GenericEventHandler();
+  }
+
+  class GenericEventHandler implements EventHandler<Event> {
+    public void handle(Event event) {
+      /* all this method does is enqueue all the events onto the queue */
+      int qSize = eventQueue.size();
+      if (qSize != 0 && qSize % 1000 == 0) {
+        LOG.info("Size of event-queue is " + qSize);
+      }
+      int remCapacity = eventQueue.remainingCapacity();
+      if (remCapacity < 1000) {
+        LOG.info("Very low remaining capacity in the event-queue: "
+            + remCapacity);
+      }
+      try {
+        eventQueue.put(event);
+      } catch (InterruptedException e) {
+        throw new RuntimeException(e);
+      }
+    }
+  }
+
+  /**
+   * Multiplexes an event, sending it to every handler that registered
+   * interest in the event type.
+   */
+  @SuppressWarnings("rawtypes")
+  static class MultiListenerHandler implements EventHandler<Event> {
+    List<EventHandler<Event>> listOfHandlers;
+
+    public MultiListenerHandler() {
+      listOfHandlers = new ArrayList<EventHandler<Event>>();
+    }
+
+    @Override
+    public void handle(Event event) {
+      for (EventHandler<Event> handler : listOfHandlers) {
+        handler.handle(event);
+      }
+    }
+
+    void addHandler(EventHandler<Event> handler) {
+      listOfHandlers.add(handler);
+    }
+
+  }
+}
diff --git a/client/src/main/java/org/apache/ambari/event/Dispatcher.java b/client/src/main/java/org/apache/ambari/event/Dispatcher.java
new file mode 100644
index 0000000..5bb4cf0
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/event/Dispatcher.java
@@ -0,0 +1,43 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.ambari.event;
+
+/**
+ * Event Dispatcher interface. It dispatches events to registered 
+ * event handlers based on event types.
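+ *
+ * <p>Illustrative usage; JobEvent, JobEventType and JobEventHandler are
+ * hypothetical client-side types:
+ * <pre>
+ *   Dispatcher dispatcher = new AsyncDispatcher(...);
+ *   dispatcher.register(JobEventType.class, new JobEventHandler());
+ *   dispatcher.start();
+ *   dispatcher.getEventHandler().handle(new JobEvent(...));
+ * </pre>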
+ * 
+ */
+public interface Dispatcher {
+
+  EventHandler getEventHandler();
+
+  void register(Class<? extends Enum> eventType, EventHandler handler);
+  
+  void start();
+
+}
diff --git a/client/src/main/java/org/apache/ambari/event/Event.java b/client/src/main/java/org/apache/ambari/event/Event.java
new file mode 100644
index 0000000..cdbd9b0
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/event/Event.java
@@ -0,0 +1,33 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.ambari.event;
+
+/**
+ * Interface defining the events API.
+ *
+ */
+public interface Event<TYPE extends Enum<TYPE>> {
+
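+  /** @return the enum constant identifying the type of this event */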
+  TYPE getType();
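+
+  /** @return the creation timestamp of the event, in milliseconds */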
+  long getTimestamp();
+  String toString();
+}
diff --git a/client/src/main/java/org/apache/ambari/event/EventHandler.java b/client/src/main/java/org/apache/ambari/event/EventHandler.java
new file mode 100644
index 0000000..24a34f0
--- /dev/null
+++ b/client/src/main/java/org/apache/ambari/event/EventHandler.java
@@ -0,0 +1,33 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.ambari.event;
+
+/**
+ * Interface for handling events of type T
+ *
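+ * Implementations are registered with a {@link Dispatcher} and are
+ * invoked from the dispatcher's event-handling thread.
+ *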
+ * @param <T> the event type handled by this handler
+ */
+public interface EventHandler<T extends Event> {
+
+  void handle(T event);
+
+}
diff --git a/client/src/main/java/org/apache/hms/client/Client.java b/client/src/main/java/org/apache/hms/client/Client.java
deleted file mode 100755
index e8d789f..0000000
--- a/client/src/main/java/org/apache/hms/client/Client.java
+++ /dev/null
@@ -1,369 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.client;
-
-import java.io.IOException;
-import java.net.URL;
-
-import javax.activity.InvalidActivityException;
-import org.apache.commons.cli.BasicParser;
-import org.apache.commons.cli.CommandLine;
-import org.apache.commons.cli.HelpFormatter;
-import org.apache.commons.cli.Option;
-import org.apache.commons.cli.OptionBuilder;
-import org.apache.commons.cli.Options;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hms.common.entity.Response;
-import org.apache.hms.common.entity.cluster.MachineState;
-import org.apache.hms.common.entity.command.CommandStatus;
-import org.apache.hms.common.entity.command.CreateClusterCommand;
-import org.apache.hms.common.entity.command.DeleteClusterCommand;
-import org.apache.hms.common.entity.command.UpgradeClusterCommand;
-import org.apache.hms.common.entity.manifest.ClusterManifest;
-import org.apache.hms.common.entity.manifest.ConfigManifest;
-import org.apache.hms.common.entity.manifest.NodesManifest;
-import org.apache.hms.common.entity.manifest.SoftwareManifest;
-import org.apache.hms.common.util.ExceptionUtil;
-import org.apache.hms.common.util.JAXBUtil;
-
-import com.sun.jersey.api.Responses;
-import com.sun.jersey.api.client.UniformInterfaceException;
-
-public class Client {
-  private static Log log = LogFactory.getLog(Client.class);
-  private static Executor clientRunner = Executor.getInstance();
- 
-  @SuppressWarnings("static-access")
-  private final static Option help = OptionBuilder.withLongOpt("help").withDescription("Output usage menu and quit").create("h");
-
-  @SuppressWarnings("static-access")
-  private final static Option createCluster = OptionBuilder.withLongOpt("create-cluster").withArgName("cluster-name")
-    .hasArg().withDescription("Create a cluster")
-    .create();
-
-  @SuppressWarnings("static-access")
-  private final static Option deleteCluster = OptionBuilder.withLongOpt("delete-cluster").withArgName("cluster-name")
-    .hasArg().withDescription("Deletea cluster")
-    .create();
-  
-  @SuppressWarnings("static-access")
-  private final static Option clusterStatus = OptionBuilder.withLongOpt("cluster-status").withArgName("cluster-name")
-    .hasArg().withDescription("Check cluster status")
-    .create("cs");
-
-  @SuppressWarnings("static-access")
-  private final static Option upgradeCluster = OptionBuilder.withLongOpt("upgrade-cluster").withArgName("cluster-name")
-    .hasArg().withDescription("Upgrade a cluster")
-    .create();
-
-  @SuppressWarnings("static-access")
-  private final static Option nodeStatus = OptionBuilder.withLongOpt( "node-status" ).withArgName( "nodepath" )
-    .hasArg().withDescription("check node status")
-    .create("ns");
-
-  @SuppressWarnings("static-access")
-  private final static Option cmdStatus = OptionBuilder.withArgName("command-id")
-    .hasArg().withDescription("Check command status")
-    .create("q");
-  
-  @SuppressWarnings("static-access")
-  private final static Option softwareManifest = OptionBuilder.withLongOpt("software").withArgName("software-url")
-    .hasArg().withDescription("Location of software manifest")
-    .create();
-  
-  @SuppressWarnings("static-access")
-  private final static Option nodesManifest = OptionBuilder.withLongOpt("nodes").withArgName("nodes-url")
-    .hasArg().withDescription("Location of nodes manifest")
-    .create();
-  
-  @SuppressWarnings("static-access")
-  private final static Option configManifest = OptionBuilder.withLongOpt("config").withArgName("config-url")
-    .hasArg().withDescription("Location of config manifest")
-    .create();
-
-  @SuppressWarnings("static-access")
-  private final static Option dryRun = OptionBuilder.withLongOpt( "dryrun" )
-    .withDescription( "Test command only" ).create();
-
-  @SuppressWarnings("static-access")
-  private final static Option verbose = OptionBuilder.withLongOpt( "verbose" )
-    .withDescription( "Print verbose information" ).create("v");
-
-  private static Options opt = setupOptions();
-
-  public static Options setupOptions() {
-    if(opt==null) {
-      opt = new Options();
-    }
-    opt.addOption(help);
-    opt.addOption(nodeStatus);
-    opt.addOption(cmdStatus);
-    opt.addOption(createCluster);
-    opt.addOption(deleteCluster);
-    opt.addOption(upgradeCluster);
-    opt.addOption(clusterStatus);
-    opt.addOption(nodesManifest);
-    opt.addOption(configManifest);
-    opt.addOption(softwareManifest);
-    opt.addOption(verbose);
-
-    opt.addOption(dryRun);
-    return opt;
-  }
-
-  /**
-   * Construct a create cluster command
-   * @param clusterName - Cluster name
-   * @param nodes - Nodes manifest is a url to a XML file which describes the server compositions of the cluster
-   * @param software - Software manifest is a url to a XML file which describes the software compositions of the cluster
-   * @param config - Configuration manifest is a url to a XML file which describes the configuration steps for the cluster
-   * @return
-   * @throws IOException
-   */
-  public Response createCluster(String clusterName, URL nodes, URL software, URL config) throws IOException {
-    ClusterManifest cluster = new ClusterManifest();
-    cluster.setClusterName(clusterName);
-    NodesManifest nodesM = new NodesManifest();
-    nodesM.setUrl(nodes);
-    cluster.setNodes(nodesM);
-    SoftwareManifest softwareM = new SoftwareManifest();
-    softwareM.setUrl(software);
-    cluster.setSoftware(softwareM);
-    ConfigManifest configM = new ConfigManifest();
-    configM.setUrl(config);
-    cluster.setConfig(configM);
-    return clientRunner.sendToController(new CreateClusterCommand(cluster));
-  }
-
-  /**
-   * Construct a upgrade cluster command
-   * @param clusterName - Cluster name
-   * @param nodes - Nodes manifest is a url to a XML file which describes the server compositions of the cluster
-   * @param software - Software manifest is a url to a XML file which describes the software compositions of the cluster
-   * @param config - Configuration manifest is a url to a XML file which describes the configuration steps for the cluster
-   * @return
-   * @throws IOException
-   */
-  public Response upgradeCluster(String clusterName, URL nodes, URL software, URL config) throws IOException {
-    ClusterManifest cluster = new ClusterManifest();
-    cluster.setClusterName(clusterName);
-    NodesManifest nodesM = new NodesManifest();
-    nodesM.setUrl(nodes);
-    cluster.setNodes(nodesM);
-    SoftwareManifest softwareM = new SoftwareManifest();
-    softwareM.setUrl(software);
-    cluster.setSoftware(softwareM);
-    ConfigManifest configM = new ConfigManifest();
-    configM.setUrl(config);
-    cluster.setConfig(configM);
-    return clientRunner.sendToController(new UpgradeClusterCommand(cluster));
-  }
-
-  /**
-   * Construct a delete cluster command
-   * @param clusterName - Cluster name
-   * @param config - Configuration manifest is a url to a XML file which describes the decommission steps for the cluster
-   * @return
-   * @throws IOException
-   */
-  public Response deleteCluster(String clusterName, URL config) throws IOException {
-    ClusterManifest cluster = new ClusterManifest();
-    ConfigManifest configM = new ConfigManifest();
-    configM.setUrl(config);
-    cluster.setConfig(configM);
-    return clientRunner.sendToController(new DeleteClusterCommand(clusterName, cluster));
-  }
-  
-  /**
-   * Query command status
-   * @param id - Command ID
-   * @return
-   * @throws IOException
-   */
-  public CommandStatus queryCommandStatus(String id) throws IOException {
-    return clientRunner.queryController(id);
-  }
-  
-  /**
-   * Parse command line arguments and construct HMS command for HMS Client Executor class
-   * @param args
-   */
-  public void run(String[] args) {
-    BasicParser parser = new BasicParser();
-    try {
-      CommandLine cl = parser.parse(opt, args);
-      /* Dry run */
-      boolean dryRun = false;
-      if ( cl.hasOption("t") ) {
-        dryRun = true;
-      }
-      
-    if ( cl.hasOption("q")) {
-        String cmdid = cl.getOptionValue("q");
-        try {
-          CommandStatus cs = queryCommandStatus(cmdid);
-          if( cl.hasOption("v")) {
-            System.out.println(JAXBUtil.print(cs));
-          } else {
-            System.out.println("Command ID: "+cmdid);
-            System.out.println("Status: "+cs.getStatus());
-            System.out.println("Total actions: "+cs.getTotalActions());
-            System.out.println("Completed actions: "+cs.getCompletedActions());
-          }
-        } catch(UniformInterfaceException e) {
-          if(e.getResponse().getStatus()==Responses.NOT_FOUND) {
-            System.out.println("Command ID:"+cmdid+" does not exist.");
-          } else {
-            System.out.println("Unknown error occurred, check stack trace.");
-            System.out.println(ExceptionUtil.getStackTrace(e));            
-          }
-        }
-      } else if ( cl.hasOption("delete-command")) {
-        // TODO: Remove command from the system
-        String cmdId = cl.getOptionValue("delete-command");
-        if (cmdId == null) {
-          throw new RuntimeException("Command ID must be specified for Delete operation");
-        }
-        // System.out.println(clientRunner.sendToController(new DeleteCommand(cmdId)));        
-      } else if ( cl.hasOption("delete-cluster") ) {
-        /* delete a cluster */
-        String clusterName = cl.getOptionValue("delete-cluster");
-        if (clusterName == null) {
-          throw new RuntimeException("cluster name must be specified for DELETE operation");
-        }
-        URL config = new URL(cl.getOptionValue("config-manifest"));
-        if (config == null) {
-          throw new RuntimeException("config manifest must be specified for DELETE operation");
-        }
-        try {
-          Response response = deleteCluster(clusterName, config);
-          showResponse(response, cl.hasOption("v"));
-        } catch(Throwable e) {
-          showErrors(e);
-        }
-      } else if ( cl.hasOption("create-cluster") ) {
-        /* create a cluster */
-        String clusterName = cl.getOptionValue("create-cluster");
-        if (clusterName == null) {
-          throw new RuntimeException("cluster name must be specified for CREATE operation");
-        }
-        URL nodes = new URL(cl.getOptionValue("nodes"));
-        if (nodes == null) {
-          throw new RuntimeException("nodes manifest must be specified for CREATE operation");
-        }
-        URL software = new URL(cl.getOptionValue("software"));
-        if (software == null) {
-          throw new RuntimeException("software manifest must be specified for CREATE operation");
-        }
-        URL config = new URL(cl.getOptionValue("config"));
-        if (config == null) {
-          throw new RuntimeException("config manifest must be specified for CREATE operation");
-        }
-        Response response = createCluster(clusterName, nodes, software, config);
-        showResponse(response, cl.hasOption("v"));        
-      } else if ( cl.hasOption("upgrade-cluster") ) {
-        /* upgrade a cluster */
-        String clusterName = cl.getOptionValue("upgrade-cluster");
-        if (clusterName == null) {
-          throw new RuntimeException("cluster name must be specified for CREATE operation");
-        }
-        URL nodes = new URL(cl.getOptionValue("nodes"));
-        if (nodes == null) {
-          throw new RuntimeException("nodes manifest must be specified for CREATE operation");
-        }
-        URL software = new URL(cl.getOptionValue("software"));
-        if (software == null) {
-          throw new RuntimeException("software manifest must be specified for CREATE operation");
-        }
-        URL config = new URL(cl.getOptionValue("config"));
-        if (config == null) {
-          throw new RuntimeException("config manifest must be specified for CREATE operation");
-        }
-        Response response = upgradeCluster(clusterName, nodes, software, config);
-        showResponse(response, cl.hasOption("v"));
-      } else if ( cl.hasOption("cluster-status") ) {
-        /* check cluster status */
-        String clusterId = cl.getOptionValue("cluster-status");
-        if (clusterId == null) {
-          throw new RuntimeException("Cluster path must be specified for cluster-status operation");
-        }
-        ClusterManifest cm = clientRunner.checkClusterStatus(clusterId);
-        System.out.println(JAXBUtil.print(cm));
-      } else if ( cl.hasOption("node-status") ) {
-        /* check node status */
-        String nodepath = cl.getOptionValue("node-status");
-        if (nodepath == null) {
-          throw new RuntimeException("nodePath must be specified for nodestatus operation");
-        }
-        MachineState ms = clientRunner.checkNodeStatus(nodepath);
-        System.out.println(JAXBUtil.print(ms));
-      } else if ( cl.hasOption("help")) {
-        usage();
-      } else {
-        throw new InvalidActivityException("Invalid arguement.");
-      }
-    } catch (InvalidActivityException e) {
-      usage();
-      System.out.println("Argument Error: " + e.getMessage());
-    } catch (Throwable e) {
-      showErrors(e);
-    }
-  }
-  
-  /**
-   * Generic utility to handle error feedback for HMS command line client.
-   * @param e
-   */
-  private void showErrors(Throwable e) {
-    log.error(ExceptionUtil.getStackTrace(e));
-    System.out.println("Error in issuing command.");
-    System.out.println(ExceptionUtil.getStackTrace(e));    
-  }
-
-  /**
-   * Generic utility method to display the response of HMS Controller Rest API.
-   * @param response - Response object from HMS Controller Rest API.
-   * @param verbose - Display response verbosely.
-   * @throws IOException
-   */
-  private void showResponse(Response response, boolean verbose) throws IOException {
-    if(response.getCode()==0) {
-      System.out.println("Command has been queued.  Command ID: "+response.getOutput());
-    }
-    if(verbose) {
-      System.out.println("Verbose Output:");
-      System.out.println(JAXBUtil.print(response));
-    }    
-  }
-
-  /**
-   * Display usage of HMS command line client
-   */
-  public static void usage() {
-    HelpFormatter f = new HelpFormatter();
-    f.printHelp("hms client", opt);
-  }
-  
-  public static void main(String[] args) {
-    Client c = new Client();
-    c.run(args);
-  }
-
-}
diff --git a/client/src/main/java/org/apache/hms/client/Executor.java b/client/src/main/java/org/apache/hms/client/Executor.java
deleted file mode 100755
index bbaa75f..0000000
--- a/client/src/main/java/org/apache/hms/client/Executor.java
+++ /dev/null
@@ -1,163 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.client;
-
-import java.io.BufferedWriter;
-import java.io.FileWriter;
-import java.io.IOException;
-
-import javax.ws.rs.WebApplicationException;
-import javax.ws.rs.core.MediaType;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hms.common.entity.cluster.MachineState;
-import org.apache.hms.common.entity.command.Command;
-import org.apache.hms.common.entity.command.CommandStatus;
-import org.apache.hms.common.entity.command.Command.CmdType;
-import org.apache.hms.common.entity.manifest.ClusterManifest;
-import org.apache.hms.common.entity.Response;
-import org.apache.hms.common.util.ExceptionUtil;
-import org.apache.hms.common.util.JAXBUtil;
-
-
-import com.sun.jersey.api.client.Client;
-import com.sun.jersey.api.client.WebResource;
-
-public class Executor {
-  private static Log LOG = LogFactory.getLog(Executor.class);
-
-  private static Executor instance;
-  private static String CONTROLLER = "localhost:4080/v1";
-  
-  public Executor() {
-  }
-  
-  /**
-   * Executor manages the Rest API communication between HMS command line client and HMS Controller 
-   * @return
-   */
-  public static Executor getInstance() {
-    if(instance == null) {
-      instance = new Executor();
-    }
-    return instance;
-  }
-  
-  /**
-   * Generic method to call HMS Controller Rest API for issuing commands.
-   * @param cmd - Command Object
-   * @return
-   * @throws IOException
-   */
-  public Response sendToController(Command cmd) throws IOException {
-    try {
-      StringBuilder url = new StringBuilder();
-      url.append("http://");
-      url.append(CONTROLLER);
-      url.append("/controller");
-      Client wsClient = Client.create();
-      WebResource webResource = wsClient.resource(url.toString());
-      Response result;
-      if(cmd instanceof org.apache.hms.common.entity.command.CreateClusterCommand) {
-        result = webResource.path("create/cluster").type(MediaType.APPLICATION_JSON_TYPE).post(Response.class, cmd);        
-      } else if(cmd instanceof org.apache.hms.common.entity.command.UpgradeClusterCommand) {
-        result = webResource.path("upgrade/cluster").type(MediaType.APPLICATION_JSON_TYPE).post(Response.class, cmd);        
-      } else if(cmd instanceof org.apache.hms.common.entity.command.DeleteClusterCommand) {
-        result = webResource.path("delete/cluster").type(MediaType.APPLICATION_JSON_TYPE).post(Response.class, cmd); 
-      } else if (cmd instanceof org.apache.hms.common.entity.command.DeleteCommand) {
-        webResource.path("delete/command").path(cmd.getId()).type(MediaType.APPLICATION_JSON_TYPE).delete(); 
-        result = new Response();
-        result.setCode(0);
-        result.setOutput(cmd.getId()+" command deleted.");
-      } else {
-        result = webResource.type("application/json").get(Response.class);         
-      }
-      return result;
-    } catch(Exception e) {
-      LOG.error(ExceptionUtil.getStackTrace(e));
-      throw new IOException(e);
-    }
-  }
-  
-  /**
-   * Call HMS Controller Rest API to query command status.
-   * @param id - Command ID
-   * @return
-   * @throws IOException
-   * @throws WebApplicationException
-   */
-  public CommandStatus queryController(String id) throws IOException, WebApplicationException {
-    StringBuilder url = new StringBuilder();
-    url.append("http://");
-    url.append(CONTROLLER);
-    url.append("/command/status/");
-    url.append(id);
-    Client wsClient = Client.create();
-    WebResource webResource = wsClient.resource(url.toString());
-    CommandStatus result = webResource.type("application/json").get(CommandStatus.class);         
-    return result;
-  }
-
-  /**
-   * Call HMS Controller Rest API to query cluster status.
-   * @param clusterId - Cluster ID
-   * @return
-   * @throws IOException
-   */
-  public ClusterManifest checkClusterStatus(String clusterId) throws IOException {
-    try {
-      StringBuilder url = new StringBuilder();
-      url.append("http://");
-      url.append(CONTROLLER);
-      url.append("/cluster/status/");
-      url.append(clusterId);
-      Client wsClient = Client.create();
-      WebResource webResource = wsClient.resource(url.toString());
-      ClusterManifest result = webResource.type("application/json").get(ClusterManifest.class);         
-      return result;
-    } catch(Exception e) {
-      LOG.error(ExceptionUtil.getStackTrace(e));
-      throw new IOException(e);
-    }    
-  }
-  
-  /**
-   * Call HMS Controller Rest API to query node status.
-   * @param nodeId - Full path to the node in ZooKeeper
-   * @return
-   * @throws IOException
-   */
-  public MachineState checkNodeStatus(String nodeId) throws IOException {
-    try {
-      StringBuilder url = new StringBuilder();
-      url.append("http://");
-      url.append(CONTROLLER);
-      url.append("/cluster/node/status");
-      Client wsClient = Client.create();
-      WebResource webResource = wsClient.resource(url.toString()).queryParam("node", nodeId);
-      MachineState result = webResource.type("application/json").get(MachineState.class);         
-      return result;
-    } catch(Exception e) {
-      LOG.error(ExceptionUtil.getStackTrace(e));
-      throw new IOException(e);
-    }    
-  }
-
-}
diff --git a/client/src/main/resources/org/apache/ambari/common/rest/entities/jaxb.index b/client/src/main/resources/org/apache/ambari/common/rest/entities/jaxb.index
new file mode 100644
index 0000000..205e6da
--- /dev/null
+++ b/client/src/main/resources/org/apache/ambari/common/rest/entities/jaxb.index
@@ -0,0 +1,2 @@
+ClusterDefinition
+Stack
diff --git a/client/src/packages/deb/hms-client.control/conffile b/client/src/packages/deb/hms-client.control/conffile
index e69de29..ae1e83e 100644
--- a/client/src/packages/deb/hms-client.control/conffile
+++ b/client/src/packages/deb/hms-client.control/conffile
@@ -0,0 +1,14 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/client/src/packages/deb/hms-client.control/control b/client/src/packages/deb/hms-client.control/control
index ee7d6bf..881e7bc 100644
--- a/client/src/packages/deb/hms-client.control/control
+++ b/client/src/packages/deb/hms-client.control/control
@@ -1,9 +1,23 @@
-Package: hms-client
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+Package: ambari-client
 Version: @version@
 Section: misc
 Priority: optional
 Architecture: all
-Depends: openjdk-6-jre-headless
-Maintainer: Apache Software Foundation <hms-dev@incubator.apache.org>
-Description: HMS Client
+Maintainer: Apache Software Foundation <ambari-dev@incubator.apache.org>
+Description: Ambari Client
 Distribution: development
diff --git a/client/src/packages/deb/hms-client.control/postinst b/client/src/packages/deb/hms-client.control/postinst
index b3c6127..891dfa5 100755
--- a/client/src/packages/deb/hms-client.control/postinst
+++ b/client/src/packages/deb/hms-client.control/postinst
@@ -15,10 +15,10 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-bash /usr/share/hbase/sbin/update-hms-client-env.sh \
+bash /usr/share/ambari/sbin/update-ambari-client-env.sh \
   --prefix=/usr \
   --bin-dir=/usr/bin \
-  --conf-dir=/etc/hms \
-  --log-dir=/var/log/hms \
-  --pid-dir=/var/run/hms
+  --conf-dir=/etc/ambari \
+  --log-dir=/var/log/ambari \
+  --pid-dir=/var/run/ambari
 
diff --git a/client/src/packages/deb/hms-client.control/prerm b/client/src/packages/deb/hms-client.control/prerm
index 77b54cd..85a8c3a 100755
--- a/client/src/packages/deb/hms-client.control/prerm
+++ b/client/src/packages/deb/hms-client.control/prerm
@@ -15,11 +15,11 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-bash /usr/share/hbase/sbin/update-hms-client-env.sh \
+bash /usr/share/ambari/sbin/update-ambari-client-env.sh \
   --prefix=/usr \
   --bin-dir=/usr/bin \
-  --conf-dir=/etc/hms \
-  --log-dir=/var/log/hms \
-  --pid-dir=/var/run/hms \
+  --conf-dir=/etc/ambari \
+  --log-dir=/var/log/ambari \
+  --pid-dir=/var/run/ambari \
   --uninstal
 
diff --git a/client/src/packages/rpm/spec/hms-client.spec b/client/src/packages/rpm/spec/hms-client.spec
index 68b47e7..a4ebb4a 100644
--- a/client/src/packages/rpm/spec/hms-client.spec
+++ b/client/src/packages/rpm/spec/hms-client.spec
@@ -17,7 +17,7 @@
 # RPM Spec file for HBase version @version@
 #
 
-%define name         hms-client
+%define name         ambari-client
 %define version      @version@
 %define release      @package.release@
 
@@ -35,7 +35,7 @@
 %define _man_dir     %{_prefix}/man
 %define _pid_dir     @package.pid.dir@
 %define _sbin_dir    %{_prefix}/sbin
-%define _share_dir   %{_prefix}/share/hms
+%define _share_dir   %{_prefix}/share/ambari
 %define _src_dir     %{_prefix}/src
 %define _var_dir     %{_prefix}/var/lib
 
@@ -44,9 +44,9 @@
 %define _final_name @final.name@
 %define debug_package %{nil}
 
-Summary: Hadoop Management System Client
+Summary: Ambari Client
 License: Apache License, Version 2.0
-URL: http://incubator.apache.org/hms
+URL: http://incubator.apache.org/ambari
 Vendor: Apache Software Foundation
 Group: Development/Libraries
 Name: %{name}
@@ -60,10 +60,10 @@
 Buildroot: %{_build_dir}
 Requires: sh-utils, textutils, /usr/sbin/useradd, /usr/sbin/usermod, /sbin/chkconfig, /sbin/service, jdk >= 1.6, hadoop
 AutoReqProv: no
-Provides: hms-client
+Provides: ambari-client
 
 %description
-Hadoop Management System Agent manage software installation and configuration for Hadoop software stack.
+Ambari command line interface.
 
 %prep
 %setup -n %{_final_name}
@@ -98,14 +98,14 @@
 mkdir -p ${RPM_BUILD_DIR}%{_share_dir}
 mkdir -p ${RPM_BUILD_DIR}%{_src_dir}
 
-cp ${RPM_BUILD_DIR}/%{_final_name}/src/packages/update-hms-client-env.sh ${RPM_BUILD_DIR}/%{_final_name}/sbin/update-hms-client-env.sh
+cp ${RPM_BUILD_DIR}/%{_final_name}/src/packages/update-ambari-client-env.sh ${RPM_BUILD_DIR}/%{_final_name}/sbin/update-ambari-client-env.sh
 chmod 0755 ${RPM_BUILD_DIR}/%{_final_name}/sbin/*
 mv -f ${RPM_BUILD_DIR}/%{_final_name}/* ${RPM_BUILD_DIR}%{_share_dir}
 
 rm -rf ${RPM_BUILD_DIR}/%{_final_name}
 
 %preun
-${RPM_INSTALL_PREFIX0}/share/hms/sbin/update-hms-client-env.sh \
+${RPM_INSTALL_PREFIX0}/share/ambari/sbin/update-ambari-client-env.sh \
        --prefix=${RPM_INSTALL_PREFIX0} \
        --bin-dir=${RPM_INSTALL_PREFIX0}/bin \
        --conf-dir=${RPM_INSTALL_PREFIX1} \
@@ -116,7 +116,7 @@
 %pre
 
 %post
-${RPM_INSTALL_PREFIX0}/share/hms/sbin/update-hms-client-env.sh \
+${RPM_INSTALL_PREFIX0}/share/ambari/sbin/update-ambari-client-env.sh \
        --prefix=${RPM_INSTALL_PREFIX0} \
        --bin-dir=${RPM_INSTALL_PREFIX0}/bin \
        --conf-dir=${RPM_INSTALL_PREFIX1} \
diff --git a/client/src/packages/tarball/all.xml b/client/src/packages/tarball/all.xml
index b67d062..dec391f 100644
--- a/client/src/packages/tarball/all.xml
+++ b/client/src/packages/tarball/all.xml
@@ -1,4 +1,22 @@
 <?xml version="1.0"?>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
 <assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.1"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.1 http://maven.apache.org/xsd/assembly-1.1.1.xsd">
@@ -26,16 +44,22 @@
       <directory>conf</directory>
     </fileSet>
     <fileSet>
-      <directory>../bin</directory>
+      <directory>bin</directory>
       <outputDirectory>bin</outputDirectory>
       <fileMode>755</fileMode>
     </fileSet>
     <fileSet>
       <directory>target</directory>
-      <outputDirectory>/</outputDirectory>
+      <outputDirectory>share/ambari</outputDirectory>
       <includes>
           <include>${artifactId}-${project.version}.jar</include>
           <include>${artifactId}-${project.version}-tests.jar</include>
+      </includes>
+    </fileSet>
+    <fileSet>
+      <directory>target</directory>
+      <outputDirectory>share/ambari</outputDirectory>
+      <includes>
           <include>VERSION</include>
       </includes>
     </fileSet>
@@ -48,13 +72,13 @@
       <outputDirectory>sbin</outputDirectory>
       <fileMode>755</fileMode>
       <includes>
-          <include>update-hms-${artifactId}-env.sh</include>
+          <include>update-ambari-${artifactId}-env.sh</include>
       </includes>
     </fileSet>
   </fileSets>
   <dependencySets>
     <dependencySet>
-      <outputDirectory>/lib</outputDirectory>
+      <outputDirectory>share/ambari/lib</outputDirectory>
       <unpack>false</unpack>
       <scope>runtime</scope>
     </dependencySet>
diff --git a/client/src/packages/tarball/binary.xml b/client/src/packages/tarball/binary.xml
new file mode 100644
index 0000000..e9e67eb
--- /dev/null
+++ b/client/src/packages/tarball/binary.xml
@@ -0,0 +1,80 @@
+<?xml version="1.0"?>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.1"
+          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+          xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.1 http://maven.apache.org/xsd/assembly-1.1.1.xsd">
+  <!--This 'all' id is not appended to the produced bundle because we do this:
+    http://maven.apache.org/plugins/maven-assembly-plugin/faq.html#required-classifiers
+  -->
+  <formats>
+    <format>${package.type}</format>
+  </formats>
+  <fileSets>
+    <fileSet>
+      <outputDirectory>share/ambari</outputDirectory>
+      <includes>
+        <include>${basedir}/*.txt</include>
+      </includes>
+    </fileSet>
+    <fileSet>
+      <directory>conf</directory>
+      <outputDirectory>etc/ambari</outputDirectory>
+    </fileSet>
+    <fileSet>
+      <directory>bin</directory>
+      <outputDirectory>bin</outputDirectory>
+      <fileMode>755</fileMode>
+    </fileSet>
+    <fileSet>
+      <directory>target</directory>
+      <outputDirectory>share/ambari</outputDirectory>
+      <includes>
+          <include>${artifactId}-${project.version}.jar</include>
+          <include>${artifactId}-${project.version}-tests.jar</include>
+      </includes>
+    </fileSet>
+    <fileSet>
+      <directory>target</directory>
+      <outputDirectory>share/ambari</outputDirectory>
+      <includes>
+          <include>VERSION</include>
+      </includes>
+    </fileSet>
+    <fileSet>
+      <directory>target/site</directory>
+      <outputDirectory>docs</outputDirectory>
+    </fileSet>
+    <fileSet>
+      <directory>src/packages</directory>
+      <outputDirectory>sbin</outputDirectory>
+      <fileMode>755</fileMode>
+      <includes>
+          <include>update-ambari-${artifactId}-env.sh</include>
+      </includes>
+    </fileSet>
+  </fileSets>
+  <dependencySets>
+    <dependencySet>
+      <outputDirectory>share/ambari/lib</outputDirectory>
+      <unpack>false</unpack>
+      <scope>runtime</scope>
+    </dependencySet>
+  </dependencySets>
+</assembly>
diff --git a/client/src/packages/tarball/source.xml b/client/src/packages/tarball/source.xml
new file mode 100644
index 0000000..6b893d1
--- /dev/null
+++ b/client/src/packages/tarball/source.xml
@@ -0,0 +1,60 @@
+<?xml version="1.0"?>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.1"
+          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+          xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.1 http://maven.apache.org/xsd/assembly-1.1.1.xsd">
+  <!--This 'all' id is not appended to the produced bundle because we do this:
+    http://maven.apache.org/plugins/maven-assembly-plugin/faq.html#required-classifiers
+  -->
+  <formats>
+    <format>tar.gz</format>
+  </formats>
+  <fileSets>
+    <fileSet>
+      <includes>
+        <include>${basedir}/*.txt</include>
+      </includes>
+    </fileSet>
+    <fileSet>
+      <includes>
+        <include>pom.xml</include>
+      </includes>
+    </fileSet>
+    <fileSet>
+      <directory>src</directory>
+    </fileSet>
+    <fileSet>
+      <directory>conf</directory>
+    </fileSet>
+    <fileSet>
+      <directory>bin</directory>
+      <outputDirectory>bin</outputDirectory>
+      <fileMode>755</fileMode>
+    </fileSet>
+    <fileSet>
+      <directory>src/packages</directory>
+      <outputDirectory>sbin</outputDirectory>
+      <fileMode>755</fileMode>
+      <includes>
+          <include>update-ambari-${artifactId}-env.sh</include>
+      </includes>
+    </fileSet>
+  </fileSets>
+</assembly>
diff --git a/client/src/packages/update-hms-client-env.sh b/client/src/packages/update-ambari-client-env.sh
similarity index 100%
rename from client/src/packages/update-hms-client-env.sh
rename to client/src/packages/update-ambari-client-env.sh
diff --git a/client/src/test/java/org/apache/hms/client/TestClient.java b/client/src/test/java/org/apache/hms/client/TestClient.java
deleted file mode 100755
index 1057e59..0000000
--- a/client/src/test/java/org/apache/hms/client/TestClient.java
+++ /dev/null
@@ -1,146 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.client;
-
-import java.io.BufferedWriter;
-import java.io.File;
-import java.io.FileWriter;
-import java.net.URL;
-import java.util.ArrayList;
-import java.util.List;
-
-import junit.framework.Assert;
-
-import org.apache.hms.common.entity.action.Action;
-import org.apache.hms.common.entity.action.PackageAction;
-import org.apache.hms.common.entity.manifest.ClusterManifest;
-import org.apache.hms.common.entity.manifest.ConfigManifest;
-import org.apache.hms.common.entity.manifest.NodesManifest;
-import org.apache.hms.common.entity.manifest.PackageInfo;
-import org.apache.hms.common.entity.manifest.Role;
-import org.apache.hms.common.entity.manifest.SoftwareManifest;
-import org.apache.hms.common.util.ExceptionUtil;
-import org.apache.hms.common.util.JAXBUtil;
-import org.testng.annotations.Test;
-
-public class TestClient {
-  @Test
-  public void testCreateCluster() {
-    try {
-      File nodesXmlFile = File.createTempFile("nodes", ".xml");
-      nodesXmlFile.deleteOnExit();
-      File softwareXmlFile = File.createTempFile("software", ".xml");
-      softwareXmlFile.deleteOnExit();
-      File configXmlFile = File.createTempFile("config", ".xml");
-      configXmlFile.deleteOnExit();
-
-      String nodesXmlPath = nodesXmlFile.getAbsolutePath();
-      String softwareXmlPath = softwareXmlFile.getAbsolutePath();
-      String configXmlPath = configXmlFile.getAbsolutePath();
-
-      // Setup simulated controller
-      // Create node manifest
-      NodesManifest n = new NodesManifest();
-      List<Role> roles = new ArrayList<Role>();
-      Role role = new Role();
-      role.setName("namenode");
-      String [] hosts = { "localhost" };
-      role.setHosts(hosts);
-      roles.add(role);
-      n.setNodes(roles);
-      FileWriter fstream = new FileWriter(nodesXmlPath);
-      BufferedWriter out = new BufferedWriter(fstream);
-      out.write( new String(JAXBUtil.write(n)).toCharArray());
-      out.close();
-      fstream.close();
-      
-      // Create software manifest
-      SoftwareManifest sm = new SoftwareManifest();
-      sm.setName("hadoop");
-      sm.setVersion("0.20.203");
-      List<Role> softwareRoles = new ArrayList<Role>();
-      Role softwareRole = new Role();
-      softwareRole.setName("namenode");
-      PackageInfo[] packages = new PackageInfo[1];
-      packages[0]= new PackageInfo();
-      packages[0].setName("hadoop-0.20.203");
-      softwareRole.setPackages(packages);
-      softwareRoles.add(softwareRole);
-      sm.setRoles(softwareRoles);
-      fstream = new FileWriter(softwareXmlPath);
-      out = new BufferedWriter(fstream);
-      out.write(new String(JAXBUtil.write(sm)).toCharArray());
-      out.close();
-      fstream.close();
-      
-      // Create config manifest
-      PackageAction installHadoop = new PackageAction();
-      installHadoop.setPackages(packages);
-      List<Action> actions = new ArrayList<Action>();
-      actions.add(installHadoop);
-      ConfigManifest cm = new ConfigManifest();
-      cm.setActions(actions);
-      fstream = new FileWriter(configXmlPath);
-      out = new BufferedWriter(fstream);
-      out.write(new String(JAXBUtil.write(cm)).toCharArray());
-      out.close();
-      fstream.close();
-      URL softwareUrl = new URL("file://"+softwareXmlPath);
-      URL nodeUrl = new URL("file://"+nodesXmlPath);
-      URL configUrl = new URL("file://"+configXmlPath);
-      
-      // Create cluster manifest
-      ClusterManifest clusterM = new ClusterManifest();
-      NodesManifest nodes = new NodesManifest();
-      nodes.setUrl(nodeUrl);
-      clusterM.setNodes(nodes);
-      SoftwareManifest softwareM = new SoftwareManifest();
-      softwareM.setUrl(softwareUrl);      
-      clusterM.setSoftware(softwareM);
-      ConfigManifest configM = new ConfigManifest();
-      configM.setUrl(configUrl);
-      clusterM.setConfig(configM);
-      
-      // Fetch data back from file
-      clusterM.load();
-      
-      // Verify original data and fetched data are the same
-      NodesManifest actualNodeManifest = clusterM.getNodes();
-      for(Role actualRole: actualNodeManifest.getRoles()) {
-        Assert.assertEquals(actualRole.getName(), role.getName());
-        for(String host : actualRole.getHosts()) {
-          Assert.assertEquals(host, hosts[0]);
-        }
-      }
-      SoftwareManifest actualSoftwareManifest = clusterM.getSoftware();
-      for(Role actualRole: actualSoftwareManifest.getRoles()) {
-        Assert.assertEquals(actualRole.getName(), role.getName());
-        for(PackageInfo pi: actualRole.getPackages()) {
-          Assert.assertEquals(pi.getName(), packages[0].getName());
-        }
-      }
-      
-      // Send to controller for testing
-      Client client = new Client();
-      //Assert.assertEquals(0, client.createCluster("foobar", nodeUrl, softwareUrl, configUrl));
-    } catch (Exception e) {
-      Assert.fail(ExceptionUtil.getStackTrace(e));
-    }
-  }
-}
diff --git a/common/bin/hms b/common/bin/hms
deleted file mode 100755
index 68565fd..0000000
--- a/common/bin/hms
+++ /dev/null
@@ -1,81 +0,0 @@
-#!/usr/bin/env bash
-
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-# The HMS command script
-#
-# Environment Variables
-#
-#   JAVA_HOME        The java implementation to use.  Overrides JAVA_HOME.
-#   HMS_CONF_DIR     Alternate conf dir.  Default is ${HMS_HOME}/conf.
-#
-
-bin=`dirname "$0"`
-bin=`cd "$bin"; pwd`
-
-. "$bin"/hms-config.sh
-
-# if no args specified, show usage
-if [ $# = 0 ]; then
-  echo "Usage: hms [--config confdir] COMMAND"
-  echo "where COMMAND is one of:"
-  echo "  agent         run a HMS Agent"
-  echo "  version       print the version"
-  exit 1
-fi
-
-# get arguments
-COMMAND=$1
-shift
-
-if [ -f "${HMS_CONF_DIR}/hms-env.sh" ]; then
-  . "${HMS_CONF_DIR}/hms-env.sh"
-fi
-
-# Java parameters
-if [ "$JAVA_HOME" != "" ]; then
-  JAVA_HOME=$JAVA_HOME
-fi
-
-if [ "$JAVA_HOME" = "" ]; then
-  echo "Error: JAVA_HOME is not set."
-  exit 1
-fi
-
-if [ "$HMS_CONF_DIR" != "" ]; then
-  CLASSPATH=${HMS_CONF_DIR}:${CLASSPATH}
-fi
-
-BACKGROUND="true"
-
-# configure command parameters
-if [ "$COMMAND" = "agent" ]; then
-  APP='agent'
-  CLASS='org.apache.hms.agent.Agent'
-  PID="Agent"
-elif [ "$COMMAND" = "version" ]; then
-  echo `cat ${HMS_HOME}/bin/VERSION`
-  exit 0
-fi
-
-if [ "$1" = "stop" ]; then
-  kill -TERM `cat ${HMS_PID_DIR}/$PID.pid`
-else 
-  # run command
-  exec ${JAVA_HOME}/bin/java ${JAVA_OPT} -Djava.library.path=${JAVA_LIBRARY_PATH} -DHMS_HOME=${HMS_HOME} -DHMS_CONF_DIR=${HMS_CONF_DIR} -DHMS_LOG_DIR=${HMS_LOG_DIR} -DHMS_DATA_DIR=${HMS_DATA_DIR} -DAPP=${APP} -Dlog4j.configuration=log4j.properties -classpath ${HMS_CONF_DIR}:${CLASSPATH}:${HMS_CORE}:${HMS_JAR}:${COMMON}:${tools} ${CLASS} $OPTS $@
-fi
-
diff --git a/common/bin/hms-config.sh b/common/bin/hms-config.sh
deleted file mode 100644
index 6fd4dfd..0000000
--- a/common/bin/hms-config.sh
+++ /dev/null
@@ -1,87 +0,0 @@
-#!/usr/bin/env bash
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# included in all the hadoop scripts with source command
-# should not be executable directly
-# also should not be passed any arguments, since we need original $*
-
-# resolve links - $0 may be a softlink
-
-this="$0"
-while [ -h "$this" ]; do
-  ls=`ls -ld "$this"`
-  link=`expr "$ls" : '.*-> \(.*\)$'`
-  if expr "$link" : '.*/.*' > /dev/null; then
-    this="$link"
-  else
-    this=`dirname "$this"`/"$link"
-  fi
-done
-
-# convert relative path to absolute path
-bin=`dirname "$this"`
-script=`basename "$this"`
-bin=`cd "$bin"; pwd`
-this="$bin/$script"
-
-#check to see if the conf dir or hms home are given as an optional arguments
-if [ $# -gt 1 ]
-then
-  if [ "--config" = "$1" ]
-  then
-    shift
-    confdir=$1
-    shift
-    HMS_CONF_DIR=$confdir
-  fi
-fi
-
-# the root of the hms installation
-export HMS_HOME=`dirname "$this"`/..
-
-if [ -z ${HMS_LOG_DIR} ]; then
-    export HMS_LOG_DIR="${HMS_HOME}/logs"
-fi
-
-if [ -z ${HMS_PID_DIR} ]; then
-    export HMS_PID_DIR="${HMS_HOME}/var/run"
-fi
-
-HMS_VERSION=`cat ${HMS_HOME}/VERSION`
-
-# Allow alternate conf dir location.
-if [ -z "${HMS_CONF_DIR}" ]; then
-    HMS_CONF_DIR="${HMS_CONF_DIR:-$HMS_HOME/conf}"
-    export HMS_CONF_DIR=${HMS_HOME}/conf
-fi
-
-if [ -f "${HMS_CONF_DIR}/hms-env.sh" ]; then
-  . "${HMS_CONF_DIR}/hms-env.sh"
-fi
-
-COMMON=`ls ${HMS_HOME}/lib/*.jar`
-export COMMON=`echo ${COMMON} | sed 'y/ /:/'`
-
-export HMS_CORE=${HMS_HOME}/hms-core-${HMS_VERSION}.jar
-export HMS_AGENT=${HMS_HOME}/hms-agent-${HMS_VERSION}.jar
-export CURRENT_DATE=`date +%Y%m%d%H%M`
-
-if [ -z "$JAVA_HOME" ] ; then
-  echo ERROR! You forgot to set JAVA_HOME in conf/hms-env.sh
-fi
-
-export JPS="ps ax"
-
diff --git a/common/pom.xml b/common/pom.xml
deleted file mode 100644
index 49a1d48..0000000
--- a/common/pom.xml
+++ /dev/null
@@ -1,26 +0,0 @@
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
-
-    <parent>
-        <groupId>org.apache.hms</groupId>
-        <artifactId>hms</artifactId>
-        <version>0.1.0</version>
-    </parent>
-
-    <modelVersion>4.0.0</modelVersion>
-    <groupId>org.apache.hms</groupId>
-    <artifactId>common</artifactId>
-    <packaging>jar</packaging>
-    <version>0.1.0-SNAPSHOT</version>
-    <name>common</name>
-    <description>Hadoop Management System Common Library</description>
-
-    <dependencies>
-      <dependency>
-        <groupId>dk.brics.automaton</groupId>
-        <artifactId>automaton</artifactId>
-        <version>1.11.2</version>
-      </dependency>
-    </dependencies>
-
-</project>
diff --git a/common/src/main/java/org/apache/hms/common/conf/CommonConfigurationKeys.java b/common/src/main/java/org/apache/hms/common/conf/CommonConfigurationKeys.java
deleted file mode 100755
index 07c362b..0000000
--- a/common/src/main/java/org/apache/hms/common/conf/CommonConfigurationKeys.java
+++ /dev/null
@@ -1,64 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.conf;
-
-/**
- * General HMS configuration parameter definitions
- *
- */
-public class CommonConfigurationKeys {
-
-  /** Location of zookeeper servers */
-  public static final String ZOOKEEPER_ADDRESS_KEY = "hms.zookeeper.address";
-  /** Default location of zookeeper servers */
-  public static final String ZOOKEEPER_ADDRESS_DEFAULT = "localhost:2181";
-  
-  /** Path to zookeeper cluster root */
-  public static final String ZOOKEEPER_CLUSTER_ROOT_KEY = "hms.zookeeper.cluster.path";
-  /** Default location of zookeeper cluster root */
-  public static final String ZOOKEEPER_CLUSTER_ROOT_DEFAULT = "/clusters";
-  
-  /** Path to zookeeper command queue */
-  public static final String ZOOKEEPER_COMMAND_QUEUE_PATH_KEY = "hms.zookeeper.command.queue.path";
-  /** Default location of zookeeper command queue */
-  public static final String ZOOKEEPER_COMMAND_QUEUE_PATH_DEFAULT = "/cmdqueue";
-  
-  /** Path to zookeeper live controller queue */
-  public static final String ZOOKEEPER_LIVE_CONTROLLER_PATH_KEY = "hms.zookeeper.live.controller.path";
-  /** Default location of zookeeper live controller queue */
-  public static final String ZOOKEEPER_LIVE_CONTROLLER_PATH_DEFAULT = "/livecontrollers";
-  
-  /** Path to zookeeper lock queue */
-  public static final String ZOOKEEPER_LOCK_QUEUE_PATH_KEY = "hms.zookeeper.lock.queue.path";
-  /** Default location of zookeeper lock queue */
-  public static final String ZOOKEEPER_LOCK_QUEUE_PATH_DEFAULT = "/locks";
- 
-  /** Reference key for path to nodes manifest */
-  public static final String ZOOKEEPER_NODES_MANIFEST_KEY = "hms.nodes.manifest.path";
-  /** Default location of nodes manifest */
-  public static final String ZOOKEEPER_NODES_MANIFEST_PATH_DEFAULT = "/nodes-manifest";
-  
-  /** Zeroconf zookeeper type */
-  public static final String ZEROCONF_ZOOKEEPER_TYPE = "_zookeeper._tcp.local.";
-  
-  /** Path to zookeeper status queue */
-  public static final String ZOOKEEPER_STATUS_QUEUE_PATH_KEY = "hms.zookeeper.status.queue.path";
-  /** Default location of zookeeper status queue */
-  public static final String ZOOKEEPER_STATUS_QUEUE_PATH_DEFAULT = "/status";
-}
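
For reference, a minimal sketch of how a client could resolve these keys,
assuming a plain java.util.Properties source (the actual HMS configuration
wiring is not shown in this file):

    import java.util.Properties;

    import org.apache.hms.common.conf.CommonConfigurationKeys;

    public class ZkConfigExample {
      public static void main(String[] args) {
        Properties conf = new Properties();
        // Nothing set here, so the documented defaults apply.
        String zkAddress = conf.getProperty(
            CommonConfigurationKeys.ZOOKEEPER_ADDRESS_KEY,
            CommonConfigurationKeys.ZOOKEEPER_ADDRESS_DEFAULT);
        String clusterRoot = conf.getProperty(
            CommonConfigurationKeys.ZOOKEEPER_CLUSTER_ROOT_KEY,
            CommonConfigurationKeys.ZOOKEEPER_CLUSTER_ROOT_DEFAULT);
        System.out.println(zkAddress + clusterRoot);  // localhost:2181/clusters
      }
    }
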
diff --git a/common/src/main/java/org/apache/hms/common/entity/Response.java b/common/src/main/java/org/apache/hms/common/entity/Response.java
deleted file mode 100755
index 7a04914..0000000
--- a/common/src/main/java/org/apache/hms/common/entity/Response.java
+++ /dev/null
@@ -1,70 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.entity;
-
-import javax.xml.bind.annotation.XmlElement;
-import javax.xml.bind.annotation.XmlRootElement;
-
-import org.apache.hms.common.entity.RestSource;
-
-@XmlRootElement
-public class Response extends RestSource {
-  @XmlElement(name="exit_code")
-  public int code;
-  @XmlElement
-  public String output;
-  @XmlElement
-  public String error;
-  
-  public int getCode() {
-    return code;
-  }
-  
-  public String getOutput() {
-    return output;
-  }
-  
-  public String getError() {
-    return error;
-  }
-  
-  public void setCode(int code) {
-    this.code = code;  
-  }
-  
-  public void setOutput(String output) {
-    this.output = output;
-  }
-  
-  public void setError(String error) {
-    this.error = error;
-  }
-  
-  public String toString() {
-    StringBuilder sb = new StringBuilder();
-    sb.append("exit code:");
-    sb.append(code);
-    sb.append("\nstdout:\n");
-    sb.append(output);
-    sb.append("\nstderr:\n");
-    sb.append(error);
-    sb.append("\n");
-    return sb.toString();
-  }
-}
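
A short sketch of marshalling a Response to XML with JAXB, assuming the
RestSource base class (not shown in this file) carries no conflicting
annotations:

    import java.io.StringWriter;

    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.Marshaller;

    import org.apache.hms.common.entity.Response;

    public class ResponseXmlExample {
      public static void main(String[] args) throws Exception {
        Response r = new Response();
        r.setCode(0);
        r.setOutput("namenode started");
        r.setError("");

        // The exit code is emitted as <exit_code> because of the
        // @XmlElement(name="exit_code") annotation on the field.
        Marshaller m = JAXBContext.newInstance(Response.class).createMarshaller();
        m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
        StringWriter out = new StringWriter();
        m.marshal(r, out);
        System.out.println(out);
      }
    }
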
diff --git a/common/src/main/java/org/apache/hms/common/entity/Status.java b/common/src/main/java/org/apache/hms/common/entity/Status.java
deleted file mode 100755
index bbc1199..0000000
--- a/common/src/main/java/org/apache/hms/common/entity/Status.java
+++ /dev/null
@@ -1,56 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.entity;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-import javax.xml.bind.annotation.XmlRootElement;
-import javax.xml.bind.annotation.XmlType;
-import javax.xml.bind.annotation.adapters.XmlAdapter;
-
-/**
- * List of HMS command status types
- *
- */
-@XmlRootElement
-@XmlAccessorType(XmlAccessType.FIELD)
-@XmlType(name = "", propOrder = {})
-public enum Status {
-  UNQUEUED, QUEUED, STARTED, SUCCEEDED, FAILED, INSTALLED, STOPPED;
-
-  public static class StatusAdapter extends XmlAdapter<String, Status> {
-
-    @Override
-    public String marshal(Status obj) throws Exception {
-      return obj.toString();
-    }
-
-    @Override
-    public Status unmarshal(String str) throws Exception {
-      for (Status j : Status.class.getEnumConstants()) {
-        if (j.toString().equals(str)) {
-          return j;
-        }
-      }
-      throw new Exception("Can't convert " + str + " to "
-          + Status.class.getName());
-    }
-
-  }
-}
\ No newline at end of file
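
A sketch of the adapter's round trip; the wire form is simply the enum
constant name, and unknown input fails with an exception:

    import org.apache.hms.common.entity.Status;
    import org.apache.hms.common.entity.Status.StatusAdapter;

    public class StatusAdapterExample {
      public static void main(String[] args) throws Exception {
        StatusAdapter adapter = new StatusAdapter();
        String wire = adapter.marshal(Status.QUEUED);  // "QUEUED"
        Status back = adapter.unmarshal(wire);         // Status.QUEUED
        System.out.println(wire + " -> " + back);
      }
    }
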
diff --git a/common/src/main/java/org/apache/hms/common/entity/action/Action.java b/common/src/main/java/org/apache/hms/common/entity/action/Action.java
deleted file mode 100755
index abd92c7..0000000
--- a/common/src/main/java/org/apache/hms/common/entity/action/Action.java
+++ /dev/null
@@ -1,151 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.entity.action;
-
-import java.util.List;
-
-import javax.xml.bind.annotation.XmlElement;
-import javax.xml.bind.annotation.XmlRootElement;
-import javax.xml.bind.annotation.XmlSeeAlso;
-import org.apache.hms.common.entity.RestSource;
-import org.apache.hms.common.entity.cluster.MachineState.StateEntry;
-import org.codehaus.jackson.annotate.JsonTypeInfo;
-
-/**
- * An HMS Action defines an operation for the HMS Agent to execute. This abstract
- * class defines the basic structure required to construct an HMS Action.
- *
- */
-@JsonTypeInfo(use=JsonTypeInfo.Id.CLASS, include=JsonTypeInfo.As.PROPERTY, property="@action")
-@XmlSeeAlso({ ScriptAction.class, DaemonAction.class, PackageAction.class })
-@XmlRootElement
-public abstract class Action extends RestSource {
-  @XmlElement
-  protected int actionId;
-  
-  /**
-   * Reference to the original command that generated this action.
-   */
-  @XmlElement
-  protected String cmdPath;
-  
-  /**
-   * Unique identifier of the action type.
-   */
-  @XmlElement
-  protected String actionType;
-  
-  /**
-   * A list of states that this action depends on.
-   */
-  @XmlElement
-  protected List<ActionDependency> dependencies;
-  
-  /**
-   * When the action is successfully executed, expectedResults stores the state
-   * entry for the action.
-   */
-  @XmlElement
-  protected List<StateEntry> expectedResults;
-  
-  /**
-   * Role is a reference to a list of nodes that should execute this action.
-   */
-  @XmlElement
-  protected String role;
-  
-  public int getActionId() {
-    return actionId;
-  }
-  
-  public String getCmdPath() {
-    return cmdPath;
-  }
-  
-  public String getActionType() {
-    return actionType;
-  }
-  
-  public List<ActionDependency> getDependencies() {
-    return dependencies;
-  }
-  
-  public List<StateEntry> getExpectedResults() {
-    return expectedResults;
-  }
-  
-  public String getRole() {
-    return role;
-  }
-  
-  public void setActionId(int actionId) {
-    this.actionId = actionId;
-  }
-  
-  public void setCmdPath(String cmdPath) {
-    this.cmdPath = cmdPath;
-  }
-  
-  public void setActionType(String actionType) {
-    this.actionType = actionType;
-  }
-  
-  public void setDependencies(List<ActionDependency> dependencies) {
-    this.dependencies = dependencies;
-  }
-  
-  public void setExpectedResults(List<StateEntry> expectedResults) {
-    this.expectedResults = expectedResults;
-  }
-  
-  public void setRole(String role) {
-    this.role = role;
-  }
-  
-  public String toString() {
-    StringBuilder sb = new StringBuilder();
-    sb.append("actionId=");
-    sb.append(actionId);
-    sb.append(", cmdPath=");
-    sb.append(cmdPath);
-    sb.append(", actionType=");
-    sb.append(actionType);
-    if (role != null) {
-      sb.append(", role=");
-      sb.append(role);
-    }
-    sb.append(", dependencies=[");
-    if (dependencies != null) {
-      for(ActionDependency a : dependencies) {
-        sb.append(a);
-        sb.append(", ");
-      }
-    }
-    sb.append("]");
-    sb.append(", expectedResults=[");
-    if (expectedResults != null) {
-      for(StateEntry a : expectedResults) {
-        sb.append(a);
-        sb.append(", ");
-      }
-    }
-    sb.append("]");
-    return sb.toString();
-  }
-}
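
A hedged sketch of the polymorphic round trip that @JsonTypeInfo enables,
using the Jackson 1.x ObjectMapper and the ScriptAction subclass defined
later in this module; the field values are illustrative:

    import org.apache.hms.common.entity.action.Action;
    import org.apache.hms.common.entity.action.ScriptAction;
    import org.codehaus.jackson.map.ObjectMapper;

    public class ActionJsonExample {
      public static void main(String[] args) throws Exception {
        ScriptAction a = new ScriptAction();
        a.setActionId(1);
        a.setScript("/etc/init.d/hadoop-namenode");
        a.setParameters(new String[] { "start" });

        ObjectMapper mapper = new ObjectMapper();
        // The "@action" property carries the concrete class name, so the
        // agent can deserialize back to ScriptAction via the Action base.
        String json = mapper.writeValueAsString(a);
        Action roundTrip = mapper.readValue(json, Action.class);
        System.out.println(roundTrip.getClass().getSimpleName());  // ScriptAction
      }
    }
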
diff --git a/common/src/main/java/org/apache/hms/common/entity/action/ActionContextProvider.java b/common/src/main/java/org/apache/hms/common/entity/action/ActionContextProvider.java
deleted file mode 100755
index f0329ef..0000000
--- a/common/src/main/java/org/apache/hms/common/entity/action/ActionContextProvider.java
+++ /dev/null
@@ -1,49 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.entity.action;
-
-import javax.ws.rs.ext.ContextResolver;
-import javax.ws.rs.ext.Provider;
-import javax.xml.bind.JAXBContext;
-
-import com.sun.jersey.api.json.JSONConfiguration;
-import com.sun.jersey.api.json.JSONJAXBContext;
-
-/**
- * Utility class to resolve the formatting style of the serialized action.
- *
- */
-@Provider
-public class ActionContextProvider implements ContextResolver<JAXBContext> {
-
-  private JAXBContext context;
-  private Class[] types = { Action.class, DaemonAction.class, PackageAction.class, ScriptAction.class };
-
-  public ActionContextProvider() throws Exception {
-    this.context = new JSONJAXBContext(JSONConfiguration.badgerFish().build(), types);
-  }
-
-  public JAXBContext getContext(Class<?> objectType) {
-    for (Class type : types) {
-      if (type.equals(objectType))
-        return context;
-    }
-    return null;
-  } 
-}
diff --git a/common/src/main/java/org/apache/hms/common/entity/action/ActionDependency.java b/common/src/main/java/org/apache/hms/common/entity/action/ActionDependency.java
deleted file mode 100755
index 5f17c45..0000000
--- a/common/src/main/java/org/apache/hms/common/entity/action/ActionDependency.java
+++ /dev/null
@@ -1,106 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.entity.action;
-
-import java.util.List;
-import java.util.Map;
-import java.util.Set;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-import javax.xml.bind.annotation.XmlElement;
-import javax.xml.bind.annotation.XmlRootElement;
-import javax.xml.bind.annotation.XmlType;
-
-import org.apache.hms.common.entity.RestSource;
-import org.apache.hms.common.entity.cluster.MachineState.StateEntry;
-import org.apache.hms.common.entity.manifest.Role;
-
-/**
- * Defines the list of states that an action depends on.
- */
-@XmlRootElement
-@XmlAccessorType(XmlAccessType.FIELD) 
-@XmlType(name="", propOrder = {})
-public class ActionDependency extends RestSource {
-    @XmlElement
-    protected List<String> hosts;
-    @XmlElement
-    protected List<StateEntry> states;
-    @XmlElement
-    protected Set<String> roles;
-    
-    public ActionDependency(){
-    }
-    
-    public ActionDependency(Set<String> roles, List<StateEntry> states) {
-      this.roles = roles;
-      this.states = states;
-    }
-    
-    public ActionDependency(List<String> hosts, List<StateEntry> states) {
-      this.hosts = hosts;
-      this.states = states;
-    }
-    
-    public List<String> getHosts() {
-      return hosts;
-    }
-    
-    public List<StateEntry> getStates() {
-      return states;
-    }
-    
-    public Set<String> getRoles() {
-      return roles;
-    }
-    
-    public void setHosts(List<String> hosts) {
-      this.hosts = hosts;
-    }
-    
-    public void setStates(List<StateEntry> states) {
-      this.states = states;
-    }
-    
-    public void setRoles(Set<String> roles) {
-      this.roles = roles;
-    }
-    
-    public String toString() {
-      StringBuilder sb = new StringBuilder();
-      sb.append("[hosts={");
-      if (hosts != null) {
-        for(String a : hosts) {
-          sb.append(a);
-          sb.append(", ");
-        }
-      }
-      sb.append("}, states={");
-      if (states != null) {
-        for(StateEntry a : states) {
-          sb.append(a);
-          sb.append(", ");
-        }
-      }
-      sb.append("}]");
-      return sb.toString();
-    }
-}
-
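
A small usage sketch, with an illustrative host name, expressing "run only
after the namenode daemon is STARTED on the master":

    import java.util.Arrays;

    import org.apache.hms.common.entity.Status;
    import org.apache.hms.common.entity.action.ActionDependency;
    import org.apache.hms.common.entity.cluster.MachineState.StateEntry;
    import org.apache.hms.common.entity.cluster.MachineState.StateType;

    public class DependencyExample {
      public static void main(String[] args) {
        StateEntry namenodeUp =
            new StateEntry(StateType.DAEMON, "namenode", Status.STARTED);
        ActionDependency dep = new ActionDependency(
            Arrays.asList("master.example.com"),
            Arrays.asList(namenodeUp));
        System.out.println(dep);
      }
    }
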
diff --git a/common/src/main/java/org/apache/hms/common/entity/action/ActionStatus.java b/common/src/main/java/org/apache/hms/common/entity/action/ActionStatus.java
deleted file mode 100755
index df64196..0000000
--- a/common/src/main/java/org/apache/hms/common/entity/action/ActionStatus.java
+++ /dev/null
@@ -1,102 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.entity.action;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-import javax.xml.bind.annotation.XmlElement;
-import javax.xml.bind.annotation.XmlRootElement;
-import javax.xml.bind.annotation.XmlType;
-import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter;
-
-import org.apache.hms.common.entity.Response;
-import org.apache.hms.common.entity.Status;
-import org.apache.hms.common.entity.Status.StatusAdapter;
-import org.apache.hms.common.entity.action.Action;
-
-/**
- * ActionStatus records the execution result of an action.
- */
-@XmlRootElement
-@XmlAccessorType(XmlAccessType.FIELD) 
-@XmlType(name="", propOrder = {})
-public class ActionStatus extends Response {
-  @XmlElement
-  @XmlJavaTypeAdapter(StatusAdapter.class)
-  protected Status status;
-
-  @XmlElement
-  protected String host;
-  @XmlElement
-  protected String cmdPath;
-  @XmlElement
-  protected int actionId;
-  @XmlElement
-  public Action action;
-  @XmlElement
-  private String actionPath;
-  
-  public Status getStatus() {
-    return this.status;
-  }
-  
-  public void setStatus(Status status) {
-    this.status = status;
-  }
-  
-  public String getHost() {
-    return host;
-  }
-  
-  public void setHost(String host) {
-    this.host = host;
-  }
-  
-  public String getCmdPath() {
-    return cmdPath;
-  }
-  
-  public void setCmdPath(String cmdPath) {
-    this.cmdPath = cmdPath;
-  }
-  
-  public int getActionId() {
-    return this.actionId;
-  }
-  
-  public void setActionId(int actionId) {
-    this.actionId = actionId;
-  }
-  
-  public Action getAction() {
-    return this.action;
-  }
-  
-  public void setAction(Action action) {
-    this.action = action;
-  }
-  
-  public String getActionPath() {
-    return this.actionPath;
-  }
-  
-  public void setActionPath(String actionPath) {
-    this.actionPath = actionPath;
-  }
-}
diff --git a/common/src/main/java/org/apache/hms/common/entity/action/DaemonAction.java b/common/src/main/java/org/apache/hms/common/entity/action/DaemonAction.java
deleted file mode 100755
index bc89c63..0000000
--- a/common/src/main/java/org/apache/hms/common/entity/action/DaemonAction.java
+++ /dev/null
@@ -1,56 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.entity.action;
-
-import javax.xml.bind.annotation.XmlElement;
-import javax.xml.bind.annotation.XmlRootElement;
-import javax.xml.bind.annotation.XmlType;
-
-/**
- * Action class for describing a daemon-related action.
- * The valid operations are:
- * 
- * - start a daemon
- * - stop a daemon
- * - check daemon status
- *
- */
-@XmlRootElement
-@XmlType(propOrder = { "daemonName" })
-public class DaemonAction extends Action {
-
-  @XmlElement(name="daemon")
-  private String daemonName;
-  
-  public String getDaemonName() {
-    return daemonName;
-  }
-  
-  public void setDaemonName(String daemonName) {
-    this.daemonName = daemonName;
-  }
-  
-  public String toString() {
-    StringBuilder sb = new StringBuilder();
-    sb.append(super.toString());
-    sb.append(", daemon=");
-    sb.append(daemonName);
-    return sb.toString();
-  }
-}
diff --git a/common/src/main/java/org/apache/hms/common/entity/action/PackageAction.java b/common/src/main/java/org/apache/hms/common/entity/action/PackageAction.java
deleted file mode 100755
index 11dea29..0000000
--- a/common/src/main/java/org/apache/hms/common/entity/action/PackageAction.java
+++ /dev/null
@@ -1,69 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.entity.action;
-
-import javax.xml.bind.annotation.XmlElement;
-import javax.xml.bind.annotation.XmlRootElement;
-import javax.xml.bind.annotation.XmlType;
-
-import org.apache.hms.common.entity.manifest.PackageInfo;
-
-/**
- * PackageAction describes which packages to install on or remove from a node.
- *
- */
-@XmlRootElement
-@XmlType(propOrder = { "packages", "dryRun" })
-public class PackageAction extends Action {
-  @XmlElement
-  private PackageInfo[] packages;
-  @XmlElement(name="dry-run")
-  private boolean dryRun = false;
-  
-  public PackageInfo[] getPackages() {
-    return packages;
-  }
-  
-  public boolean getDryRun() {
-    return dryRun;
-  }
-
-  public void setPackages(PackageInfo[] packages) {
-    this.packages = packages;
-  }
-  
-  public void setDryRun(boolean dryRun) {
-    this.dryRun = dryRun;
-  }
-  
-  public String toString() {
-    StringBuilder sb = new StringBuilder();
-    sb.append(super.toString());
-    sb.append(", dry-run=");
-    sb.append(dryRun);
-    sb.append(", packages=");
-    if (packages != null) {
-      for(PackageInfo p : packages) {
-        sb.append(p);
-        sb.append(" ");
-      }
-    }
-    return sb.toString();
-  }
-}
diff --git a/common/src/main/java/org/apache/hms/common/entity/action/ScriptAction.java b/common/src/main/java/org/apache/hms/common/entity/action/ScriptAction.java
deleted file mode 100755
index e2ce0a9..0000000
--- a/common/src/main/java/org/apache/hms/common/entity/action/ScriptAction.java
+++ /dev/null
@@ -1,72 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.entity.action;
-
-import javax.xml.bind.annotation.XmlElement;
-import javax.xml.bind.annotation.XmlRootElement;
-import javax.xml.bind.annotation.XmlType;
-
-/**
- * Generic scripting action describing the parameters for running an
- * arbitrary Unix command on a node.
- *
- */
-@XmlRootElement
-@XmlType(propOrder = { "script", "parameters" })
-public class ScriptAction extends Action {
-  @XmlElement
-  private String script;
-  @XmlElement(name="parameters")
-  private String[] parameters;
-  
-  public String getScript() {
-    return script;
-  }
-  
-  public String[] getParameters() {
-    return parameters;
-  }
-  
-  public void setScript(String script) {
-    this.script = script;
-  }
-  
-  public void setParameters(String[] parameters) {
-    this.parameters = parameters;
-  }
-  
-  public String toString() {
-    StringBuilder sb = new StringBuilder();
-    sb.append(super.toString());
-    sb.append(", script=");
-    sb.append(script);
-    sb.append(", parameters=");
-    if (parameters != null) {
-      for (String p : parameters) {
-        sb.append(p);
-        sb.append(" ");
-      }
-    }
-    if (role != null) {
-      sb.append(", role=");
-      sb.append(role);
-    }
-    return sb.toString();
-  }
-}
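
A brief usage sketch; the script path and arguments are illustrative:

    import org.apache.hms.common.entity.action.ScriptAction;

    public class ScriptActionExample {
      public static void main(String[] args) {
        ScriptAction action = new ScriptAction();
        action.setActionId(7);
        action.setScript("/usr/bin/rsync");
        action.setParameters(new String[] { "-a", "/etc/hadoop", "/backup" });
        // toString appends script and parameters after the base Action fields.
        System.out.println(action);
      }
    }
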
diff --git a/common/src/main/java/org/apache/hms/common/entity/cluster/MachineState.java b/common/src/main/java/org/apache/hms/common/entity/cluster/MachineState.java
deleted file mode 100755
index 8727294..0000000
--- a/common/src/main/java/org/apache/hms/common/entity/cluster/MachineState.java
+++ /dev/null
@@ -1,188 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.entity.cluster;
-
-import java.util.Set;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-import javax.xml.bind.annotation.XmlElement;
-import javax.xml.bind.annotation.XmlRootElement;
-import javax.xml.bind.annotation.XmlType;
-import javax.xml.bind.annotation.adapters.XmlAdapter;
-import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter;
-
-import org.apache.hms.common.entity.RestSource;
-import org.apache.hms.common.entity.Status;
-import org.apache.hms.common.entity.Status.StatusAdapter;
-
-/**
- * MachineState defines the list of state entries of a node.
- * For example, a node can have HADOOP-0.20.206 installed and the
- * namenode started.  Both states are stored in the MachineState.
- *
- */
-@XmlRootElement
-@XmlAccessorType(XmlAccessType.FIELD)
-@XmlType(name = "", propOrder = {})
-public class MachineState extends RestSource {
-  
-  Set<StateEntry> stateEntries;
-  
-  public Set<StateEntry> getStates() {
-    return stateEntries;
-  }
-  
-  public void setStates(Set<StateEntry> stateEntries) {
-    this.stateEntries = stateEntries;
-  }
-  
-  public String toString() {
-    StringBuilder sb = new StringBuilder();
-    if (stateEntries != null) {
-      for (StateEntry a : stateEntries) {
-        sb.append(a);
-        sb.append(" ");
-      }
-    }
-    return sb.toString();
-  }
-  
-  /**
-   * A state entry is composed of:
-   * 
-   * - the type of state to record (valid types are PACKAGE and DAEMON)
-   * - a unique name field identifying the package or daemon
-   * - the status that the node must maintain
-   * @author eyang
-   *
-   */
-  @XmlRootElement
-  @XmlAccessorType(XmlAccessType.FIELD)
-  public static class StateEntry {
-    private static final int PRIME = 16777619;
-    
-    @XmlElement
-    @XmlJavaTypeAdapter(StateTypeAdapter.class)
-    protected StateType type;
-    @XmlElement
-    protected String name;
-    @XmlElement
-    @XmlJavaTypeAdapter(StatusAdapter.class)
-    protected Status status;
-    
-    public StateEntry(){
-    }
-    
-    public StateEntry(StateType type, String name, Status status) {
-      this.type = type;
-      this.name = name;
-      this.status = status;
-    }
-    
-    public StateType getType() {
-      return type;
-    }
-    
-    public String getName() {
-      return name;
-    }
-    
-    public Status getStatus() {
-      return status;
-    }
-    
-    public void setType(StateType type) {
-      this.type = type;
-    }
-    
-    public void setName(String name) {
-      this.name = name;
-    }
-    
-    public void setStatus(Status status) {
-      this.status = status;
-    }
-    
-    static boolean isEqual(Object a, Object b) {
-      return a == null ? b == null : a.equals(b);
-    }
-
-    @Override
-    public boolean equals(Object obj) {
-      if (obj == this) {
-        return true;
-      }
-      if (obj instanceof StateEntry) {
-        StateEntry that = (StateEntry) obj;
-        return this.type == that.type
-            && isEqual(this.name, that.name);
-      }
-      return false;
-    }
-    
-    @Override
-    public int hashCode() {
-      int result = 1;
-      result = PRIME * result + ((type == null) ? 0 : type.hashCode());
-      result = PRIME * result + ((name == null) ? 0 : name.hashCode());
-      return result;
-    }
-    
-    public String toString() {
-      StringBuilder sb = new StringBuilder();
-      sb.append("(");
-      sb.append(type);
-      sb.append(":");
-      sb.append(name);
-      sb.append(":");
-      sb.append(status);
-      sb.append(")");
-      return sb.toString();
-    }
-  }
-  
-  /**
-   * Type of state that is recorded per node.
-   */
-  @XmlRootElement
-  public enum StateType {
-    PACKAGE, DAEMON;
-  }
-
-  public static class StateTypeAdapter extends XmlAdapter<String, StateType> {
-
-    @Override
-    public String marshal(StateType obj) throws Exception {
-      return obj.toString();
-    }
-
-    @Override
-    public StateType unmarshal(String str) throws Exception {
-      for (StateType j : StateType.class.getEnumConstants()) {
-        if (j.toString().equals(str)) {
-          return j;
-        }
-      }
-      throw new Exception("Can't convert " + str + " to "
-          + StateType.class.getName());
-    }
-  }
-}
-
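
equals() and hashCode() on StateEntry deliberately ignore status, so one
(type, name) pair occupies exactly one slot in a Set; a sketch of the
consequence, which is why a status change must replace the entry rather
than re-add it:

    import java.util.HashSet;
    import java.util.Set;

    import org.apache.hms.common.entity.Status;
    import org.apache.hms.common.entity.cluster.MachineState.StateEntry;
    import org.apache.hms.common.entity.cluster.MachineState.StateType;

    public class StateEntrySetExample {
      public static void main(String[] args) {
        Set<StateEntry> states = new HashSet<StateEntry>();
        states.add(new StateEntry(StateType.DAEMON, "namenode", Status.STOPPED));

        // Same (type, name), different status: the set rejects the add.
        boolean added =
            states.add(new StateEntry(StateType.DAEMON, "namenode", Status.STARTED));
        System.out.println(added);          // false
        System.out.println(states.size());  // 1
      }
    }
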
diff --git a/common/src/main/java/org/apache/hms/common/entity/command/ClusterCommand.java b/common/src/main/java/org/apache/hms/common/entity/command/ClusterCommand.java
deleted file mode 100755
index 2ccc341..0000000
--- a/common/src/main/java/org/apache/hms/common/entity/command/ClusterCommand.java
+++ /dev/null
@@ -1,73 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.entity.command;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-import javax.xml.bind.annotation.XmlElement;
-import javax.xml.bind.annotation.XmlRootElement;
-import javax.xml.bind.annotation.XmlType;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hms.common.entity.manifest.ClusterManifest;
-import org.apache.hms.common.util.ExceptionUtil;
-import org.apache.hms.common.entity.command.Command;
-import org.codehaus.jackson.annotate.JsonTypeInfo;
-
-@XmlRootElement
-@JsonTypeInfo(use=JsonTypeInfo.Id.CLASS, include=JsonTypeInfo.As.PROPERTY, property="@command")
-@XmlAccessorType(XmlAccessType.PUBLIC_MEMBER) 
-@XmlType(name="", propOrder = {})
-public class ClusterCommand extends Command {
-  private ClusterManifest cm;
-  private static Log LOG = LogFactory.getLog(ClusterCommand.class);
-
-  public ClusterCommand() {
-    this.cmd = CmdType.CREATE;
-  }
-
-  public ClusterCommand(String clusterName, ClusterManifest cm) {
-    cm.setClusterName(clusterName);
-    this.cm = cm;
-  }
-  
-  public ClusterCommand(String clusterName) {
-    ClusterManifest cm = new ClusterManifest();
-    cm.setClusterName(clusterName);
-    this.cm = cm;
-  }
-  
-  public ClusterCommand(ClusterManifest cm) {
-    this.cm = cm;
-  }
-
-  public ClusterManifest getClusterManifest() {
-    try {
-      this.cm.load();
-    } catch(Exception e) {
-      LOG.error(ExceptionUtil.getStackTrace(e));
-    }
-    return this.cm;
-  }
-  
-  public void setClusterManifest(ClusterManifest cm) {
-    this.cm = cm;
-  }
-}
diff --git a/common/src/main/java/org/apache/hms/common/entity/command/Command.java b/common/src/main/java/org/apache/hms/common/entity/command/Command.java
deleted file mode 100755
index cf96fb4..0000000
--- a/common/src/main/java/org/apache/hms/common/entity/command/Command.java
+++ /dev/null
@@ -1,116 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.entity.command;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-import javax.xml.bind.annotation.XmlElement;
-import javax.xml.bind.annotation.XmlRootElement;
-import javax.xml.bind.annotation.XmlType;
-import javax.xml.bind.annotation.adapters.XmlAdapter;
-import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter;
-
-import org.apache.hms.common.entity.RestSource;
-import org.codehaus.jackson.annotate.JsonTypeInfo;
-
-@XmlRootElement
-@JsonTypeInfo(use=JsonTypeInfo.Id.CLASS, include=JsonTypeInfo.As.PROPERTY, property="@command")
-@XmlAccessorType(XmlAccessType.FIELD) 
-@XmlType(name="", propOrder = {})
-public abstract class Command extends RestSource {
-  @XmlElement
-  protected String id;
-  
-  @XmlElement
-  @XmlJavaTypeAdapter(CmdTypeAdapter.class)
-  protected CmdType cmd;
-
-  @XmlElement(name="dry-run")
-  protected boolean dryRun = false;
-  
-  public String getId() {
-    return id;
-  }
-
-  public CmdType getCmd() {
-    return cmd;
-  }
-
-  public boolean getDryRun() {
-    return dryRun;
-  }
-  
-  public void setId(String id) {
-    this.id = id;
-  }
-  
-  public void setCmd(CmdType cmd) {
-    this.cmd = cmd;
-  }
-  
-  public void setDryRun(boolean dryRun) {
-    this.dryRun = dryRun;
-  }
-  
-  public String toString() {
-    StringBuilder sb = new StringBuilder();
-    sb.append("cmd=");
-    sb.append(cmd);
-    sb.append(", dry-run=");
-    sb.append(dryRun);
-    return sb.toString();
-  }
-  
-  public static enum CmdType {
-    CREATE("create"),
-    DELETE("delete"),
-    STATUS("status"),
-    UPGRADE("upgrade");
-    
-    String cmd;
-    private CmdType(String cmd) {
-      this.cmd = cmd;
-    }
-    
-    @Override
-    public String toString() {
-      return cmd;      
-    }
-  }
-  
-  public static class CmdTypeAdapter extends XmlAdapter<String, CmdType> {
-
-    @Override
-    public String marshal(CmdType obj) throws Exception {
-      return obj.toString();
-    }
-
-    @Override
-    public CmdType unmarshal(String str) throws Exception {
-      for (CmdType j : CmdType.class.getEnumConstants()) {
-        if (j.toString().equals(str)) {
-          return j;
-        }
-      }
-      throw new Exception("Can't convert " + str + " to " + CmdType.class.getName());
-    }
-    
-  }
-
-}
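
Because CmdType overrides toString() to return the lowercase wire form, the
adapter maps between enum constants and lowercase strings rather than enum
names; a round-trip sketch:

    import org.apache.hms.common.entity.command.Command.CmdType;
    import org.apache.hms.common.entity.command.Command.CmdTypeAdapter;

    public class CmdTypeExample {
      public static void main(String[] args) throws Exception {
        CmdTypeAdapter adapter = new CmdTypeAdapter();
        String wire = adapter.marshal(CmdType.CREATE);  // "create"
        CmdType back = adapter.unmarshal("upgrade");    // CmdType.UPGRADE
        System.out.println(wire + " / " + back);
      }
    }
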
diff --git a/common/src/main/java/org/apache/hms/common/entity/command/CommandContextProvider.java b/common/src/main/java/org/apache/hms/common/entity/command/CommandContextProvider.java
deleted file mode 100755
index 4271e1a..0000000
--- a/common/src/main/java/org/apache/hms/common/entity/command/CommandContextProvider.java
+++ /dev/null
@@ -1,45 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.entity.command;
-
-import javax.ws.rs.ext.ContextResolver;
-import javax.ws.rs.ext.Provider;
-import javax.xml.bind.JAXBContext;
-
-import com.sun.jersey.api.json.JSONConfiguration;
-import com.sun.jersey.api.json.JSONJAXBContext;
-
-@Provider
-public class CommandContextProvider implements ContextResolver<JAXBContext> {
-
-  private JAXBContext context;
-  private Class[] types = { Command.class, CreateClusterCommand.class, DeleteClusterCommand.class, UpgradeClusterCommand.class, StatusCommand.class };
-
-  public CommandContextProvider() throws Exception {
-    this.context = new JSONJAXBContext(JSONConfiguration.badgerFish().build(), types);
-  }
-
-  public JAXBContext getContext(Class<?> objectType) {
-    for (Class type : types) {
-      if (type.equals(objectType))
-        return context;
-    }
-    return null;
-  } 
-}
diff --git a/common/src/main/java/org/apache/hms/common/entity/command/CommandStatus.java b/common/src/main/java/org/apache/hms/common/entity/command/CommandStatus.java
deleted file mode 100755
index 22e255d..0000000
--- a/common/src/main/java/org/apache/hms/common/entity/command/CommandStatus.java
+++ /dev/null
@@ -1,242 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.entity.command;
-
-import java.util.List;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-import javax.xml.bind.annotation.XmlElement;
-import javax.xml.bind.annotation.XmlRootElement;
-import javax.xml.bind.annotation.XmlType;
-import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter;
-
-import org.apache.hms.common.entity.RestSource;
-import org.apache.hms.common.entity.Status;
-import org.apache.hms.common.entity.Status.StatusAdapter;
-import org.apache.hms.common.entity.action.Action;
-
-@XmlRootElement
-@XmlAccessorType(XmlAccessType.FIELD) 
-@XmlType(name="", propOrder = {})
-public class CommandStatus extends RestSource {
-  @XmlElement
-  @XmlJavaTypeAdapter(StatusAdapter.class)
-  protected Status status;
-  
-  @XmlElement
-  protected String startTime;
-
-  @XmlElement
-  protected String endTime;
-  
-  @XmlElement
-  protected String clusterName;
-  
-  @XmlElement
-  protected int totalActions;
-  
-  @XmlElement
-  protected int completedActions;
-  
-  @XmlElement
-  protected List<ActionEntry> actionEntries;
-  
-  public CommandStatus() {
-  }
-  
-  public CommandStatus(Status status, String startTime) {
-    this.status = status;
-    this.startTime = startTime;
-  }
-  
-  public CommandStatus(Status status, String startTime, String clusterName) {
-    this(status, startTime);
-    this.clusterName = clusterName;
-  }
-  
-  public Status getStatus() {
-    return status;
-  }
-  
-  public String getStartTime() {
-    return startTime;
-  }
-  
-  public String getEndTime() {
-    return endTime;
-  }
-  
-  public String getClusterName() {
-    return clusterName;
-  }
-  
-  public int getTotalActions() {
-    return totalActions;
-  }
-  
-  public int getCompletedActions() {
-    return completedActions;
-  }
-  
-  public List<ActionEntry> getActionEntries() {
-    return actionEntries;
-  }
-  
-  public void setStatus(Status status) {
-    this.status = status;  
-  }
-  
-  public void setStartTime(String startTime) {
-    this.startTime = startTime;
-  }
-  
-  public void setEndTime(String endTime) {
-    this.endTime = endTime;
-  }
-  
-  public void setClusterName(String clusterName) {
-    this.clusterName = clusterName;
-  }
-  
-  public void setTotalActions(int totalActions) {
-    this.totalActions = totalActions;
-  }
-  
-  public void setCompletedActions(int completedActions) {
-    this.completedActions = completedActions;
-  }
-  
-  public void setActionEntries(List<ActionEntry> actionEntries) {
-    this.actionEntries = actionEntries;
-  }
-  
-  public String toString() {
-    StringBuilder sb = new StringBuilder();
-    sb.append("cmdStatus=");
-    sb.append(status);
-    sb.append(", startTime=");
-    sb.append(startTime);
-    sb.append(", endTime=");
-    sb.append(endTime);
-    sb.append(", clusterName=");
-    sb.append(clusterName);
-    sb.append(", totalActions=");
-    sb.append(totalActions);
-    sb.append(", completedActions=");
-    sb.append(completedActions);
-    sb.append(", actions=");
-    if (actionEntries != null) {
-      for(ActionEntry a : actionEntries) {
-        sb.append("\n");
-        sb.append(a);
-      }
-    }
-    return sb.toString();
-  }
-  
-  @XmlAccessorType(XmlAccessType.PUBLIC_MEMBER) 
-  @XmlRootElement
-  @XmlType(name="", propOrder = {})
-  public static class ActionEntry {
-    protected Action action;    
-    protected List<HostStatusPair> hostStatus;
-    
-    public ActionEntry() {
-    }
-    
-    public ActionEntry(Action action, List<HostStatusPair> hostStatus) {
-      this.action = action;
-      this.hostStatus = hostStatus;
-    }
-    
-    public Action getAction() {
-      return action;
-    }
-    
-    public List<HostStatusPair> getHostStatus() {
-      return hostStatus;
-    }
-    
-    public void setAction(Action action) {
-      this.action = action;
-    }
-    
-    public void setHostStatus(List<HostStatusPair> hostStatus) {
-      this.hostStatus = hostStatus;
-    }
-    
-    public String toString() {
-      StringBuilder sb = new StringBuilder();
-      sb.append("[(");
-      sb.append(action);
-      sb.append("), (hoststatus=");
-      if (hostStatus != null) {
-        for(HostStatusPair a : hostStatus) {
-          sb.append(a);
-          sb.append(", ");
-        }
-      }
-      sb.append(")]");
-      return sb.toString();
-    }
-  }
-  
-  @XmlAccessorType(XmlAccessType.PUBLIC_MEMBER)
-  @XmlRootElement
-  @XmlType(name="", propOrder = {})
-  public static class HostStatusPair {
-    protected String host;
-    
-    protected Status status;
-    
-    public HostStatusPair(){
-    }
-    
-    public HostStatusPair(String host, Status status) {
-      this.host = host;
-      this.status = status;
-    }
-   
-    public String getHost() {
-      return host;
-    }
-    
-    @XmlJavaTypeAdapter(StatusAdapter.class)    
-    public Status getStatus() {
-      return status;
-    }
-    
-    public void setHost(String host) {
-      this.host = host;
-    }
-    
-    public void setStatus(Status status) {
-      this.status = status;
-    }
-    
-    public String toString() {
-      StringBuilder sb = new StringBuilder();
-      sb.append(host);
-      sb.append(":");
-      sb.append(status);
-      return sb.toString();
-    }
-  }
-}
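
A sketch deriving a progress percentage from the action counters;
percentComplete is a hypothetical helper, not part of this class:

    import org.apache.hms.common.entity.Status;
    import org.apache.hms.common.entity.command.CommandStatus;

    public class ProgressExample {
      // Hypothetical helper: guard against division by zero.
      static int percentComplete(CommandStatus cs) {
        return cs.getTotalActions() == 0
            ? 0
            : (100 * cs.getCompletedActions()) / cs.getTotalActions();
      }

      public static void main(String[] args) {
        CommandStatus cs =
            new CommandStatus(Status.STARTED, "2011-06-01 10:00:00", "cluster-1");
        cs.setTotalActions(8);
        cs.setCompletedActions(6);
        System.out.println(percentComplete(cs) + "% complete");  // 75% complete
      }
    }
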
diff --git a/common/src/main/java/org/apache/hms/common/entity/command/CreateClusterCommand.java b/common/src/main/java/org/apache/hms/common/entity/command/CreateClusterCommand.java
deleted file mode 100755
index c64c84c..0000000
--- a/common/src/main/java/org/apache/hms/common/entity/command/CreateClusterCommand.java
+++ /dev/null
@@ -1,51 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.entity.command;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-import javax.xml.bind.annotation.XmlRootElement;
-import javax.xml.bind.annotation.XmlType;
-
-import org.apache.hms.common.entity.manifest.ClusterManifest;
-import org.codehaus.jackson.annotate.JsonTypeInfo;
-
-@XmlRootElement
-@JsonTypeInfo(use=JsonTypeInfo.Id.CLASS, include=JsonTypeInfo.As.PROPERTY, property="@command")
-@XmlAccessorType(XmlAccessType.PUBLIC_MEMBER) 
-@XmlType(name="", propOrder = {})
-public class CreateClusterCommand extends ClusterCommand {
-  
-  public CreateClusterCommand() {
-    this.cmd = CmdType.CREATE;
-  }
-
-  public CreateClusterCommand(String clusterName, ClusterManifest cm) {
-    super(clusterName, cm);
-  }
-  
-  public CreateClusterCommand(String clusterName) {
-    super(clusterName);
-  }
-  
-  public CreateClusterCommand(ClusterManifest cm) {
-    super(cm);
-  }
-
-}
diff --git a/common/src/main/java/org/apache/hms/common/entity/command/CreateCommand.java b/common/src/main/java/org/apache/hms/common/entity/command/CreateCommand.java
deleted file mode 100755
index d3f2294..0000000
--- a/common/src/main/java/org/apache/hms/common/entity/command/CreateCommand.java
+++ /dev/null
@@ -1,99 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.entity.command;
-
-import java.util.List;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-import javax.xml.bind.annotation.XmlElement;
-import javax.xml.bind.annotation.XmlRootElement;
-import javax.xml.bind.annotation.XmlType;
-
-import org.apache.hms.common.entity.manifest.ClusterManifest;
-
-@XmlRootElement
-@XmlAccessorType(XmlAccessType.FIELD) 
-@XmlType(name="", propOrder = {})
-public class CreateCommand extends Command {
-  @XmlElement
-  protected String clusterName;
-  @XmlElement
-  protected List<String> hosts;
-  @XmlElement
-  protected String[] packages;
-
-  public CreateCommand() {
-    this.cmd = CmdType.CREATE;
-  }
-  
-  public CreateCommand(boolean dryRun, String clusterName, List<String> hosts, String[] packages) {
-    this.cmd = CmdType.CREATE;
-    this.dryRun = dryRun;
-    this.clusterName = clusterName;
-    this.hosts = hosts;
-    this.packages = packages;
-  }
-  
-  public void setClusterName(String clusterName) {
-    this.clusterName = clusterName;
-  }
-  
-  public void setHosts(List<String> hosts) {
-    this.hosts = hosts;
-  }
-  
-  public void setPackages(String[] packages) {
-    this.packages = packages;
-  }
-  
-  public String getClusterName() {
-    return clusterName;
-  }
-  
-  public List<String> getHosts() {
-    return hosts;
-  }
-  
-  public String[] getPackages() {
-    return packages;
-  }
-  
-  public String toString() {
-    StringBuilder sb = new StringBuilder();
-    sb.append(super.toString());
-    sb.append(", cluster-name=");
-    sb.append(clusterName);
-    sb.append(", packages=");
-    if (packages != null) {
-      for (String p : packages) {
-        sb.append(p);
-        sb.append(" ");
-      }
-    }
-    sb.append(", hosts=");
-    if (hosts != null) {
-      for (String s : hosts) {
-        sb.append(s);
-        sb.append(" ");
-      }
-    }
-    return sb.toString();
-  }
-}
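
A usage sketch with illustrative host and package names; dryRun = true
presumably validates the plan without executing it on the nodes:

    import java.util.Arrays;

    import org.apache.hms.common.entity.command.CreateCommand;

    public class CreateCommandExample {
      public static void main(String[] args) {
        CreateCommand cmd = new CreateCommand(
            true,                                                     // dry run
            "cluster-1",
            Arrays.asList("node1.example.com", "node2.example.com"),
            new String[] { "hadoop-0.20" });
        System.out.println(cmd);
      }
    }
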
diff --git a/common/src/main/java/org/apache/hms/common/entity/command/DeleteClusterCommand.java b/common/src/main/java/org/apache/hms/common/entity/command/DeleteClusterCommand.java
deleted file mode 100755
index 7a0ca22..0000000
--- a/common/src/main/java/org/apache/hms/common/entity/command/DeleteClusterCommand.java
+++ /dev/null
@@ -1,48 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.entity.command;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-import javax.xml.bind.annotation.XmlElement;
-import javax.xml.bind.annotation.XmlRootElement;
-import javax.xml.bind.annotation.XmlType;
-
-import org.apache.hms.common.entity.manifest.ClusterManifest;
-import org.codehaus.jackson.annotate.JsonTypeInfo;
-
-@XmlRootElement
-@JsonTypeInfo(use=JsonTypeInfo.Id.CLASS, include=JsonTypeInfo.As.PROPERTY, property="@command")
-@XmlAccessorType(XmlAccessType.PUBLIC_MEMBER) 
-@XmlType(name="", propOrder = {})
-public class DeleteClusterCommand extends ClusterCommand {
-  
-  public DeleteClusterCommand() {
-    this.cmd = CmdType.DELETE;
-  }
-
-  public DeleteClusterCommand(String clusterName, ClusterManifest cm) {
-    super(clusterName, cm);    
-  }
-  
-  public DeleteClusterCommand(String clusterName) {
-    super(clusterName);
-  }
-
-}
diff --git a/common/src/main/java/org/apache/hms/common/entity/command/DeleteCommand.java b/common/src/main/java/org/apache/hms/common/entity/command/DeleteCommand.java
deleted file mode 100755
index 872eb96..0000000
--- a/common/src/main/java/org/apache/hms/common/entity/command/DeleteCommand.java
+++ /dev/null
@@ -1,61 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.entity.command;
-
-import java.util.List;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-import javax.xml.bind.annotation.XmlElement;
-import javax.xml.bind.annotation.XmlRootElement;
-import javax.xml.bind.annotation.XmlType;
-
-@XmlRootElement
-@XmlAccessorType(XmlAccessType.FIELD) 
-@XmlType(name="", propOrder = {})
-public class DeleteCommand extends Command {
-  @XmlElement
-  protected String clusterName;
-
-  public DeleteCommand() {
-    this.cmd = CmdType.DELETE;
-  }
-  
-  public DeleteCommand(boolean dryRun, String clusterName) {
-    this.cmd = CmdType.DELETE;
-    this.dryRun = dryRun;
-    this.clusterName = clusterName;
-  }
-  
-  public void setClusterName(String clusterName) {
-    this.clusterName = clusterName;
-  }
-  
-  public String getClusterName() {
-    return clusterName;
-  }
-  
-  public String toString() {
-    StringBuilder sb = new StringBuilder();
-    sb.append(super.toString());
-    sb.append(", cluster-name=");
-    sb.append(clusterName);
-    return sb.toString();
-  }
-}
diff --git a/common/src/main/java/org/apache/hms/common/entity/command/StatusCommand.java b/common/src/main/java/org/apache/hms/common/entity/command/StatusCommand.java
deleted file mode 100755
index c5484c2..0000000
--- a/common/src/main/java/org/apache/hms/common/entity/command/StatusCommand.java
+++ /dev/null
@@ -1,72 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.entity.command;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-import javax.xml.bind.annotation.XmlElement;
-import javax.xml.bind.annotation.XmlRootElement;
-import javax.xml.bind.annotation.XmlType;
-
-@XmlRootElement
-@XmlAccessorType(XmlAccessType.FIELD) 
-@XmlType(name="", propOrder = {})
-public class StatusCommand extends Command {
-  @XmlElement
-  protected String cmdId;
-  @XmlElement
-  protected String nodePath;
-
-  public StatusCommand() {
-    this.cmd = CmdType.STATUS;
-  }
-  
-  public StatusCommand(boolean dryRun, String cmdId, String nodePath) {
-    this.cmd = CmdType.STATUS;
-    this.dryRun = dryRun;
-    this.cmdId = cmdId;
-    this.nodePath = nodePath;
-  }
-  
-  public void setCmdId(String cmdId) {
-    this.cmdId = cmdId;
-  }
-  
-  public String getCmdId() {
-    return cmdId;
-  }
-  
-  public void setNodePath(String nodePath) {
-    this.nodePath = nodePath;
-  }
-  
-  public String getNodePath() {
-    return nodePath;
-  }
-  
-  public String toString() {
-    StringBuilder sb = new StringBuilder();
-    sb.append(super.toString());
-    sb.append(", cmdId=");
-    sb.append(cmdId);
-    sb.append(", nodePath=");
-    sb.append(nodePath);
-    return sb.toString();
-  }
-}
diff --git a/common/src/main/java/org/apache/hms/common/entity/command/UpgradeClusterCommand.java b/common/src/main/java/org/apache/hms/common/entity/command/UpgradeClusterCommand.java
deleted file mode 100755
index 6a692a3..0000000
--- a/common/src/main/java/org/apache/hms/common/entity/command/UpgradeClusterCommand.java
+++ /dev/null
@@ -1,50 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.entity.command;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-import javax.xml.bind.annotation.XmlRootElement;
-import javax.xml.bind.annotation.XmlType;
-
-import org.apache.hms.common.entity.manifest.ClusterManifest;
-import org.codehaus.jackson.annotate.JsonTypeInfo;
-
-@XmlRootElement
-@JsonTypeInfo(use=JsonTypeInfo.Id.CLASS, include=JsonTypeInfo.As.PROPERTY, property="@command")
-@XmlAccessorType(XmlAccessType.PUBLIC_MEMBER) 
-@XmlType(name="", propOrder = {})
-public class UpgradeClusterCommand extends ClusterCommand {
-  
-  public UpgradeClusterCommand() {
-    this.cmd = CmdType.UPGRADE;
-  }
-  
-  public UpgradeClusterCommand(String clusterName, ClusterManifest cm) {
-    super(clusterName, cm);
-    this.cmd = CmdType.UPGRADE;
-  }
-  
-  public UpgradeClusterCommand(String clusterName) {
-    super(clusterName);
-    this.cmd = CmdType.UPGRADE;
-  }
-  
-  public UpgradeClusterCommand(ClusterManifest cm) {
-    super(cm);
-    this.cmd = CmdType.UPGRADE;
-  }
-}
diff --git a/common/src/main/java/org/apache/hms/common/entity/manifest/ClusterHistory.java b/common/src/main/java/org/apache/hms/common/entity/manifest/ClusterHistory.java
deleted file mode 100755
index ad6f864..0000000
--- a/common/src/main/java/org/apache/hms/common/entity/manifest/ClusterHistory.java
+++ /dev/null
@@ -1,48 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.entity.manifest;
-
-import java.util.ArrayList;
-import java.util.List;
-
-import javax.xml.bind.annotation.XmlElement;
-
-import org.apache.hms.common.entity.RestSource;
-import org.codehaus.jackson.annotate.JsonTypeInfo;
-
-@JsonTypeInfo(use=JsonTypeInfo.Id.CLASS, include=JsonTypeInfo.As.PROPERTY, property="@history")
-public class ClusterHistory extends RestSource {
-  @XmlElement
-  private List<ClusterManifest> history;
-  
-  public List<ClusterManifest> getHistory() {
-    return this.history;
-  }
-  
-  public void setHistory(List<ClusterManifest> history) {
-    this.history = history;
-  }
-  
-  public void add(ClusterManifest cm) {
-    if(history==null) {
-      history = new ArrayList<ClusterManifest>();
-    }
-    history.add(cm);
-  }
-}
diff --git a/common/src/main/java/org/apache/hms/common/entity/manifest/ClusterManifest.java b/common/src/main/java/org/apache/hms/common/entity/manifest/ClusterManifest.java
deleted file mode 100755
index b651274..0000000
--- a/common/src/main/java/org/apache/hms/common/entity/manifest/ClusterManifest.java
+++ /dev/null
@@ -1,112 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.entity.manifest;
-
-import java.io.BufferedReader;
-import java.io.FileReader;
-import java.io.IOException;
-import java.net.URL;
-
-import javax.xml.bind.annotation.XmlAttribute;
-import javax.xml.bind.annotation.XmlElement;
-import javax.xml.bind.annotation.XmlRootElement;
-
-import org.apache.hms.common.util.JAXBUtil;
-
-import com.sun.jersey.api.client.WebResource;
-
-@XmlRootElement
-public class ClusterManifest extends Manifest {
-  @XmlAttribute
-  private String clusterName;
-  @XmlElement
-  private NodesManifest nodes;
-  @XmlElement
-  private SoftwareManifest software;
-  @XmlElement
-  private ConfigManifest config;
-  
-  public String getClusterName() {
-    return this.clusterName;
-  }
-  
-  public NodesManifest getNodes() {
-    return this.nodes;
-  }
-  
-  public SoftwareManifest getSoftware() {
-    return this.software;
-  }
-  
-  public ConfigManifest getConfig() {
-    return this.config;
-  }
-  
-  public void setClusterName(String cluster) {
-    this.clusterName = cluster;
-  }
-  
-  public void setNodes(NodesManifest nodes) {
-    this.nodes = nodes;
-  }
-  
-  public void setSoftware(SoftwareManifest software) {
-    this.software = software;
-  }
-  
-  public void setConfig(ConfigManifest config) {
-    this.config = config;
-  }
-  
-  public void load() throws IOException {
-    if(nodes!=null && nodes.getUrl()!=null && nodes.getRoles()==null) {
-      URL url = nodes.getUrl();
-      nodes = fetch(url, NodesManifest.class);
-      nodes.setUrl(url);
-    }
-    if(software!=null && software.getUrl()!=null && software.getRoles()==null) {
-      URL url = software.getUrl();
-      software = fetch(url, SoftwareManifest.class);
-      software.setUrl(url);
-    }
-    if(config!=null && config.getUrl()!=null && config.getActions()==null) {
-      URL url = config.getUrl();
-      config = fetch(url, ConfigManifest.class);
-      config.setUrl(url);
-      config.expand(nodes);
-    }
-  }
-  
-  private <T> T fetch(URL url, java.lang.Class<T> c) throws IOException {
-    if(url.getProtocol().toLowerCase().equals("file")) {
-      BufferedReader in = new BufferedReader(new FileReader(url.getPath()));
-      try {
-        StringBuilder buffer = new StringBuilder();
-        String str;
-        while((str = in.readLine()) != null) {
-          buffer.append(str);
-        }
-        return JAXBUtil.read(buffer.toString().getBytes(), c);
-      } finally {
-        in.close(); // close the reader so the file handle is not leaked
-      }
-    } else {
-      com.sun.jersey.api.client.Client wsClient = com.sun.jersey.api.client.Client.create();
-      WebResource webResource = wsClient.resource(url.toString());
-      return webResource.get(c);
-    }
-  }
-}
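
The load() method above resolves each sub-manifest lazily: a manifest that carries only a URL is fetched (directly from file: URLs, otherwise over HTTP via Jersey), and the config manifest is then expanded against the node roles. A minimal driver sketch, where the cluster name and file paths are hypothetical and would point at serialized manifest documents:

    import java.net.URL;
    import org.apache.hms.common.entity.manifest.ClusterManifest;
    import org.apache.hms.common.entity.manifest.ConfigManifest;
    import org.apache.hms.common.entity.manifest.NodesManifest;

    public class ManifestLoadSketch {
      public static void main(String[] args) throws Exception {
        ClusterManifest cm = new ClusterManifest();
        cm.setClusterName("cluster-1");                    // hypothetical
        NodesManifest nodes = new NodesManifest();
        nodes.setUrl(new URL("file:///tmp/nodes.json"));   // hypothetical path
        cm.setNodes(nodes);
        ConfigManifest config = new ConfigManifest();
        config.setUrl(new URL("file:///tmp/config.json")); // hypothetical path
        cm.setConfig(config);
        cm.load(); // fetches both manifests, then expands ${role} tokens in config
      }
    }
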
diff --git a/common/src/main/java/org/apache/hms/common/entity/manifest/ConfigManifest.java b/common/src/main/java/org/apache/hms/common/entity/manifest/ConfigManifest.java
deleted file mode 100755
index 175d628..0000000
--- a/common/src/main/java/org/apache/hms/common/entity/manifest/ConfigManifest.java
+++ /dev/null
@@ -1,84 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.entity.manifest;
-
-import java.util.List;
-import java.util.regex.Matcher;
-import java.util.regex.Pattern;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-import javax.xml.bind.annotation.XmlElement;
-import javax.xml.bind.annotation.XmlRootElement;
-import javax.xml.bind.annotation.XmlType;
-
-import org.apache.commons.lang.StringUtils;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hms.common.entity.action.Action;
-import org.apache.hms.common.entity.action.ScriptAction;
-
-@XmlAccessorType(XmlAccessType.FIELD) 
-@XmlType(name="", propOrder = {})
-@XmlRootElement
-public class ConfigManifest  extends Manifest {
-  private static Log LOG = LogFactory.getLog(ConfigManifest.class);
-
-  @XmlElement
-  private List<Action> actions;
-  
-  public List<Action> getActions() {
-    return actions;
-  }
-  
-  public void setActions(List<Action> actions) {
-    this.actions = actions;
-  }
-
-  public void expand(NodesManifest nodes) {
-    List<Role> roles = nodes.getRoles();
-    Pattern p = Pattern.compile("\\$\\{(.*?)\\}");
-    int index = 0;
-    for(Action action: actions) {
-      if(action instanceof ScriptAction) {
-        String[] params = ((ScriptAction) action).getParameters();
-        String[] expandedParams = new String[params.length];
-        int i = 0;
-        for(String param : params) {
-          expandedParams[i] = param;
-          Matcher m = p.matcher(param);
-          while(m.find()) {
-            String token = m.group(1);
-            for(Role role : roles) {
-              if(role.name.equals(token)) {
-                String[] hosts = role.getHosts();
-                String replacement = StringUtils.join(hosts, ",");
-                expandedParams[i] = expandedParams[i].replace(m.group(0), replacement);
-              }
-            }
-          }
-          i++;
-        }
-        ((ScriptAction) action).setParameters(expandedParams);
-        actions.set(index, action);
-      }
-      index++;
-    }    
-  }
-}
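
Concretely, expand() rewrites every ${role-name} token in a ScriptAction's parameter list with the comma-joined host list of the matching role. A small in-method sketch (role, host, and parameter values hypothetical; assumes NodesManifest exposes a setRoles counterpart to the getRoles used above, and the action/manifest imports are in scope):

    Role role = new Role();
    role.setName("namenode");                        // hypothetical role
    role.setHosts(new String[] {"host1", "host2"});  // hypothetical hosts

    NodesManifest nodes = new NodesManifest();
    nodes.setRoles(java.util.Arrays.asList(role));   // assumed setter, mirroring getRoles()

    ScriptAction action = new ScriptAction();
    action.setParameters(new String[] {"-fs ${namenode}"});

    ConfigManifest config = new ConfigManifest();
    java.util.List<Action> actions = new java.util.ArrayList<Action>();
    actions.add(action);                 // mutable list: expand() calls actions.set(...)
    config.setActions(actions);
    config.expand(nodes);
    // action.getParameters()[0] is now "-fs host1,host2"
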
diff --git a/common/src/main/java/org/apache/hms/common/entity/manifest/Manifest.java b/common/src/main/java/org/apache/hms/common/entity/manifest/Manifest.java
deleted file mode 100755
index ac8f357..0000000
--- a/common/src/main/java/org/apache/hms/common/entity/manifest/Manifest.java
+++ /dev/null
@@ -1,44 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.entity.manifest;
-
-import java.net.URL;
-
-import javax.xml.bind.annotation.XmlAttribute;
-import javax.xml.bind.annotation.XmlSeeAlso;
-import javax.xml.bind.annotation.XmlTransient;
-
-import org.apache.hms.common.entity.RestSource;
-import org.codehaus.jackson.annotate.JsonTypeInfo;
-
-@XmlSeeAlso({ ClusterManifest.class, ConfigManifest.class, NodesManifest.class, SoftwareManifest.class })
-@XmlTransient
-@JsonTypeInfo(use=JsonTypeInfo.Id.CLASS, include=JsonTypeInfo.As.PROPERTY, property="@manifest")
-public abstract class Manifest extends RestSource {
-  @XmlAttribute
-  private URL url;
-  
-  public URL getUrl() {
-    return this.url;
-  }
-  
-  public void setUrl(URL url) {
-    this.url = url;
-  }
-}
diff --git a/common/src/main/java/org/apache/hms/common/entity/manifest/Node.java b/common/src/main/java/org/apache/hms/common/entity/manifest/Node.java
deleted file mode 100755
index 27301ff..0000000
--- a/common/src/main/java/org/apache/hms/common/entity/manifest/Node.java
+++ /dev/null
@@ -1,51 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.entity.manifest;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-import javax.xml.bind.annotation.XmlAttribute;
-import javax.xml.bind.annotation.XmlRootElement;
-import javax.xml.bind.annotation.XmlType;
-
-@XmlAccessorType(XmlAccessType.FIELD) 
-@XmlType(name="", propOrder = {})
-@XmlRootElement
-public class Node {
-  @XmlAttribute
-  public String name;
-  @XmlAttribute
-  public String type;
-  
-  public String getName() {
-    return this.name;
-  }
-  
-  public String getType() {
-    return this.type;
-  }
-  
-  public void setName(String name) {
-    this.name = name;
-  }
-  
-  public void setType(String type) {
-    this.type = type;
-  }
-}
diff --git a/common/src/main/java/org/apache/hms/common/entity/manifest/PackageInfo.java b/common/src/main/java/org/apache/hms/common/entity/manifest/PackageInfo.java
deleted file mode 100755
index ae17368..0000000
--- a/common/src/main/java/org/apache/hms/common/entity/manifest/PackageInfo.java
+++ /dev/null
@@ -1,65 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.entity.manifest;
-
-import javax.xml.bind.annotation.XmlElement;
-import org.apache.hms.common.entity.RestSource;
-
-public class PackageInfo extends RestSource {
-  @XmlElement
-  private String name;
-  @XmlElement
-  private String[] relocate;
-  
-  public String getName() {
-    return name;
-  }
-  
-  public String[] getRelocate() {
-    return relocate;
-  }
-  
-  public void setName(String name) {
-    this.name = name;
-  }
-  
-  public void setRelocate(String[] relocate) {
-    this.relocate = relocate;
-  }
-  
-  public String toString() {
-    StringBuilder sb = new StringBuilder();
-    sb.append("package: ");
-    sb.append(name);
-    return sb.toString();
-  }
-  
-  @Override
-  public boolean equals(Object obj) {
-    if(!(obj instanceof PackageInfo)) {
-      return false; // covers null and foreign types, avoiding a ClassCastException
-    }
-    return name.equals(((PackageInfo) obj).getName());
-  }
-  
-  @Override
-  public int hashCode() {
-    return name.hashCode();
-  }
-}
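
Because equals() and hashCode() above key solely on the package name, PackageInfo deduplicates by name in hashed collections, which is the point of overriding both together; e.g.:

    PackageInfo a = new PackageInfo();
    a.setName("hadoop");
    PackageInfo b = new PackageInfo();
    b.setName("hadoop");
    java.util.Set<PackageInfo> set = new java.util.HashSet<PackageInfo>();
    set.add(a);
    set.add(b);
    // set.size() == 1: the two entries collapse to a single package
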
diff --git a/common/src/main/java/org/apache/hms/common/entity/manifest/Role.java b/common/src/main/java/org/apache/hms/common/entity/manifest/Role.java
deleted file mode 100644
index fed12db..0000000
--- a/common/src/main/java/org/apache/hms/common/entity/manifest/Role.java
+++ /dev/null
@@ -1,57 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.entity.manifest;
-
-import javax.xml.bind.annotation.XmlAttribute;
-import javax.xml.bind.annotation.XmlElement;
-
-import org.apache.hms.common.entity.RestSource;
-
-public class Role extends RestSource {
-  @XmlAttribute
-  public String name;
-  @XmlElement(name="package")
-  private PackageInfo[] packages;
-  @XmlElement(name="host")
-  private String[] hosts;
-  
-  public String getName() {
-    return this.name;
-  }
-  
-  public PackageInfo[] getPackages() {
-    return this.packages;
-  }
-  
-  public String[] getHosts() {
-    return this.hosts;
-  }
-  
-  public void setName(String name) {
-    this.name = name;
-  }
-  
-  public void setPackages(PackageInfo[] packages) {
-    this.packages = packages;
-  }
-  
-  public void setHosts(String[] hosts) {
-    this.hosts = hosts;
-  }
-}
diff --git a/common/src/main/java/org/apache/hms/common/util/MulticastDNS.java b/common/src/main/java/org/apache/hms/common/util/MulticastDNS.java
deleted file mode 100755
index 8759b62..0000000
--- a/common/src/main/java/org/apache/hms/common/util/MulticastDNS.java
+++ /dev/null
@@ -1,93 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.util;
-
-import java.io.IOException;
-import java.net.InetAddress;
-import java.net.UnknownHostException;
-import java.util.Hashtable;
-
-import javax.jmdns.JmDNS;
-import javax.jmdns.ServiceInfo;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-
-public class MulticastDNS {
-  private static Log LOG = LogFactory.getLog(MulticastDNS.class);
-  public static JmDNS jmdns;
-  private InetAddress addr;
-  private String svcType = "_zookeeper._tcp.local.";
-  private String svcName;
-  private int svcPort = 2181;
-  private Hashtable<String, String> settings;
- 
-  public MulticastDNS() throws UnknownHostException {
-    super();
-    InetAddress localAddr = InetAddress.getLocalHost(); // local only; do not shadow the addr field
-    String hostname = localAddr.getHostName();
-    if(hostname.indexOf('.')>0) {
-      hostname = hostname.substring(0, hostname.indexOf('.'));
-    }
-    svcName = hostname;
-    settings = new Hashtable<String,String>();
-    settings.put("host", svcName);
-    settings.put("port", new Integer(svcPort).toString());
-  }
-  
-  public MulticastDNS(String svcType, int svcPort) throws UnknownHostException {
-    this();
-    this.svcType = svcType;
-    this.svcPort = svcPort;
-  }
- 
-  public MulticastDNS(InetAddress addr, String svcType, int svcPort) throws UnknownHostException {
-    this();
-    this.addr = addr;
-    this.svcType = svcType;
-    this.svcPort = svcPort;
-  }
-
-  protected void handleRegisterCommand() throws IOException {
-    if(jmdns==null) {
-      if(addr!=null) {
-        jmdns = JmDNS.create(this.addr);      
-      } else {
-        jmdns = JmDNS.create();
-      }
-    }
-    ServiceInfo svcInfo = ServiceInfo.create(svcType, svcName, svcPort, 1, 1, settings);
-    try {
-      this.jmdns.registerService(svcInfo);
-      LOG.info("Registered service '" + svcName + "' as: " + svcInfo);
-    } catch (IOException e) {
-      LOG.error("Failed to register service '" + svcName + "'");
-    }
-  }
-
-  protected void handleUnregisterCommand() {
-    this.jmdns.unregisterAllServices();
-    try {
-      this.jmdns.close();
-    } catch (IOException e) {
-      LOG.error(ExceptionUtil.getStackTrace(e));
-    }
-  }
- 
-}
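
The register/unregister pair above is the whole lifecycle: construct the helper, advertise the service, and later withdraw it. A condensed sketch of the flow that the TestMulticastDNS case below exercises (service type and port hypothetical; the handle* methods are protected, so the caller lives in the same package or a subclass):

    MulticastDNS mdns = new MulticastDNS("_test._tcp.local.", 2181);
    mdns.handleRegisterCommand();    // advertise this host's service via mDNS
    // ... service is discoverable while registered ...
    mdns.handleUnregisterCommand();  // withdraw all services and close JmDNS
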
diff --git a/common/src/main/java/org/apache/hms/common/util/ZookeeperUtil.java b/common/src/main/java/org/apache/hms/common/util/ZookeeperUtil.java
deleted file mode 100755
index decb285..0000000
--- a/common/src/main/java/org/apache/hms/common/util/ZookeeperUtil.java
+++ /dev/null
@@ -1,61 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.util;
-
-import java.util.regex.Matcher;
-import java.util.regex.Pattern;
-
-import org.apache.hms.common.conf.CommonConfigurationKeys;
-
-public class ZookeeperUtil {
-  public static final String COMMAND_STATUS = "/status";
-  private static final Pattern BASENAME = Pattern.compile(".*?([^/]*)$");
-  
-  public static String getClusterPath(String clusterName) {
-    StringBuilder clusterNode = new StringBuilder();
-    clusterNode.append(CommonConfigurationKeys.ZOOKEEPER_CLUSTER_ROOT_DEFAULT);
-    clusterNode.append("/");
-    clusterNode.append(clusterName);
-    return clusterNode.toString();
-  }
-  
-  public static String getCommandStatusPath(String cmdPath) {
-    StringBuilder cmdStatusPath = new StringBuilder();
-    cmdStatusPath.append(cmdPath);
-    cmdStatusPath.append(COMMAND_STATUS);
-    return cmdStatusPath.toString();
-  }
-  
-  public static String getBaseURL(String url) {
-      Matcher matcher = BASENAME.matcher(url);
-      if (matcher.matches()) {
-        return matcher.group(1);
-      } else {
-        throw new IllegalArgumentException("Can't parse " + url);
-      }
-  }
-  
-  public static String getNodesManifestPath(String id) {
-    StringBuilder nodesPath = new StringBuilder();
-    nodesPath.append(CommonConfigurationKeys.ZOOKEEPER_NODES_MANIFEST_PATH_DEFAULT);
-    nodesPath.append("/");
-    nodesPath.append(id);
-    return nodesPath.toString();
-  }
-}
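
These helpers are pure string concatenation over the configured ZooKeeper roots; for example (cluster and command ids hypothetical):

    String clusterPath = ZookeeperUtil.getClusterPath("cluster-1");
    // -> <ZOOKEEPER_CLUSTER_ROOT_DEFAULT>/cluster-1
    String statusPath = ZookeeperUtil.getCommandStatusPath(clusterPath + "/cmd-0001");
    // -> .../cluster-1/cmd-0001/status
    String base = ZookeeperUtil.getBaseURL("http://host/manifests/nodes.json");
    // -> "nodes.json" (the final path segment)
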
diff --git a/common/src/packages/build.xml b/common/src/packages/build.xml
deleted file mode 100644
index 8af9aa8..0000000
--- a/common/src/packages/build.xml
+++ /dev/null
@@ -1,24 +0,0 @@
-<?xml version="1.0"?>
-
-<!--
-   Licensed to the Apache Software Foundation (ASF) under one or more
-   contributor license agreements.  See the NOTICE file distributed with
-   this work for additional information regarding copyright ownership.
-   The ASF licenses this file to You under the Apache License, Version 2.0
-   (the "License"); you may not use this file except in compliance with
-   the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-   Unless required by applicable law or agreed to in writing, software
-   distributed under the License is distributed on an "AS IS" BASIS,
-   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-   See the License for the specific language governing permissions and
-   limitations under the License.
--->
-
-<project name="hms common packaging">
-  <target name="package-deb"/>
-  <target name="package-rpm"/>
-
-</project>
diff --git a/common/src/packages/tarball/all.xml b/common/src/packages/tarball/all.xml
deleted file mode 100644
index a62112d..0000000
--- a/common/src/packages/tarball/all.xml
+++ /dev/null
@@ -1,60 +0,0 @@
-<?xml version="1.0"?>
-<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.1"
-          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-          xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.1 http://maven.apache.org/xsd/assembly-1.1.1.xsd">
-  <!--This 'all' id is not appended to the produced bundle because we do this:
-    http://maven.apache.org/plugins/maven-assembly-plugin/faq.html#required-classifiers
-  -->
-  <formats>
-    <format>tar.gz</format>
-  </formats>
-  <fileSets>
-    <fileSet>
-      <includes>
-        <include>${basedir}/*.txt</include>
-      </includes>
-    </fileSet>
-    <fileSet>
-      <includes>
-        <include>pom.xml</include>
-      </includes>
-    </fileSet>
-    <fileSet>
-      <directory>src</directory>
-    </fileSet>
-    <fileSet>
-      <directory>conf</directory>
-    </fileSet>
-    <fileSet>
-      <directory>bin</directory>
-      <fileMode>755</fileMode>
-    </fileSet>
-    <fileSet>
-      <directory>target</directory>
-      <outputDirectory>/</outputDirectory>
-      <includes>
-          <include>${artifactId}-${project.version}.jar</include>
-          <include>${artifactId}-${project.version}-tests.jar</include>
-      </includes>
-    </fileSet>
-    <fileSet>
-      <directory>target/site</directory>
-      <outputDirectory>docs</outputDirectory>
-    </fileSet>
-    <fileSet>
-      <directory>src/packages</directory>
-      <outputDirectory>sbin</outputDirectory>
-      <fileMode>755</fileMode>
-      <includes>
-          <include>update-hms-${artifactId}-env.sh</include>
-      </includes>
-    </fileSet>
-  </fileSets>
-  <dependencySets>
-    <dependencySet>
-      <outputDirectory>/lib</outputDirectory>
-      <unpack>false</unpack>
-      <scope>runtime</scope>
-    </dependencySet>
-  </dependencySets>
-</assembly>
diff --git a/common/src/test/java/org/apache/hms/common/util/TestMulticastDNS.java b/common/src/test/java/org/apache/hms/common/util/TestMulticastDNS.java
deleted file mode 100755
index 22bf4eb..0000000
--- a/common/src/test/java/org/apache/hms/common/util/TestMulticastDNS.java
+++ /dev/null
@@ -1,62 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.util;
-
-import java.net.InetAddress;
-import java.util.Collection;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.testng.Assert;
-import org.testng.annotations.Test;
-import org.apache.hms.common.util.ExceptionUtil;
-
-/**
- * Tests for the MulticastDNS and ServiceDiscoveryUtil classes.
- *
- */
-public class TestMulticastDNS {
-  private static Log LOG = LogFactory.getLog(TestMulticastDNS.class);
-
-  @Test
-  public void testZeroconf() {
-    try {
-      String type = "_test._tcp.local.";
-      InetAddress addr = InetAddress.getLocalHost();
-      ServiceDiscoveryUtil sdu = new ServiceDiscoveryUtil(addr, type);
-      sdu.start();
-      MulticastDNS mdns = new MulticastDNS(addr, type, 2181);
-      mdns.handleRegisterCommand();
-      Thread.sleep(5000);
-      Collection<String> list = sdu.resolve();
-      mdns.handleUnregisterCommand();
-      boolean test = false;
-      for(String x : list) {
-        if(x.contains(addr.getHostAddress())) {
-          test = true;
-        }
-      }
-      Assert.assertTrue(test);
-    } catch (Exception e) {
-      LOG.error(ExceptionUtil.getStackTrace(e));
-      Assert.fail(ExceptionUtil.getStackTrace(e));
-    }
-  }
-  
-}
diff --git a/common/src/test/java/org/apache/hms/common/util/TestSerialization.java b/common/src/test/java/org/apache/hms/common/util/TestSerialization.java
deleted file mode 100755
index 3ffd683..0000000
--- a/common/src/test/java/org/apache/hms/common/util/TestSerialization.java
+++ /dev/null
@@ -1,138 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.common.util;
-
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.List;
-
-import org.apache.hms.common.entity.Status;
-import org.apache.hms.common.entity.action.Action;
-import org.apache.hms.common.entity.action.ActionDependency;
-import org.apache.hms.common.entity.action.DaemonAction;
-import org.apache.hms.common.entity.action.ActionStatus;
-import org.apache.hms.common.entity.action.ScriptAction;
-import org.apache.hms.common.entity.cluster.MachineState.StateEntry;
-import org.apache.hms.common.entity.cluster.MachineState.StateType;
-import org.apache.hms.common.entity.command.CommandStatus;
-import org.apache.hms.common.entity.command.CommandStatus.ActionEntry;
-import org.apache.hms.common.entity.command.CommandStatus.HostStatusPair;
-import org.testng.Assert;
-import org.testng.annotations.Test;
-
-/**
- * Test Java Object to JSON serialization using Jackson
- *
- */
-public class TestSerialization {
-  
-  /**
-   * Test polymorphic handling of Jackson serialization
-   */
-  @Test
-  public void testPolymorphHandling() {
-    DaemonAction x = new DaemonAction();
-    x.setActionType("start");
-    x.setDaemonName("hadoop-namenode");
-    Action y = null;
-    try {
-      y = JAXBUtil.read(JAXBUtil.write(x), Action.class);
-      if(y instanceof DaemonAction) {
-        Assert.assertEquals(y.getClass().getName(), x.getClass().getName());
-        DaemonAction z = (DaemonAction) y;
-        Assert.assertEquals(z.getActionType(), "start");
-      } else {
-        Assert.fail("y is not instance of DaemonAction");
-      }
-    } catch (IOException e) {
-      Assert.fail("Serialization failed. "+e.getStackTrace());
-    } catch (Exception e) {
-      Assert.fail(x.getClass().getName() + " and " + y.getClass().getCanonicalName() + " do not match.");
-    }
-  }
-  
-  /**
-   * Test action status serialization
-   */
-  @Test
-  public void testActionStatus() {
-    ActionStatus s = new ActionStatus();
-    DaemonAction expected = new DaemonAction();
-    expected.setActionType("start");
-    expected.setDaemonName("hadoop-namenode");
-    s.setCode(0);
-    s.setAction(expected);
-    s.setOutput("output");
-    s.setError("error");
-    
-    try {
-      ActionStatus e = JAXBUtil.read(JAXBUtil.write(s), ActionStatus.class);
-      DaemonAction actual = (DaemonAction) e.getAction();
-      Assert.assertEquals(actual.getActionType(), expected.getActionType());
-      Assert.assertEquals(actual.getDaemonName(), expected.getDaemonName());
-      Assert.assertEquals(e.getCode(), s.getCode());
-      Assert.assertEquals(e.getOutput(), s.getOutput());
-      Assert.assertEquals(e.getError(), s.getError());
-
-    } catch (Exception e) {
-      Assert.fail(e.getMessage());
-    }
-  }
-  
-  /**
-   * Test command status serialization
-   */
-  @Test
-  public void testCommandStatus() {
-    String test = "test";
-    CommandStatus cs = new CommandStatus();
-    List<ActionDependency> ad = new ArrayList<ActionDependency>();
-    ActionDependency actionDep = new ActionDependency();
-    List<String> hosts = new ArrayList<String>();
-    hosts.add("localhost");
-    actionDep.setHosts(hosts);
-    List<StateEntry> states = new ArrayList<StateEntry>();    
-    states.add(new StateEntry(StateType.DAEMON, "hadoop-namenode", Status.INSTALLED));
-    actionDep.setStates(states);
-    ad.add(actionDep);
-    
-    ScriptAction sa = new ScriptAction();
-    sa.setScript("ls");
-    sa.setDependencies(ad);
-    
-    String[] parameters = { "-l" };
-    sa.setParameters(parameters);
-    ActionEntry actionEntry = new ActionEntry();
-    actionEntry.setAction(sa);
-    List<HostStatusPair> hostStatus = new ArrayList<HostStatusPair>();
-    actionEntry.setHostStatus(hostStatus);
-    
-    List<ActionEntry> alist = new ArrayList<ActionEntry>();
-    alist.add(actionEntry);
-    
-    cs.setActionEntries(alist);
-    cs.setClusterName(test);
-    try {
-      CommandStatus read = JAXBUtil.read(JAXBUtil.write(cs), CommandStatus.class);
-      Assert.assertEquals(read.getClusterName(), cs.getClusterName());
-    } catch(Exception e) {
-      Assert.fail(ExceptionUtil.getStackTrace(e));
-    }
-  }
-}
diff --git a/common/src/test/resources/log4j.properties b/common/src/test/resources/log4j.properties
deleted file mode 100755
index 622381d..0000000
--- a/common/src/test/resources/log4j.properties
+++ /dev/null
@@ -1,28 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-log4j.rootLogger=INFO, R
-log4j.appender.R=org.apache.log4j.RollingFileAppender
-log4j.appender.R.File=${HMS_LOG_DIR}/test.log
-log4j.appender.R.MaxFileSize=10MB
-log4j.appender.R.MaxBackupIndex=10
-log4j.appender.R.layout=org.apache.log4j.PatternLayout
-log4j.appender.R.layout.ConversionPattern=%d{ISO8601} %p %t %c{1} - %m%n
-
-log4j.appender.stdout=org.apache.log4j.ConsoleAppender
-log4j.appender.stdout.follow=true
-log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
-log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %p %t %c{1} - %m%n
-
diff --git a/common/bin/hms-daemon.sh b/controller/bin/hms-daemon.sh
similarity index 100%
rename from common/bin/hms-daemon.sh
rename to controller/bin/hms-daemon.sh
diff --git a/agent/src/packages/deb/hms-agent.control/postrm b/controller/conf/ambari-env.sh
old mode 100755
new mode 100644
similarity index 91%
copy from agent/src/packages/deb/hms-agent.control/postrm
copy to controller/conf/ambari-env.sh
index a6876c3..770878a
--- a/agent/src/packages/deb/hms-agent.control/postrm
+++ b/controller/conf/ambari-env.sh
@@ -1,5 +1,3 @@
-#!/bin/sh
-
 # Licensed to the Apache Software Foundation (ASF) under one or more
 # contributor license agreements.  See the NOTICE file distributed with
 # this work for additional information regarding copyright ownership.
@@ -15,6 +13,4 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-/usr/sbin/userdel hms 2> /dev/null >/dev/null
-exit 0
-
+AMBARI_PID_DIR=/tmp
diff --git a/controller/conf/auth.conf b/controller/conf/auth.conf
new file mode 100644
index 0000000..ee59e11
--- /dev/null
+++ b/controller/conf/auth.conf
@@ -0,0 +1 @@
+controller: controller, user
diff --git a/controller/conf/hms-env.sh b/controller/conf/hms-env.sh
deleted file mode 100644
index 4ffa9af..0000000
--- a/controller/conf/hms-env.sh
+++ /dev/null
@@ -1 +0,0 @@
-JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home
diff --git a/controller/src/main/resources/log4j.properties b/controller/conf/log4j.properties
similarity index 95%
rename from controller/src/main/resources/log4j.properties
rename to controller/conf/log4j.properties
index 93b5147..c597ea1 100644
--- a/controller/src/main/resources/log4j.properties
+++ b/controller/conf/log4j.properties
@@ -15,7 +15,7 @@
 
 log4j.rootLogger=INFO, R
 log4j.appender.R=org.apache.log4j.RollingFileAppender
-log4j.appender.R.File=${HMS_LOG_DIR}/hms-controller.log
+log4j.appender.R.File=${AMBARI_LOG_DIR}/ambari-controller.log
 log4j.appender.R.MaxFileSize=10MB
 log4j.appender.R.MaxBackupIndex=10
 log4j.appender.R.layout=org.apache.log4j.PatternLayout
diff --git a/controller/pom.xml b/controller/pom.xml
index e4775e5..0a2ed24 100644
--- a/controller/pom.xml
+++ b/controller/pom.xml
@@ -1,30 +1,340 @@
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
 <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
 
     <parent>
-        <groupId>org.apache.hms</groupId>
-        <artifactId>hms</artifactId>
-        <version>0.1.0</version>
+        <groupId>org.apache.ambari</groupId>
+        <artifactId>ambari</artifactId>
+        <version>0.1.0-SNAPSHOT</version>
     </parent>
 
     <modelVersion>4.0.0</modelVersion>
-    <groupId>org.apache.hms</groupId>
-    <artifactId>hms-controller</artifactId>
+    <groupId>org.apache.ambari</groupId>
+    <artifactId>ambari-controller</artifactId>
     <packaging>jar</packaging>
     <name>controller</name>
     <version>0.1.0-SNAPSHOT</version>
-    <description>Hadoop Management System Controller</description>
+    <description>Ambari Controller</description>
+
+    <build>
+      <resources>
+        <resource>
+         <directory>src/main/resources</directory>
+        </resource>
+      </resources>
+      <plugins>
+        <plugin>
+          <artifactId>maven-assembly-plugin</artifactId>
+        </plugin>
+        <plugin>
+          <groupId>org.apache.maven.plugins</groupId>
+          <artifactId>maven-javadoc-plugin</artifactId>
+          <executions>
+            <execution>
+              <id>public-api</id>
+              <goals>
+                <goal>javadoc</goal>
+              </goals>
+              <phase>compile</phase>
+              <configuration>
+                <encoding>UTF-8</encoding>
+                <verbose>false</verbose>
+                <show>public</show>
+                <subpackages>org.apache.ambari.controller.rest.resources</subpackages>
+                <doclet>com.sun.jersey.wadl.resourcedoc.ResourceDoclet</doclet>
+                <docletPath>${path.separator}${project.build.outputDirectory}</docletPath>
+                <docletArtifacts>
+                  <docletArtifact>
+                    <groupId>com.sun.jersey.contribs</groupId>
+                    <artifactId>wadl-resourcedoc-doclet</artifactId>
+                    <version>1.8</version>
+                  </docletArtifact>
+<!--                  <docletArtifact>
+                    <groupId>com.sun.jersey</groupId>
+                    <artifactId>jersey-server</artifactId>
+                    <version>1.8</version>
+                  </docletArtifact> -->
+                  <docletArtifact>
+                    <groupId>xerces</groupId>
+                    <artifactId>xercesImpl</artifactId>
+                    <version>2.6.1</version>
+                  </docletArtifact>
+                </docletArtifacts>
+                <useStandardDocletOptions>false</useStandardDocletOptions>
+                <additionalparam>-output ${project.build.outputDirectory}/resourcedoc.xml</additionalparam>
+              </configuration>
+            </execution>
+            <execution>
+              <id>private-api</id>
+              <goals>
+                <goal>javadoc</goal>
+              </goals>
+              <phase>compile</phase>
+              <configuration>
+                <encoding>UTF-8</encoding>
+                <verbose>false</verbose>
+                <show>public</show>
+                <subpackages>org.apache.ambari.controller.rest.agent</subpackages>
+                <doclet>com.sun.jersey.wadl.resourcedoc.ResourceDoclet</doclet>
+                <docletPath>${path.separator}${project.build.outputDirectory}</docletPath>
+                <docletArtifacts>
+                  <docletArtifact>
+                    <groupId>com.sun.jersey.contribs</groupId>
+                    <artifactId>wadl-resourcedoc-doclet</artifactId>
+                    <version>1.8</version>
+                  </docletArtifact>
+<!--                  <docletArtifact>
+                    <groupId>com.sun.jersey</groupId>
+                    <artifactId>jersey-server</artifactId>
+                    <version>1.8</version>
+                  </docletArtifact> -->
+                  <docletArtifact>
+                    <groupId>xerces</groupId>
+                    <artifactId>xercesImpl</artifactId>
+                    <version>2.6.1</version>
+                  </docletArtifact>
+                </docletArtifacts>
+                <useStandardDocletOptions>false</useStandardDocletOptions>
+                <additionalparam>-output ${project.build.outputDirectory}/resourcedoc-agent.xml</additionalparam>
+              </configuration>
+            </execution>
+          </executions>
+        </plugin>
+        <plugin>
+          <groupId>com.sun.jersey.contribs</groupId>
+            <artifactId>maven-wadl-plugin</artifactId>
+            <version>1.8</version>
+            <executions>
+              <execution>
+                <id>generate</id>
+                <goals>
+                  <goal>generate</goal>
+                </goals>
+                <phase>compile</phase>
+              </execution>
+            </executions>
+            <configuration>
+              <wadlFile>${project.build.outputDirectory}/application.wadl</wadlFile>
+              <formatWadlFile>true</formatWadlFile>
+              <baseUri>http://ambari.example.com/rest</baseUri>
+              <packagesResourceConfig>
+                <param>org.apache.ambari.controller.rest.resources</param>
+              </packagesResourceConfig>
+              <wadlGenerators>
+                <wadlGeneratorDescription>
+                  <className>com.sun.jersey.server.wadl.generators.WadlGeneratorApplicationDoc</className>
+                  <properties>
+                    <property>
+                      <name>applicationDocsFile</name>
+                      <value>${basedir}/src/main/resources/application-doc.xml</value>
+                    </property>
+                  </properties>
+                </wadlGeneratorDescription>
+                <wadlGeneratorDescription>
+                  <className>com.sun.jersey.server.wadl.generators.WadlGeneratorGrammarsSupport</className>
+                  <properties>
+                    <property>
+                      <name>grammarsFile</name>
+                      <value>${basedir}/src/main/resources/application-grammars.xml</value>
+                    </property>
+                  </properties>
+                </wadlGeneratorDescription>
+                <wadlGeneratorDescription>
+                  <className>com.sun.jersey.server.wadl.generators.resourcedoc.WadlGeneratorResourceDocSupport</className>
+                  <properties>
+                    <property>
+                      <name>resourceDocFile</name>
+                      <value>${project.build.outputDirectory}/resourcedoc.xml</value>
+                    </property>
+                  </properties>
+                </wadlGeneratorDescription>
+              </wadlGenerators>
+            </configuration>
+        </plugin>
+      </plugins>
+    </build>
+
+    <profiles>
+        <profile>
+            <id>docs</id>
+            <activation>
+                <file>
+                    <exists>/usr/bin/xsltproc</exists>
+                </file>
+            </activation>
+            <build>
+                <plugins>
+                    <!--  Create/generate the application.html using xsltproc  -->
+                    <plugin>
+                        <groupId>org.codehaus.mojo</groupId>
+                        <artifactId>exec-maven-plugin</artifactId>
+                        <version>1.1</version>
+                        <executions>
+                            <execution>
+                                <id>exec-xsltproc: target/application.html</id>
+                                <goals>
+                                    <goal>exec</goal>
+                                </goals>
+                                <phase>package</phase>
+                                <configuration>
+                                    <executable>xsltproc</executable>
+                                    <commandlineArgs>-o ../src/site/resources/application.html src/main/webapps/wadl.xsl target/classes/application.wadl</commandlineArgs>
+                                </configuration>
+                            </execution>
+                        </executions>
+                    </plugin>
+                </plugins>
+            </build>
+            <dependencies>
+                <dependency>
+                    <groupId>com.sun.jersey.contribs</groupId>
+                    <artifactId>maven-wadl-plugin</artifactId>
+                    <version>1.8</version>
+                </dependency>
+                <dependency>
+                    <groupId>com.sun.jersey.contribs</groupId>
+                    <artifactId>wadl-resourcedoc-doclet</artifactId>
+                    <version>1.8</version>
+                </dependency>
+            </dependencies>
+        </profile>
+    </profiles>
+
+    <pluginRepositories>
+        <pluginRepository>
+            <id>maven2-repository.dev.java.net</id>
+            <name>Java.net Repository for Maven</name>
+            <url>http://download.java.net/maven/2/</url>
+            <layout>default</layout>
+        </pluginRepository>
+        <pluginRepository>
+            <id>maven2-glassfish-repository.dev.java.net</id>
+            <name>Java.net Repository for Maven</name>
+            <url>http://download.java.net/maven/glassfish/</url>
+        </pluginRepository>
+    </pluginRepositories>
 
     <dependencies>
       <dependency>
-        <groupId>org.apache.hms</groupId>
-        <artifactId>common</artifactId>
+        <groupId>com.google.inject</groupId>
+        <artifactId>guice</artifactId>
+        <version>3.0</version>
+      </dependency>
+      <dependency>
+        <groupId>com.google.inject.extensions</groupId>
+        <artifactId>guice-assistedinject</artifactId>
+        <version>3.0</version>
+      </dependency>
+      <dependency>
+        <groupId>org.mockito</groupId>
+        <artifactId>mockito-core</artifactId>
+        <version>1.8.5</version>
+        <scope>test</scope>
+      </dependency>
+      <dependency>
+        <groupId>org.apache.ambari</groupId>
+        <artifactId>ambari-client</artifactId>
         <version>0.1.0-SNAPSHOT</version>
       </dependency>
       <dependency>
-        <groupId>commons-configuration</groupId>
-        <artifactId>commons-configuration</artifactId>
-        <version>1.6</version>
+        <groupId>org.apache.zookeeper</groupId>
+        <artifactId>zookeeper</artifactId>
+        <version>3.3.2</version>
+        <exclusions>
+          <exclusion>
+            <groupId>log4j</groupId>
+            <artifactId>log4j</artifactId>
+          </exclusion>
+        </exclusions>
+      </dependency>
+      <dependency>
+        <groupId>org.mortbay.jetty</groupId>
+        <artifactId>jetty</artifactId>
+        <version>6.1.26</version>
+      </dependency>
+      <dependency>
+        <groupId>commons-logging</groupId>
+        <artifactId>commons-logging</artifactId>
+        <version>1.1.1</version>
+      </dependency>
+      <dependency>
+        <groupId>commons-codec</groupId>
+        <artifactId>commons-codec</artifactId>
+        <version>1.3</version>
+        <scope>compile</scope>
+      </dependency>
+      <dependency>
+        <groupId>commons-lang</groupId>
+        <artifactId>commons-lang</artifactId>
+        <version>2.4</version>
+      </dependency>
+      <dependency>
+        <groupId>commons-httpclient</groupId>
+        <artifactId>commons-httpclient</artifactId>
+        <version>3.0.1</version>
+      </dependency>
+      <dependency>
+        <groupId>javax.servlet</groupId>
+        <artifactId>servlet-api</artifactId>
+        <version>2.5</version>
+        <scope>provided</scope>
+      </dependency>
+      <dependency>
+        <groupId>com.sun.jersey</groupId>
+        <artifactId>jersey-json</artifactId>
+        <version>1.8</version>
+      </dependency>
+      <dependency>
+        <groupId>com.sun.jersey</groupId>
+        <artifactId>jersey-server</artifactId>
+        <version>1.8</version>
+      </dependency>
+      <dependency>
+        <groupId>com.sun.jersey</groupId>
+        <artifactId>jersey-client</artifactId>
+        <version>1.8</version>
+      </dependency>
+      <dependency>
+        <groupId>com.sun.jersey.contribs</groupId>
+        <artifactId>jersey-multipart</artifactId>
+        <version>1.8</version>
+      </dependency>
+      <dependency>
+        <groupId>com.sun.jersey.jersey-test-framework</groupId>
+        <artifactId>jersey-test-framework-grizzly2</artifactId>
+        <version>1.8</version>
+        <scope>test</scope>
+      </dependency>
+      <!-- for external testing -->
+      <dependency>
+        <groupId>com.sun.jersey.jersey-test-framework</groupId>
+        <artifactId>jersey-test-framework-external</artifactId>
+        <version>1.8</version>
+        <scope>test</scope>
       </dependency>
     </dependencies>
+
+  <distributionManagement>
+    <site>
+      <id>apache-website</id>
+      <name>Apache website</name>
+      <url>scpexe://people.apache.org/www/incubator.apache.org/ambari/ambari-controller</url>
+    </site>
+  </distributionManagement>
+
 </project>
diff --git a/common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java b/controller/src/main/java/org/apache/ambari/components/ComponentModule.java
old mode 100755
new mode 100644
similarity index 70%
copy from common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java
copy to controller/src/main/java/org/apache/ambari/components/ComponentModule.java
index 5f23e2b..17fe6fc
--- a/common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java
+++ b/controller/src/main/java/org/apache/ambari/components/ComponentModule.java
@@ -1,4 +1,4 @@
-/*
+/**
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
  * distributed with this work for additional information
@@ -15,18 +15,17 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
+package org.apache.ambari.components;
 
-package org.apache.hms.common.util;
+import org.apache.ambari.components.impl.XmlComponentPluginFactoryImpl;
 
-import java.io.PrintWriter;
-import java.io.StringWriter;
+import com.google.inject.AbstractModule;
 
-public class ExceptionUtil {
-  public static String getStackTrace(Throwable t) {
-    StringWriter sw = new StringWriter();
-    PrintWriter pw = new PrintWriter(sw);
-    t.printStackTrace(pw);
-    pw.flush();
-    return sw.toString();
+public class ComponentModule extends AbstractModule {
+
+  @Override
+  protected void configure() {
+    bind(ComponentPluginFactory.class).to(XmlComponentPluginFactoryImpl.class);
   }
+
 }
diff --git a/controller/src/main/java/org/apache/ambari/components/ComponentPlugin.java b/controller/src/main/java/org/apache/ambari/components/ComponentPlugin.java
new file mode 100644
index 0000000..31bf4a6
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/components/ComponentPlugin.java
@@ -0,0 +1,94 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.components;
+
+import java.io.IOException;
+
+import org.apache.ambari.common.rest.agent.Action;
+
+/**
+ * An interface for pluggable component definitions.
+ */
+public abstract class ComponentPlugin {
+  
+  /**
+   * Get the name of the component.
+   * @return the name
+   */
+  public abstract String getName();
+  
+  /**
+   * Get the active roles (i.e. with servers) for this component.
+   * @return the list of roles in the order that they should be started
+   * @throws IOException
+   */
+  public abstract String[] getActiveRoles() throws IOException;
+  
+  /**
+   * Get the components that this one depends on.
+   * @return the list of components that must be installed for this one
+   * @throws IOException
+   */
+  public abstract String[] getRequiredComponents() throws IOException;
+  
+  
+  /**
+   * Get the commands to start a role's server.
+   * @param cluster the cluster that is being installed
+   * @param role the role that needs to start running its server
+   * @return the commands to execute
+   * @throws IOException
+   */
+  public abstract Action startServer(String cluster,
+                                     String role
+                                     ) throws IOException;
+  
+  /**
+   * Get the role that should run the check service command.
+   * @return the role name
+   * @throws IOException
+   */
+  public abstract String runCheckRole() throws IOException;
+
+  /**
+   * Get the commands to check whether the service is up
+   * @param cluster the name of the cluster
+   * @param role the role that is being checked
+   * @return the commands to run on the agent
+   * @throws IOException
+   */
+  public abstract Action checkService(String cluster, 
+                                      String role) throws IOException;
+
+  /**
+   * Get the role that should run the initialization command.
+   * @return the role name
+   * @throws IOException
+   */
+  public abstract String runPreStartRole() throws IOException;
+  
+  /**
+   * Get the commands to run to preinstall a component.
+   * For example, MapReduce needs certain directories to exist
+   * in HDFS before the JobTracker can be started.
+   * @param cluster the cluster that is being installed
+   * @param role the role that will run the action
+   */
+  public abstract Action preStartAction(String cluster, 
+                              String role) throws IOException;
+}
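
The contract above implies a fixed lifecycle order: required components first, then the optional pre-start action on its designated role, then the servers role by role (getActiveRoles returns them in start order), and finally the service check. A hedged sketch of the driver loop a controller might run; dispatch(...) is a hypothetical stand-in for routing an Action to an agent:

    // Sketch only: the real controller routes Actions through agent heartbeats.
    void startComponent(ComponentPlugin plugin, String cluster) throws IOException {
      String preStartRole = plugin.runPreStartRole();
      if (preStartRole != null) {
        dispatch(preStartRole, plugin.preStartAction(cluster, preStartRole));
      }
      for (String role : plugin.getActiveRoles()) {   // roles come back in start order
        dispatch(role, plugin.startServer(cluster, role));
      }
      String checkRole = plugin.runCheckRole();
      if (checkRole != null) {
        dispatch(checkRole, plugin.checkService(cluster, checkRole));
      }
    }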
diff --git a/common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java b/controller/src/main/java/org/apache/ambari/components/ComponentPluginFactory.java
old mode 100755
new mode 100644
similarity index 70%
copy from common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java
copy to controller/src/main/java/org/apache/ambari/components/ComponentPluginFactory.java
index 5f23e2b..ed504e1
--- a/common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java
+++ b/controller/src/main/java/org/apache/ambari/components/ComponentPluginFactory.java
@@ -1,4 +1,4 @@
-/*
+/**
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
  * distributed with this work for additional information
@@ -15,18 +15,14 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
+package org.apache.ambari.components;
 
-package org.apache.hms.common.util;
+import java.io.IOException;
 
-import java.io.PrintWriter;
-import java.io.StringWriter;
+import org.apache.ambari.common.rest.entities.ComponentDefinition;
 
-public class ExceptionUtil {
-  public static String getStackTrace(Throwable t) {
-    StringWriter sw = new StringWriter();
-    PrintWriter pw = new PrintWriter(sw);
-    t.printStackTrace(pw);
-    pw.flush();
-    return sw.toString();
-  }
+public abstract class ComponentPluginFactory {
+
+  public abstract ComponentPlugin getPlugin(ComponentDefinition info
+                                            ) throws IOException;
 }
diff --git a/controller/src/main/java/org/apache/ambari/components/impl/XmlComponentDefinition.java b/controller/src/main/java/org/apache/ambari/components/impl/XmlComponentDefinition.java
new file mode 100644
index 0000000..7d03f73
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/components/impl/XmlComponentDefinition.java
@@ -0,0 +1,259 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.components.impl;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.List;
+
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.JAXBException;
+import javax.xml.bind.Unmarshaller;
+import javax.xml.bind.annotation.XmlAccessType;
+import javax.xml.bind.annotation.XmlAccessorType;
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.bind.annotation.XmlType;
+import javax.xml.bind.annotation.XmlValue;
+
+import org.apache.ambari.common.rest.entities.ComponentDefinition;
+import org.apache.ambari.common.rest.agent.Action;
+import org.apache.ambari.common.rest.agent.Command;
+import org.apache.ambari.components.ComponentPlugin;
+
+class XmlComponentDefinition extends ComponentPlugin {
+
+  @XmlAccessorType(XmlAccessType.FIELD)
+  @XmlType(name = "component", propOrder = {
+      "requires",
+      "roles",
+      "prestart",
+      "start",
+      "check"
+  })
+  @XmlRootElement
+  public static class Component {
+    @XmlAttribute String name;
+    @XmlElement List<Requires> requires;
+    @XmlElement List<Role> roles;
+    @XmlElement Start start;
+    @XmlElement Check check;
+    @XmlElement Prestart prestart;
+  }
+  
+  @XmlAccessorType(XmlAccessType.FIELD)
+  @XmlType(name = "role")
+  public static class Role {
+    @XmlAttribute String name;
+  }
+
+  @XmlAccessorType(XmlAccessType.FIELD)
+  @XmlType(name = "requires")
+  public static class Requires {
+    @XmlAttribute String name;
+  }
+
+  public static class ScriptCommand {
+    @XmlValue String script;
+  }
+
+  @XmlAccessorType(XmlAccessType.FIELD)
+  @XmlType(name = "start")
+  public static class Start extends ScriptCommand {
+  }
+
+  @XmlAccessorType(XmlAccessType.FIELD)
+  @XmlType(name = "check")
+  public static class Check extends ScriptCommand {
+    @XmlAttribute String runOn;
+  }
+  
+  @XmlAccessorType(XmlAccessType.FIELD)
+  @XmlType(name = "prestart")
+  public static class Prestart extends ScriptCommand {
+    @XmlAttribute String runOn;
+  }
+  
+  private static final JAXBContext jaxbContext;
+  static {
+    try {
+      jaxbContext = JAXBContext.newInstance("org.apache.ambari.components.impl");
+    } catch (JAXBException e) {
+      throw new RuntimeException("Can't create JAXB context", e);
+    }
+  }
+
+  private final String name;
+  private final String[] roles;
+  private final String[] dependencies;
+  private final String startCommand;
+  private final String startUser = "agent";
+  private final String checkRole;
+  private final String prestartRole;
+  private final String prestartCommand;
+  private final String prestartUser = "agent";
+  private final String checkCommand;
+  private final String checkUser = "agent";
+  
+  @Override
+  public String getName() {
+    return name;
+  }
+
+  @Override
+  public String[] getActiveRoles() throws IOException {
+    return roles;
+  }
+
+  @Override
+  public String[] getRequiredComponents() throws IOException {
+    return dependencies;
+  }
+
+  @Override
+  public Action startServer(String cluster, String role) throws IOException {
+    if (startCommand == null) {
+      return null;
+    }
+    Action result = new Action();
+    result.kind = Action.Kind.START_ACTION;
+    result.setUser(startUser);
+    result.command = new Command(startUser, startCommand, 
+                                 new String[]{cluster, role});
+    return result;
+  }
+
+  @Override
+  public String runCheckRole() throws IOException {
+    return checkRole;
+  }
+
+  @Override
+  public Action checkService(String cluster, String role) throws IOException {
+    if (checkCommand == null) {
+      return null;
+    }
+    Action result = new Action();
+    result.kind = Action.Kind.RUN_ACTION;
+    result.setUser(checkUser);
+    result.command = new Command(checkUser, checkCommand, 
+                                 new String[]{cluster, role});
+    return result;
+  }
+  
+  @Override
+  public String runPreStartRole() throws IOException {
+    return prestartRole;
+  }
+
+  @Override
+  public Action preStartAction(String cluster, String role) throws IOException {
+    if (prestartCommand == null) {
+      return null;
+    }
+    Action result = new Action();
+    result.kind = Action.Kind.RUN_ACTION;
+    result.setUser(prestartUser);
+    result.command = new Command(prestartUser, prestartCommand, 
+                                 new String[]{cluster, role});
+    return result; 
+  }
+
+  private static String getCommand(ScriptCommand cmd) {
+    if (cmd == null) {
+      return null;
+    } else {
+      return cmd.script;
+    }
+  }
+
+  XmlComponentDefinition(InputStream in) throws IOException {
+    Unmarshaller um;
+    try {
+      um = jaxbContext.createUnmarshaller();
+      Component component = (Component) um.unmarshal(in);
+      name = component.name;
+      int i = 0;
+      if (component.requires == null) {
+        dependencies = new String[0];
+      } else {
+        dependencies = new String[component.requires.size()];
+        for(Requires r: component.requires) {
+          dependencies[i++] = r.name;
+        }
+      }
+      i = 0;
+      if (component.roles == null) {
+        roles = new String[0];
+      } else {
+        roles = new String[component.roles.size()];
+        for(Role r: component.roles) {
+          roles[i++] = r.name;
+        }      
+      }
+      startCommand = getCommand(component.start);
+      checkCommand = getCommand(component.check);
+      prestartCommand = getCommand(component.prestart);
+      if (component.check != null) {
+        checkRole = component.check.runOn;
+      } else {
+        checkRole = null;
+      }
+      if (component.prestart != null) {
+        prestartRole = component.prestart.runOn;
+      } else {
+        prestartRole = null;
+      }
+    } catch (JAXBException e) {
+      throw new IOException("Problem parsing component definition", e);
+    }
+  }
+
+  private static InputStream getInputStream(ComponentDefinition defn) {
+    String name = defn.getProvider().replace('.', '/') + "/acd/" +
+                  defn.getName() + '-' +
+                  defn.getVersion() + ".acd";
+    InputStream result = ClassLoader.getSystemResourceAsStream(name);
+    if (result == null) {
+      throw new IllegalArgumentException("Can't find resource for " + defn);
+    }
+    return result;
+  }
+
+  /**
+   * Get a component definition based on the name.
+   * @param defn the name of the definition
+   * @throws IOException
+   */
+  XmlComponentDefinition(ComponentDefinition defn) throws IOException {
+    this(getInputStream(defn));
+  }
+  
+  public static void main(String[] args) throws Exception {
+    ComponentDefinition defn = new ComponentDefinition();
+    defn.setName("hadoop-hdfs");
+    defn.setProvider("org.apache.ambari");
+    defn.setVersion("0.1.0");
+    XmlComponentDefinition comp = new XmlComponentDefinition(defn);
+    System.out.println(comp.name);
+    defn.setName("hadoop-common");
+    comp = new XmlComponentDefinition(defn);
+    System.out.println(comp.name);
+  }
+}
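
Reading the JAXB annotations backwards gives the shape of an .acd document this class unmarshals; per getInputStream, it is resolved on the classpath as <provider-as-path>/acd/<name>-<version>.acd (e.g. org/apache/ambari/acd/hadoop-hdfs-0.1.0.acd). A sketch with invented script values, element order following propOrder:

    <component name="hadoop-hdfs">
      <requires name="hadoop-common"/>
      <roles name="namenode"/>
      <roles name="datanode"/>
      <prestart runOn="namenode">hdfs-prestart</prestart>
      <start>hdfs-start</start>
      <check runOn="namenode">hdfs-check</check>
    </component>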
diff --git a/controller/src/main/java/org/apache/ambari/components/impl/XmlComponentPluginFactoryImpl.java b/controller/src/main/java/org/apache/ambari/components/impl/XmlComponentPluginFactoryImpl.java
new file mode 100644
index 0000000..8ada8c8
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/components/impl/XmlComponentPluginFactoryImpl.java
@@ -0,0 +1,40 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.components.impl;
+
+import java.io.IOException;
+
+import org.apache.ambari.common.rest.entities.ComponentDefinition;
+import org.apache.ambari.components.ComponentPlugin;
+import org.apache.ambari.components.ComponentPluginFactory;
+
+import com.google.inject.Inject;
+
+public class XmlComponentPluginFactoryImpl extends ComponentPluginFactory {
+
+  @Inject
+  XmlComponentPluginFactoryImpl() {
+    // PASS
+  }
+
+  @Override
+  public ComponentPlugin getPlugin(ComponentDefinition info) throws IOException {
+    return new XmlComponentDefinition(info);
+  }
+
+}
diff --git a/controller/src/main/java/org/apache/ambari/configuration/Configuration.java b/controller/src/main/java/org/apache/ambari/configuration/Configuration.java
new file mode 100644
index 0000000..614b714
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/configuration/Configuration.java
@@ -0,0 +1,94 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.configuration;
+
+import java.io.FileInputStream;
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.Properties;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import com.google.inject.Inject;
+
+/**
+ * Ambari configuration.
+ * Reads properties from ambari.properties
+ */
+public class Configuration {
+  private static final String AMBARI_CONF_VAR = "AMBARI_CONF_DIR";
+  private static final String CONFIG_FILE = "ambari.properties";
+  
+  private static final Log LOG = LogFactory.getLog(Configuration.class);
+  
+  private final URI dataStore;
+  
+  @Inject
+  Configuration() {
+    this(readConfigFile());
+  }
+  
+  protected Configuration(Properties properties) {
+    // get the data store
+    String dataStoreString = properties.getProperty("data.store", 
+                                                    "zk://localhost:2181/");
+    try {
+      dataStore = new URI(dataStoreString);
+    } catch (URISyntaxException e) {
+      throw new IllegalArgumentException("Can't parse data.store: " + 
+                                         dataStoreString, e);
+    }    
+  }
+  
+  /**
+   * Find, read, and parse the configuration file.
+   * @return the properties that were found or empty if no file was found
+   */
+  private static Properties readConfigFile() {
+    Properties properties = new Properties();
+
+    // get the configuration directory and filename
+    String confDir = System.getenv(AMBARI_CONF_VAR);
+    if (confDir == null) {
+      confDir = "/etc/ambari";
+    }
+    String filename = confDir + "/" + CONFIG_FILE;
+    
+    // load the properties
+    try {
+      properties.load(new FileInputStream(filename));
+    } catch (FileNotFoundException fnf) {
+      LOG.info("No configuration file " + filename + " found.", fnf);
+    } catch (IOException ie) {
+      throw new IllegalArgumentException("Can't read configuration file " + 
+                                         filename, ie);
+    }
+    return properties;  
+  }
+
+  /**
+   * Get the URI for the persistent data store.
+   * @return the data store URI
+   */
+  public URI getDataStore() {
+    return dataStore;
+  }
+}
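
Concretely, the only property consulted here is data.store, read from $AMBARI_CONF_DIR/ambari.properties (falling back to /etc/ambari). A minimal sketch of that file; the ZooKeeper host is a placeholder:

    # /etc/ambari/ambari.properties (or $AMBARI_CONF_DIR/ambari.properties)
    # URI of the persistent data store; defaults to zk://localhost:2181/ when unset.
    data.store=zk://zkhost.example.com:2181/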
diff --git a/controller/src/main/java/org/apache/ambari/controller/Cluster.java b/controller/src/main/java/org/apache/ambari/controller/Cluster.java
new file mode 100644
index 0000000..2a50611
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/controller/Cluster.java
@@ -0,0 +1,197 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.controller;
+
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
+import javax.ws.rs.WebApplicationException;
+
+import org.apache.ambari.common.rest.entities.Role;
+import org.apache.ambari.common.rest.entities.Stack;
+import org.apache.ambari.common.rest.entities.ClusterDefinition;
+import org.apache.ambari.common.rest.entities.ClusterState;
+import org.apache.ambari.common.rest.entities.Component;
+import org.apache.ambari.common.rest.entities.Configuration;
+import org.apache.ambari.components.ComponentPlugin;
+import org.apache.ambari.components.ComponentPluginFactory;
+import org.apache.ambari.datastore.DataStoreFactory;
+import org.apache.ambari.datastore.DataStore;
+
+import com.google.inject.assistedinject.Assisted;
+import com.google.inject.assistedinject.AssistedInject;
+
+
+public class Cluster {
+        
+    /*
+     * Data Store 
+     */
+    private final DataStore dataStore;
+   
+    /*
+     * Latest revision of cluster definition
+     */
+    private String clusterName = null;
+    private int latestRevisionNumber = -1;
+    private ClusterDefinition latestDefinition = null;
+    
+    /*
+     * Map of cluster revision to cluster definition
+     */
+    private final Map<Integer, ClusterDefinition> 
+      clusterDefinitionRevisionsList = 
+        new ConcurrentHashMap<Integer, ClusterDefinition>();
+    private final Map<String, ComponentInfo> components =
+        new HashMap<String, ComponentInfo>();
+    private final StackFlattener flattener;
+    private final ComponentPluginFactory componentPluginFactory;
+
+    private static class ComponentInfo {
+      final ComponentPlugin plugin;
+      final Map<String, RoleInfo> roles = new HashMap<String,RoleInfo>();
+      ComponentInfo(ComponentPlugin plugin) {
+        this.plugin = plugin;
+      }
+    }
+    
+    private static class RoleInfo {
+      Configuration conf;
+      RoleInfo(Configuration conf) {
+        this.conf = conf;
+      }
+    }
+
+    @AssistedInject
+    public Cluster (StackFlattener flattener,
+                    DataStoreFactory dataStore,
+                    ComponentPluginFactory plugin,
+                    @Assisted String clusterName) {
+        this.flattener = flattener;
+        this.dataStore = dataStore.getInstance();
+        this.componentPluginFactory = plugin;
+        this.clusterName = clusterName;
+    }
+    
+    @AssistedInject
+    public Cluster (StackFlattener flattener,
+                    DataStoreFactory dataStore,
+                    ComponentPluginFactory plugin,
+                    @Assisted ClusterDefinition c, 
+                    @Assisted ClusterState cs) throws Exception {
+        this(flattener, dataStore, plugin, c.getName());
+        this.updateClusterDefinition(c);
+        this.updateClusterState(cs);
+    }
+    
+    public synchronized void init () throws Exception {
+        this.latestRevisionNumber = dataStore.retrieveLatestClusterRevisionNumber(clusterName);
+        this.latestDefinition = dataStore.retrieveClusterDefinition(clusterName, this.latestRevisionNumber);
+        getComponents(this.latestDefinition);  
+        this.clusterDefinitionRevisionsList.put(this.latestRevisionNumber, this.latestDefinition);
+    }
+    
+    /**
+     * @param revision the revision to fetch; a negative value selects the latest
+     * @return the cluster definition
+     */
+    public synchronized ClusterDefinition getClusterDefinition(int revision) throws IOException {
+        if (revision < 0) {
+            return this.latestDefinition;
+        }
+        ClusterDefinition cdef = this.clusterDefinitionRevisionsList.get(revision);
+        if (cdef == null) {
+            cdef = dataStore.retrieveClusterDefinition(clusterName, revision);
+            this.clusterDefinitionRevisionsList.put(revision, cdef);
+        }
+        return cdef;
+    }
+    
+    /**
+     * @return the latestRevision
+     */
+    public int getLatestRevisionNumber() {
+        return latestRevisionNumber;
+    }
+    
+    /**
+     * Store a new revision of the cluster definition.
+     */
+    public synchronized void updateClusterDefinition(ClusterDefinition c) throws Exception {
+      this.latestRevisionNumber = dataStore.storeClusterDefinition(c);
+      this.clusterDefinitionRevisionsList.put(this.latestRevisionNumber, c);
+      this.latestDefinition = c;
+
+      // find the plugins for the current definition of the cluster
+      getComponents(c);
+    }
+    
+    private void getComponents(ClusterDefinition cluster
+        ) throws NumberFormatException, WebApplicationException, IOException {
+      Stack flattened = flattener.flattenStack(cluster.getStackName(), 
+          Integer.parseInt(cluster.getStackRevision()));
+      for (Component component: flattened.getComponents()) {
+        ComponentPlugin plugin = 
+            componentPluginFactory.getPlugin(component.getDefinition());
+        ComponentInfo info = new ComponentInfo(plugin);
+        components.put(component.getName(), info);
+        for(Role role: component.getRoles()) {
+          info.roles.put(role.getName(), new RoleInfo(role.getConfiguration()));
+        }
+      }
+    }
+
+    /**
+     * @return the clusterState
+     */
+    public ClusterState getClusterState() throws IOException {
+        return dataStore.retrieveClusterState(this.clusterName);
+    }
+    
+    /**
+     * @param clusterState the clusterState to set
+     */
+    public void updateClusterState(ClusterState clusterState) throws IOException {
+        dataStore.storeClusterState(this.clusterName, clusterState);
+    }
+    
+    public String getName() {
+        return this.latestDefinition.getName();
+    }
+
+    public synchronized Iterable<String> getComponents() {
+      return components.keySet();
+    }
+    
+    public synchronized 
+    ComponentPlugin getComponentDefinition(String component) {
+      return components.get(component).plugin;
+    }
+    
+    public synchronized
+    Configuration getConfiguration(String component, String role) {
+      return components.get(component).roles.get(role).conf;
+    }
+}
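
One convention in Cluster is worth calling out: a negative revision selects the latest definition, while non-negative revisions are served from the in-memory map or fetched from the data store on a miss. A usage sketch, assuming a cluster instance obtained from ClusterFactory and already init()-ed:

    ClusterDefinition latest = cluster.getClusterDefinition(-1); // latest revision
    ClusterDefinition rev2 = cluster.getClusterDefinition(2);    // cached, or read from the data store
    int head = cluster.getLatestRevisionNumber();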
diff --git a/common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java b/controller/src/main/java/org/apache/ambari/controller/ClusterFactory.java
old mode 100755
new mode 100644
similarity index 70%
copy from common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java
copy to controller/src/main/java/org/apache/ambari/controller/ClusterFactory.java
index 5f23e2b..30b1a7d
--- a/common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java
+++ b/controller/src/main/java/org/apache/ambari/controller/ClusterFactory.java
@@ -15,18 +15,12 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
+package org.apache.ambari.controller;
 
-package org.apache.hms.common.util;
+import org.apache.ambari.common.rest.entities.ClusterDefinition;
+import org.apache.ambari.common.rest.entities.ClusterState;
 
-import java.io.PrintWriter;
-import java.io.StringWriter;
-
-public class ExceptionUtil {
-  public static String getStackTrace(Throwable t) {
-    StringWriter sw = new StringWriter();
-    PrintWriter pw = new PrintWriter(sw);
-    t.printStackTrace(pw);
-    pw.flush();
-    return sw.toString();
-  }
+public interface ClusterFactory {
+  public Cluster create(String name);
+  public Cluster create(ClusterDefinition definition, ClusterState state);
 }
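
ClusterFactory pairs with the @AssistedInject constructors on Cluster; with Guice's assisted-inject extension the interface is never implemented by hand. A hedged sketch of the binding one would expect in a module (FactoryModuleBuilder is the standard guice-assistedinject API; whether Ambari's controller module does exactly this is an assumption):

    import com.google.inject.AbstractModule;
    import com.google.inject.assistedinject.FactoryModuleBuilder;

    public class ControllerModuleSketch extends AbstractModule {
      @Override
      protected void configure() {
        // Guice generates a ClusterFactory whose create(...) methods call
        // the matching @AssistedInject constructors on Cluster.
        install(new FactoryModuleBuilder().build(ClusterFactory.class));
      }
    }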
diff --git a/controller/src/main/java/org/apache/ambari/controller/Clusters.java b/controller/src/main/java/org/apache/ambari/controller/Clusters.java
new file mode 100644
index 0000000..9c00d65
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/controller/Clusters.java
@@ -0,0 +1,1193 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.controller;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Date;
+import java.util.GregorianCalendar;
+import java.util.HashMap;
+import java.util.Hashtable;
+import java.util.List;
+import java.util.Map;
+import java.util.StringTokenizer;
+import java.util.concurrent.ConcurrentHashMap;
+
+import javax.ws.rs.WebApplicationException;
+import javax.ws.rs.core.Response;
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.Marshaller;
+import javax.xml.datatype.DatatypeFactory;
+import javax.xml.datatype.XMLGregorianCalendar;
+
+import org.apache.ambari.common.rest.entities.ClusterDefinition;
+import org.apache.ambari.common.rest.entities.ClusterInformation;
+import org.apache.ambari.common.rest.entities.ClusterState;
+import org.apache.ambari.common.rest.entities.Component;
+import org.apache.ambari.common.rest.entities.ConfigurationCategory;
+import org.apache.ambari.common.rest.entities.KeyValuePair;
+import org.apache.ambari.common.rest.entities.Node;
+import org.apache.ambari.common.rest.entities.NodeRole;
+import org.apache.ambari.common.rest.entities.Property;
+import org.apache.ambari.common.rest.entities.Role;
+import org.apache.ambari.common.rest.entities.RoleToNodes;
+import org.apache.ambari.common.rest.entities.Stack;
+import org.apache.ambari.common.rest.entities.UserGroup;
+import org.apache.ambari.datastore.DataStoreFactory;
+import org.apache.ambari.datastore.DataStore;
+import org.apache.ambari.resource.statemachine.ClusterFSM;
+import org.apache.ambari.resource.statemachine.FSMDriverInterface;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import com.google.inject.Inject;
+import com.google.inject.Singleton;
+
+@Singleton
+public class Clusters {
+    // TODO: replace System.out.println with LOG
+    private static final Log LOG = LogFactory.getLog(Clusters.class);
+    
+    /*
+     * Operational clusters include both active and inactive clusters
+     */
+    protected ConcurrentHashMap<String, Cluster> operational_clusters = new ConcurrentHashMap<String, Cluster>();
+    private final DataStore dataStore;
+    
+    private final Stacks stacks;
+    private final Nodes nodes;
+    private final ClusterFactory clusterFactory;
+    private final StackFlattener flattener;
+    private final FSMDriverInterface fsmDriver;
+        
+    @Inject
+    private Clusters(Stacks stacks, Nodes nodes, 
+                     DataStoreFactory dataStore,
+                     ClusterFactory clusterFactory,
+                     StackFlattener flattener,
+                     FSMDriverInterface fsmDriver) throws Exception {
+      this.stacks = stacks;
+      this.nodes = nodes;
+      this.dataStore = dataStore.getInstance();
+      this.clusterFactory = clusterFactory;
+      this.fsmDriver = fsmDriver;
+      this.flattener = flattener;
+    }
+    
+    /*
+     * Wrapper method over datastore API
+     */
+    public boolean clusterExists(String clusterName) throws IOException {
+        return this.operational_clusters.containsKey(clusterName) ||
+               dataStore.clusterExists(clusterName);
+    }
+    
+    /* 
+     * Get the cluster by name
+     * Wrapper over datastore API
+     */
+    public synchronized Cluster getClusterByName(String clusterName) throws Exception {
+        if (clusterExists(clusterName)) {
+            if (!this.operational_clusters.containsKey(clusterName)) {
+                Cluster cls = clusterFactory.create(clusterName);
+                cls.init();
+                this.operational_clusters.put(clusterName, cls);
+            }
+            return this.operational_clusters.get(clusterName);
+        } else {
+            return null;
+        }
+    }
+    
+    /*
+     * Purge the cluster entry from memory and the data store
+     */
+    public synchronized void purgeClusterEntry (String clusterName) throws IOException {
+        dataStore.deleteCluster(clusterName);
+        this.operational_clusters.remove(clusterName);
+    }
+    
+    /*
+     * Add Cluster Entry into data store and memory cache
+     */
+    public synchronized Cluster addClusterEntry (ClusterDefinition cdef, 
+                                                 ClusterState cs) throws Exception {
+        Cluster cls = clusterFactory.create(cdef, cs);
+        this.operational_clusters.put(cdef.getName(), cls);
+        return cls;
+    }
+    
+    /*
+     * Rename the cluster
+     */
+    public synchronized void renameCluster(String clusterName, String new_name) throws Exception {
+        if (!clusterExists(clusterName)) {
+            String msg = "Cluster ["+clusterName+"] does not exist";
+            throw new WebApplicationException((new ExceptionResponse(msg, Response.Status.NOT_FOUND)).get());
+        }
+        
+        if (new_name == null || new_name.equals("")) {
+            String msg = "New name of the cluster should be specified as query parameter, (?new_name=xxxx)";
+            throw new WebApplicationException((new ExceptionResponse(msg, Response.Status.BAD_REQUEST)).get());
+        }
+        
+        /*
+         * Check if the cluster state is ATTIC. If yes, update the name but
+         * don't make a new revision of the cluster definition, as it is in the ATTIC state.
+         */
+        if (!getClusterByName(clusterName).getClusterState().getState().equals(ClusterState.CLUSTER_STATE_ATTIC)) {
+            String msg = "Cluster state is not ATTIC. Cluster is only allowed to be renamed in ATTIC state";
+            throw new WebApplicationException((new ExceptionResponse(msg, Response.Status.NOT_ACCEPTABLE)).get());
+        }
+        
+        Cluster x = this.getClusterByName(clusterName);
+        ClusterDefinition cdef = x.getClusterDefinition(-1);
+        cdef.setName(new_name);
+        ClusterState cs = x.getClusterState();
+        this.addClusterEntry(cdef, cs);
+        this.purgeClusterEntry(clusterName);
+    }
+    
+    /*
+     * Delete Cluster 
+     * The delete operation marks the cluster to be deleted and then sets the goal state to ATTIC.
+     * Once the cluster reaches the ATTIC state, a background daemon should purge the cluster entry.
+     */
+    public synchronized void deleteCluster(String clusterName) throws Exception { 
+    
+        if (!this.clusterExists(clusterName)) {
+            System.out.println("Cluster ["+clusterName+"] does not exist!");
+            return;
+        }
+        
+        /*
+         * Update the cluster definition with goal state to be ATTIC
+         */
+        Cluster cls = this.getClusterByName(clusterName);   
+        ClusterDefinition cdf = new ClusterDefinition();
+        cdf.setName(clusterName);
+        cdf.setGoalState(ClusterState.CLUSTER_STATE_ATTIC);
+        cls.updateClusterDefinition(cdf);
+        
+        /* 
+         * Update cluster state, mark it "to be deleted"
+         */
+        ClusterState cs = cls.getClusterState();
+        cs.setMarkForDeletionWhenInAttic(true); 
+        cls.updateClusterState(cs);
+    }
+
+    /* 
+     * Create/Update cluster definition 
+     * TODO: As the nodes or the role-to-node association change, validate that key service nodes are not removed
+    */
+    public synchronized ClusterDefinition updateCluster(String clusterName, ClusterDefinition c, boolean dry_run) throws Exception {       
+        /*
+         * Add new cluster if cluster does not exist
+         */
+        if (!clusterExists(clusterName)) {
+            return addCluster(clusterName, c, dry_run);
+        }
+        
+        /*
+         * For the time being we keep the entire updated copy as a new revision.
+         * TODO: Check whether anything has really changed.
+         */
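+        // Note: a revision argument of -1 always selects the latest cluster definition.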
+        Cluster cls = getClusterByName(clusterName);
+        ClusterDefinition newcd = new ClusterDefinition ();
+        newcd.setName(clusterName);
+        boolean clsDefChanged = false;
+        boolean configChanged = false;
+        if (c.getStackName() != null && !c.getStackName().equals(cls.getClusterDefinition(-1).getStackName())) {
+            newcd.setStackName(c.getStackName());
+            clsDefChanged = true;
+            configChanged = true;
+        } else {
+            newcd.setStackName(cls.getClusterDefinition(-1).getStackName());
+        }
+        if (c.getStackRevision() != null && !c.getStackRevision().equals(cls.getClusterDefinition(-1).getStackRevision())) {
+            newcd.setStackRevision(c.getStackRevision());
+            clsDefChanged = true;
+            configChanged = true;
+        } else {
+            newcd.setStackRevision(cls.getClusterDefinition(-1).getStackRevision());
+        }
+        if (c.getDescription() != null && !c.getDescription().equals(cls.getClusterDefinition(-1).getDescription())) {
+            newcd.setDescription(c.getDescription());
+            clsDefChanged = true;
+        } else {
+            newcd.setDescription(cls.getClusterDefinition(-1).getDescription());
+        }
+        if (c.getGoalState() != null && !c.getGoalState().equals(cls.getClusterDefinition(-1).getGoalState())) {
+            newcd.setGoalState(c.getGoalState());
+            clsDefChanged = true;
+        } else {
+            newcd.setGoalState(cls.getClusterDefinition(-1).getGoalState());
+        }
+        if (c.getEnabledServices() != null && !c.getEnabledServices().equals(cls.getClusterDefinition(-1).getEnabledServices())) {
+            newcd.setEnabledServices(c.getEnabledServices());
+            clsDefChanged = true;
+        } else {
+            newcd.setEnabledServices(cls.getClusterDefinition(-1).getEnabledServices());
+        }
+        
+        /*
+         * TODO: What if the controller crashes after updateClusterNodesReservation
+         * but before updating and adding a new revision of the cluster definition?
+         */
+        boolean updateNodesReservation = false;
+        boolean updateNodeToRolesAssociation = false;
+        if (c.getNodes() != null && !c.getNodes().equals(cls.getClusterDefinition(-1).getNodes())) {
+            newcd.setNodes(c.getNodes());
+            updateNodesReservation = true;
+            clsDefChanged = true;
+            
+        } else {
+            newcd.setNodes(cls.getClusterDefinition(-1).getNodes());
+        }
+        if (c.getRoleToNodesMap() != null && !c.getRoleToNodesMap().toString().equals(cls.getClusterDefinition(-1).getRoleToNodesMap().toString())) {
+            newcd.setRoleToNodesMap(c.getRoleToNodesMap());
+            updateNodeToRolesAssociation = true;
+            clsDefChanged = true;
+        } else {
+            newcd.setRoleToNodesMap(cls.getClusterDefinition(-1).getRoleToNodesMap());
+        }
+        
+        /*
+         * If no change in the cluster definition then return
+         */
+        if (!clsDefChanged) {
+            return cls.getClusterDefinition(-1);
+        }
+        
+        /*
+         * If the cluster goal state is ATTIC, no action is needed other than
+         * updating the cluster definition.
+         */
+        if (cls.getClusterState().getState().equals(ClusterState.CLUSTER_STATE_ATTIC) &&
+            newcd.getGoalState().equals(ClusterState.CLUSTER_STATE_ATTIC)) {
+            ClusterState cs = cls.getClusterState();
+            cs.setLastUpdateTime(Util.getXMLGregorianCalendar(new Date()));
+            cls.updateClusterDefinition(newcd);
+            cls.updateClusterState(cs);
+            return cls.getClusterDefinition(-1);
+        }
+        
+        /*
+         * Validate the updated cluster definition
+         */
+        validateClusterDefinition(clusterName, newcd);
+        
+        /*
+         * If dry_run then return the newcd at this point
+         */
+        if (dry_run) {
+            System.out.println ("Dry run for update cluster..");
+            return newcd;
+        }
+        
+        /*
+         *  Update the new cluster definition and state
+         *  Generate the config script for puppet
+         */
+        ClusterState cs = cls.getClusterState();
+        cs.setLastUpdateTime(Util.getXMLGregorianCalendar(new Date()));
+        cls.updateClusterDefinition(newcd);
+        cls.updateClusterState(cs);
+        
+        /*
+         * Create Puppet config
+        if (configChanged || updateNodeToRolesAssociation || updateNodesReservation) {
+            String puppetConfig = this.getPuppetConfigString (newcd);
+            cls.updatePuppetConfiguration(puppetConfig);
+        } */
+        
+        /*
+         * Update the nodes reservation and node to roles association 
+         */
+        if (updateNodesReservation) {
+            updateClusterNodesReservation (cls.getName(), c);   
+        }
+        if (updateNodeToRolesAssociation) {
+            updateNodeToRolesAssociation(newcd.getNodes(), c.getRoleToNodesMap());
+        }
+        
+        /*
+         * Print puppet config
+         */
+        System.out.println(getInstallAndConfigureScript(clusterName, -1));
+        
+        /*
+         * Invoke state machine event
+         */
+        if(c.getGoalState().equals(ClusterState.CLUSTER_STATE_ACTIVE)) {
+          fsmDriver.startCluster(cls.getName());
+        } else if(c.getGoalState().equals(ClusterState.CLUSTER_STATE_INACTIVE)) {
+          fsmDriver.stopCluster(cls.getName());
+        } else if(c.getGoalState().equals(ClusterState.CLUSTER_STATE_ATTIC)) {
+          fsmDriver.stopCluster(cls.getName());
+        }
+    
+        return cls.getClusterDefinition(-1);
+    }
+
+    /* 
+     * Add new Cluster to cluster list  
+     */   
+    private ClusterDefinition addCluster(String clusterName, ClusterDefinition cdef, boolean dry_run) throws Exception {
+        
+        /*
+         * TODO: Validate the cluster definition and set the default
+         * 
+         */
+        validateClusterDefinition(clusterName, cdef);
+        
+        /*
+         * Add the defaults for optional values, if not set
+         */
+        setNewClusterDefaults(cdef);
+        
+        /*
+         * Create new cluster object
+         */
+        Date requestTime = new Date();
+        
+        ClusterState clsState = new ClusterState();
+        clsState.setCreationTime(Util.getXMLGregorianCalendar(requestTime));
+        clsState.setLastUpdateTime(Util.getXMLGregorianCalendar(requestTime));
+        clsState.setDeployTime(Util.getXMLGregorianCalendar((Date)null));          
+        if (cdef.getGoalState().equals(ClusterDefinition.GOAL_STATE_ATTIC)) {
+            clsState.setState(ClusterState.CLUSTER_STATE_ATTIC);
+        } else {
+            clsState.setState(ClusterState.CLUSTER_STATE_INACTIVE);
+        }
+        
+        /*
+         * TODO: Derive the role to nodes map based on nodes attributes
+         * then populate the node to roles association.
+         */
+        if (cdef.getRoleToNodesMap() == null) {
+            List<RoleToNodes> role2NodesList = generateRoleToNodesListBasedOnNodeAttributes (cdef);
+            cdef.setRoleToNodesMap(role2NodesList);
+        }
+        
+        /*
+         * If this is a dry run, return the definition (with the role-to-nodes
+         * map filled in above if it was not specified explicitly).
+         */
+        if (dry_run) {
+            return cdef;
+        }
+        
+        /*
+         * Persist the new cluster and add entry to cache
+         */
+        Cluster cls = this.addClusterEntry(cdef, clsState);
+        
+        /*
+         * Update cluster nodes reservation. 
+         */
+        if (cdef.getNodes() != null 
+            && !cdef.getGoalState().equals(ClusterDefinition.GOAL_STATE_ATTIC)) {
+            updateClusterNodesReservation (cls.getName(), cdef);
+        }
+        
+        /*
+         * Update the Node to Roles association
+         */
+        if (!cdef.getGoalState().equals(ClusterDefinition.GOAL_STATE_ATTIC)) {
+            updateNodeToRolesAssociation(cdef.getNodes(), cdef.getRoleToNodesMap());
+        }
+        
+        /*
+         * Print puppet config
+         */
+        System.out.println(getInstallAndConfigureScript(clusterName, -1));
+        
+        /*
+         * Create the cluster object with state machine & 
+         * activate it if the goal state is ACTIVE
+         * TODO: Make sure createCluster is idempotent (i.e. if object already exists
+         * then return success)
+        */
+        ClusterFSM cs = fsmDriver.createCluster(cls,cls.getLatestRevisionNumber());
+        if(cdef.getGoalState().equals(ClusterDefinition.GOAL_STATE_ACTIVE)) {          
+            cs.activate();
+        }
+        return cdef;
+    }
+
+    /*
+     * Add default values for new cluster definition 
+     */
+    private void setNewClusterDefaults(ClusterDefinition cdef) throws Exception {
+        /* 
+         * Populate the input cluster definition w/ default values
+         */
+        if (cdef.getDescription() == null) {
+            cdef.setDescription("Ambari cluster : "+cdef.getName());
+        }
+        if (cdef.getGoalState() == null) {
+            cdef.setGoalState(ClusterDefinition.GOAL_STATE_INACTIVE);
+        }
+        
+        /*
+         * If it is a new cluster, do not specify the revision; set it to null. A revision
+         * number is obtained after persisting the definition.
+         */
+        cdef.setRevision(null);
+        
+        // TODO: Add the list of active services by querying the plugin component.
+        if (cdef.getEnabledServices() == null) {
+            List<String> services = new ArrayList<String>();
+            services.add("ALL");
+            cdef.setEnabledServices(services);
+        }    
+    }
+    
+    /*
+     * Create RoleToNodes list based on node attributes
+     * TODO: For now just pick some nodes randomly
+     */
+    private List<RoleToNodes> generateRoleToNodesListBasedOnNodeAttributes (ClusterDefinition cdef) {
+        List<RoleToNodes> role2NodesList = new ArrayList<RoleToNodes>();
+        return role2NodesList;
+    }
+    
+    /*
+     * Validate the cluster definition
+     * TODO: Validate each role has enough nodes associated with it. 
+     */
+    private void validateClusterDefinition (String clusterName, ClusterDefinition cdef) throws Exception {
+        /*
+         * Check if name is not empty or null
+         */
+        if (cdef.getName() == null ||  cdef.getName().equals("")) {
+            String msg = "Cluster Name must be specified and must be non-empty string";
+            throw new WebApplicationException((new ExceptionResponse(msg, Response.Status.BAD_REQUEST)).get());
+        }
+        
+        if (!cdef.getName().equals(clusterName)) {
+            String msg = "Cluster Name specified in URL and cluster definition are not same";
+            throw new WebApplicationException((new ExceptionResponse(msg, Response.Status.BAD_REQUEST)).get());
+        }
+        
+        if (cdef.getNodes() == null || cdef.getNodes().equals("")) {
+            String msg = "Cluster node range must be specified and must be non-empty string";
+            throw new WebApplicationException((new ExceptionResponse(msg, Response.Status.BAD_REQUEST)).get());
+        }
+        
+        if (cdef.getStackName() == null || cdef.getStackName().equals("")) {
+            String msg = "Cluster stack must be specified and must be non-empty string";
+            throw new WebApplicationException((new ExceptionResponse(msg, Response.Status.BAD_REQUEST)).get());
+        }
+        
+        if (cdef.getStackRevision() == null || cdef.getStackRevision().equals("")) {
+            String msg = "Cluster stack revision must be specified";
+            throw new WebApplicationException((new ExceptionResponse(msg, Response.Status.BAD_REQUEST)).get());
+        }
+        
+        /*
+         * Check if the cluster stack and its parents exist
+         * getStack would throw exception if it does not find the stack
+         */
+        Stack bp = stacks.getStack(cdef.getStackName(), Integer.parseInt(cdef.getStackRevision()));
+        while (bp.getParentName() != null) {
+            bp = stacks.getStack(bp.getParentName(), bp.getParentRevision());
+        }
+        
+        
+        /*
+         * Check if nodes requested for cluster are not already allocated to other clusters
+         */
+        ConcurrentHashMap<String, Node> all_nodes = nodes.getNodes();
+        List<String> cluster_node_range = new ArrayList<String>();
+        cluster_node_range.addAll(getHostnamesFromRangeExpressions(cdef.getNodes()));
+        List<String> preallocatedhosts = new ArrayList<String>();
+        for (String n : cluster_node_range) {
+            if (all_nodes.containsKey(n) && 
+                    (all_nodes.get(n).getNodeState().getClusterName() != null || 
+                     all_nodes.get(n).getNodeState().getAllocatedToCluster()
+                    )
+                ) {
+                /* 
+                 * The following check covers a very specific case:
+                 * when the controller starts with no persistent data in the data store, it adds
+                 * default clusters, and later the restart-recovery code re-validates the cluster
+                 * definition when it finds nodes already allocated.
+                if (all_nodes.get(n).getNodeState().getClusterName() != null && 
+                    all_nodes.get(n).getNodeState().getClusterName().equals(clusterName)) { 
+                    continue; 
+                } */
+                preallocatedhosts.add(n);
+            }
+        }
+
+        if (!preallocatedhosts.isEmpty()) {
+            String msg = "Some of the nodes specified for the cluster roles are allocated to other cluster: ["+preallocatedhosts+"]";
+            throw new WebApplicationException((new ExceptionResponse(msg, Response.Status.CONFLICT)).get());
+        }
+        
+        
+        /*
+         * Check if all the nodes explicitly specified in the RoleToNodesMap belong to the specified cluster node range
+         */
+        if (cdef.getRoleToNodesMap() != null) {
+            List<String> nodes_specified_using_role_association = new ArrayList<String>();
+            for (RoleToNodes e : cdef.getRoleToNodesMap()) {
+                List<String> hosts = getHostnamesFromRangeExpressions(e.getNodes());
+                nodes_specified_using_role_association.addAll(hosts);
+                // TODO: Remove any duplicate nodes from nodes_specified_using_role_association
+            }
+            
+            nodes_specified_using_role_association.removeAll(cluster_node_range);
+            if (!nodes_specified_using_role_association.isEmpty()) {
+                String msg = "Some nodes explicityly associated with roles using RoleToNodesMap do not belong in the " +
+                             "golbal node range specified for the cluster : ["+nodes_specified_using_role_association+"]";
+                throw new WebApplicationException((new ExceptionResponse(msg, Response.Status.BAD_REQUEST)).get());
+            }
+        }
+        
+
+    }
+    
+    /*
+     * Update the nodes associated with cluster
+     */
+    private synchronized void updateClusterNodesReservation (String clusterName, ClusterDefinition clsDef) throws Exception {
+                
+        ConcurrentHashMap<String, Node> all_nodes = nodes.getNodes();
+        List<String> cluster_node_range = new ArrayList<String>();
+        cluster_node_range.addAll(getHostnamesFromRangeExpressions(clsDef.getNodes()));
+       
+        /*
+         * Reserve the nodes as specified in the node range expressions
+         * -- throw an exception if any nodes are pre-associated with another cluster
+         */    
+        List<String> nodes_currently_allocated_to_cluster = new ArrayList<String>();
+        for (Node n : nodes.getNodes().values()) {
+            if ( n.getNodeState().getClusterName() != null &&
+                 n.getNodeState().getClusterName().equals(clusterName)) {
+                nodes_currently_allocated_to_cluster.add(n.getName());
+            }
+        }
+        
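+        // Set difference: allocate = requested - currently held; deallocate = currently held - requested.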
+        List<String> nodes_to_allocate = new ArrayList<String>(cluster_node_range);
+        nodes_to_allocate.removeAll(nodes_currently_allocated_to_cluster);
+        List<String> nodes_to_deallocate = new ArrayList<String>(nodes_currently_allocated_to_cluster);
+        nodes_to_deallocate.removeAll(cluster_node_range);
+        
+        /*
+         * Check for any nodes that are allocated to another cluster
+         */
+        List<String> preallocatedhosts = new ArrayList<String>();
+        for (String n : nodes_to_allocate) {
+            if (all_nodes.containsKey(n) && 
+                    (all_nodes.get(n).getNodeState().getClusterName() != null || 
+                     all_nodes.get(n).getNodeState().getAllocatedToCluster()
+                    )
+                ) {
+                preallocatedhosts.add(n);
+            }
+        }
+        
+        /* 
+         * Throw an exception if some of the hosts are already allocated to another cluster
+         */
+        if (!preallocatedhosts.isEmpty()) {
+            /*
+             * TODO: Return invalid request code and return list of preallocated nodes as a part of
+             *       response element
+             */
+            String msg = "Some of the nodes specified for the cluster roles are allocated to other cluster: ["+preallocatedhosts+"]";
+            throw new WebApplicationException((new ExceptionResponse(msg, Response.Status.CONFLICT)).get());
+        }
+        
+        /*
+         * Allocate nodes to given cluster
+         */    
+        for (String node_name : nodes_to_allocate) {
+            if (all_nodes.containsKey(node_name)) { 
+                // Set the cluster name in the node 
+                synchronized (all_nodes.get(node_name)) {
+                    all_nodes.get(node_name).reserveNodeForCluster(clusterName, true);
+                }    
+            } else {
+                Date epoch = new Date(0);
+                nodes.checkAndUpdateNode(node_name, epoch);
+                Node node = nodes.getNode(node_name);
+                /*
+                 * TODO: Set agentInstalled = true, unless controller uses SSH to setup the agent
+                 */
+                node.reserveNodeForCluster(clusterName, true);
+            }
+        }
+        
+        /*
+         * Deallocate nodes from the given cluster.
+         * TODO: The node agent would asynchronously clean up the node and report it through the
+         * heartbeat, which would reset the cluster ID associated with the node.
+         */
+        for (String node_name : nodes_to_deallocate) {
+            if (all_nodes.containsKey(node_name)) {
+                synchronized (all_nodes.get(node_name)) {
+                    all_nodes.get(node_name).releaseNodeFromCluster();
+                }
+            }
+        }
+    }
+
+    /**
+     * Update the node-to-roles association.
+     * If a node is not explicitly associated with any role, it is assigned the default role.
+     * 
+     * @param clusterNodes
+     * @param roleToNodesList
+     * @throws Exception
+     */
+    private synchronized void updateNodeToRolesAssociation (String clusterNodes, List<RoleToNodes> roleToNodesList) throws Exception {
+        /*
+         * Associate roles list with node
+         */
+        if (roleToNodesList == null) {
+            return;
+        }
+        
+        /*
+         * Add the list of roles to each node.
+         * If a node is not explicitly associated with any role, assign it the default role.
+         */
+        for (RoleToNodes e : roleToNodesList) {
+            List<String> hosts = getHostnamesFromRangeExpressions(e.getNodes());
+            for (String host : hosts) {
+              NodeRole ns = new NodeRole(e.getRoleName(), NodeRole.NODE_SERVER_STATE_DOWN, Util.getXMLGregorianCalendar(new Date()));
+              nodes.getNodes().get(host).getNodeState().updateRoleState(ns);
+            }
+        }
+        
+        
+        /*
+         * Get the global node list specified for the cluster; any nodes NOT explicitly
+         * specified in the role-to-nodes map are assigned the default role.
+         */
+        List<String> specified_node_range = new ArrayList<String>();
+        specified_node_range.addAll(getHostnamesFromRangeExpressions(clusterNodes));
+        for (String host : specified_node_range) {
+            if (nodes.getNodes().get(host).getNodeState().getNodeRoles() == null) {
+                String cid = nodes.getNodes().get(host).getNodeState().getClusterName();
+                NodeRole ns = new NodeRole(getDefaultRoleName(cid), NodeRole.NODE_SERVER_STATE_DOWN, Util.getXMLGregorianCalendar(new Date()) );
+                nodes.getNodes().get(host).getNodeState().updateRoleState(ns);
+            } 
+        }
+    }
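+
+    /*
+     * Illustrative example (hypothetical names): with clusterNodes
+     * "host1,host2" and a roleToNodesList entry mapping "namenode-role" to
+     * "host1", host1 gets namenode-role while host2, having no explicit role,
+     * is assigned the default role (see getDefaultRoleName below).
+     */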
+
+    /*
+     * Get Cluster stack
+     */
+    public Stack getClusterStack(String clusterName, boolean expanded) throws Exception {
+        if (!this.clusterExists(clusterName)) {
+            String msg = "Cluster ["+clusterName+"] does not exist";
+            throw new WebApplicationException((new ExceptionResponse(msg, Response.Status.NOT_FOUND)).get());
+        }
+        
+        Cluster cls = this.getClusterByName(clusterName);
+        String stackName = cls.getClusterDefinition(-1).getStackName();
+        int stackRevision = Integer.parseInt(cls.getClusterDefinition(-1).getStackRevision());
+        
+        Stack bp;
+        if (!expanded) {
+            bp = stacks.getStack(stackName, stackRevision);
+        } else {
+            bp = this.flattener.flattenStack(stackName, stackRevision);
+        }
+        return bp;
+    }
+     
+    /*
+     * Get the latest cluster definition
+     */
+    public ClusterDefinition getLatestClusterDefinition(String clusterName) throws Exception {
+        return this.getClusterByName(clusterName).getClusterDefinition(-1);
+    }
+    
+    /*
+     * Get Cluster Definition given name and revision
+     */
+    public ClusterDefinition getClusterDefinition(String clusterName, int revision) throws Exception {
+        return this.getClusterByName(clusterName).getClusterDefinition(revision);
+    }
+    
+    /* 
+     * Get the cluster Information by name
+     */
+    public ClusterInformation getClusterInformation (String clusterName) throws Exception  {
+        if (!this.clusterExists(clusterName)) {
+            String msg = "Cluster ["+clusterName+"] does not exist";
+            throw new WebApplicationException((new ExceptionResponse(msg, Response.Status.NOT_FOUND)).get());
+        }
+        ClusterInformation clsInfo = new ClusterInformation();
+        clsInfo.setDefinition(this.getLatestClusterDefinition(clusterName));
+        clsInfo.setState(this.getClusterByName(clusterName).getClusterState());
+        return clsInfo;
+    }
+    
+    
+    /* 
+     * Get the cluster state
+     */
+    public ClusterState getClusterState(String clusterName) throws Exception {
+        if (!this.clusterExists(clusterName)) {
+            String msg = "Cluster ["+clusterName+"] does not exist";
+            throw new WebApplicationException((new ExceptionResponse(msg, Response.Status.NOT_FOUND)).get());
+        }
+        return this.getClusterByName(clusterName).getClusterState();
+    }
+    
+    
+    /*
+     * Get Cluster Information list i.e. cluster definition and cluster state
+     */
+    public List<ClusterInformation> getClusterInformationList(String state) throws Exception {
+      List<ClusterInformation> list = new ArrayList<ClusterInformation>();
+      List<String> clusterNames = dataStore.retrieveClusterList();
+      for (String clsName : clusterNames) {
+        Cluster cls = this.getClusterByName(clsName);
+        if (state.equals("ALL") || cls.getClusterState().getState().equals(state)) {
+          ClusterInformation clsInfo = new ClusterInformation();
+          clsInfo.setDefinition(cls.getClusterDefinition(-1));
+          clsInfo.setState(cls.getClusterState());
+          list.add(clsInfo);
+        }
+      }
+      return list;
+    }
+    
+    /*
+     * Get the list of clusters
+     * TODO: Get the synchronized snapshot of each cluster definition? 
+     */
+    public List<Cluster> getClustersList(String state) throws Exception {
+        List<Cluster> list = new ArrayList<Cluster>();
+        List<String> clusterNames = dataStore.retrieveClusterList();
+        for (String clsName : clusterNames) {
+          Cluster cls = this.getClusterByName(clsName);
+          if (state.equals("ALL") || cls.getClusterState().getState().equals(state)) {
+            list.add(cls);
+          }
+        }
+        return list;
+    }
+    
+    /* 
+     * UTIL methods on entities
+     */
+    
+    /*
+     * Get the list of role names associated with node
+     */
+    public List<String> getAssociatedRoleNames(String hostname) {
+      return nodes.getNodes().get(hostname).getNodeState().getNodeRoleNames(null);
+    }
+    
+    /*
+     *  Return the default role name for a cluster node that has no explicit
+     *  role-to-nodes association in the cluster definition.
+     *  Throws an exception if the specified cluster does not exist.
+     */
+    public String getDefaultRoleName(String clusterName) throws Exception {
+        Cluster c = getClusterByName(clusterName);
+        // TODO: find the default role from the cluster stack 
+        return "slaves-role";
+    }
+    
+  /*
+   * TODO: Implement proper range expression
+   * TODO: Remove any duplicate nodes from the derived list
+   */
+  public List<String> getHostnamesFromRangeExpressions (String nodeRangeExpression) throws Exception {
+      List<String> list = new ArrayList<String>();
+      StringTokenizer st = new StringTokenizer(nodeRangeExpression, ",");
+      while (st.hasMoreTokens()) {
+        list.add(st.nextToken().trim());
+      }
+      return list;
+  }
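+
+  /*
+   * Illustrative only: "hrt1.example.com, hrt2.example.com" currently yields
+   * ["hrt1.example.com", "hrt2.example.com"]; a range syntax such as
+   * "hrt[1-2].example.com" is not expanded yet (see TODO above). A possible
+   * sketch for the duplicate-removal TODO, preserving insertion order:
+   *
+   *   List<String> deduped =
+   *       new ArrayList<String>(new LinkedHashSet<String>(list));
+   */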
+  
+  /*
+   * Restart recovery for clusters
+   */
+  void recoverClustersStateAfterRestart () throws Exception {
+      for (Cluster cls : this.getClustersList("ALL")) {
+          ClusterDefinition cdef = cls.getClusterDefinition(-1);
+          this.validateClusterDefinition (cdef.getName(), cdef);
+          /*
+           * Update cluster nodes reservation. 
+           */
+          if (cdef.getNodes() != null 
+              && !cdef.getGoalState().equals(ClusterDefinition.GOAL_STATE_ATTIC)) {
+              this.updateClusterNodesReservation (cls.getName(), cdef);
+          }
+          
+          /*
+           * Update the Node to Roles association
+           */
+          if (!cdef.getGoalState().equals(ClusterDefinition.GOAL_STATE_ATTIC)) {
+              this.updateNodeToRolesAssociation(cdef.getNodes(), cdef.getRoleToNodesMap());
+          }
+          
+          /*
+           * Update the state machine
+           */
+          ClusterFSM cs = fsmDriver.createCluster(cls,cls.getLatestRevisionNumber());
+          if (cdef.getGoalState().equals(ClusterState.CLUSTER_STATE_ACTIVE)) {
+              fsmDriver.startCluster(cls.getName());
+          } else if(cdef.getGoalState().equals(ClusterState.CLUSTER_STATE_INACTIVE)) {
+              fsmDriver.stopCluster(cls.getName());
+          } else if(cdef.getGoalState().equals(ClusterState.CLUSTER_STATE_ATTIC)) {
+              fsmDriver.stopCluster(cls.getName());
+          }
+      }
+  }
+  
+  
+  private void printStack(Stack stack, String file_path) throws Exception {
+      JAXBContext jc = JAXBContext.newInstance(org.apache.ambari.common.rest.entities.Stack.class);
+      Marshaller m = jc.createMarshaller();
+      m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
+      if (file_path == null) {
+          m.marshal(stack, System.out);
+      } else {
+          m.marshal(stack, new File(file_path));
+      }
+  }
+  
+  /*
+   * Get the puppet deployment script for this cluster name/revision combo
+   */
+  public String getInstallAndConfigureScript(String clusterName, int revision) throws Exception {
+      
+      ClusterDefinition c = getClusterByName (clusterName).getClusterDefinition(revision);
+      Stack stack = this.flattener.flattenStack(c.getStackName(), Integer.parseInt(c.getStackRevision()));
+      //printStack(stack, null);
+      
+      /*
+       * Generate Ambari global variables
+       */
+      String config = getStackGlobalVariablesForPuppet (stack, c);
+      
+      config = config + getStackConfigMapForPuppet (stack);
+      
+      config = config + getRoleToNodesMapForPuppet (c, stack);
+      
+      return config;
+  }
+  
+  private String getStackGlobalVariablesForPuppet (Stack stack, ClusterDefinition c) throws Exception {
+
+      String config = "\n";
+      config = config + "$ambari_cluster_name" + " = " + "\"" + c.getName() + "\"\n";
+      config = config + "\n";
+      /*
+       * TODO: Add all master host names
+       */
+      HashMap<String, String> roles = new HashMap<String, String>();
+      for (RoleToNodes rns : c.getRoleToNodesMap()) {
+          config = config + "$ambari_"+rns.getRoleName()+"_host" + " = " + "\"";
+          roles.put(rns.getRoleName(), null);
+          List<String> host_list = this.getHostnamesFromRangeExpressions(rns.getNodes());
+          if (host_list != null && !host_list.isEmpty() && host_list.get(0) != null) {
+            config = config + host_list.get(0);
+          }  
+          config = config + "\"\n";
+      }
+     
+      /* 
+       * Add non-specified roles for puppet to work correctly
+       */
+      for (Component comp : stack.getComponents()) {
+          for (Role r : comp.getRoles()) {
+              if (!roles.containsKey(r.getName())) {
+                  config = config + "$ambari_"+r.getName()+"_host" + " = " + "\"\"\n";
+              }
+          }
+      }
+      config = config + "\n";
+      
+      /*
+       * Get the default user/group and role specific user/group information
+       * Find unique users and groups for puppet definition
+       */
+      HashMap<String, UserGroup> users = new HashMap<String, UserGroup>();
+      HashMap<String, UserGroup> groups = new HashMap<String, UserGroup>(); 
+      UserGroup dg = stack.getDefault_user_group();
+      users.put(dg.getUser(), dg);
+      groups.put(dg.getGroup(), dg);
+      config = config + "\n$unique_users = { ";
+      config = config + dg.getUser() + " => { \"UID\" => \"" + dg.getUserid() + "\", \"GROUP\" => \"" + dg.getGroup() + "\" }, \n";
+      for (Component comp : stack.getComponents()) {
+          if (comp.getUser_group() != null) {
+              UserGroup ug = comp.getUser_group();
+              if (!users.containsKey(ug.getUser())) {
+                  users.put(ug.getUser(), ug);
+                  config = config + ug.getUser() + " => { \"UID\" => \"" + ug.getUserid() + "\", \"GROUP\" => \"" + ug.getGroup() + "\" }, \n";
+              }
+          }
+      }
+      config = config + "}\n";
+      
+      config = config + "\n$unique_groups = { ";
+      config = config + dg.getGroup() + " => { \"GID\" => \"" + dg.getGroupid() + "\" }, \n";
+      for (Component comp : stack.getComponents()) {
+          if (comp.getUser_group() != null) {
+              UserGroup ug = comp.getUser_group();
+              if (!groups.containsKey(ug.getGroup())) {
+                  groups.put(ug.getGroup(), ug);
+                  config = config + ug.getGroup() + " => { \"GID\" => \"" + ug.getGroupid() + "\" }, \n";
+              }
+          }
+      }
+      config = config + "}\n";
+      
+      config = config + "$ambari_default_user" + " = " + "\"" + stack.getDefault_user_group().getUser()+"\"\n";
+      config = config + "$ambari_default_group" + " = " + "\"" + stack.getDefault_user_group().getGroup()+"\"\n";
+      for (Component comp : stack.getComponents()) {
+          UserGroup ug = null;
+          if (comp.getUser_group() != null) {
+              ug = comp.getUser_group();
+              config = config + "$ambari_"+comp.getName()+"_user" + " = " + "\"" + ug.getUser()+"\"\n";
+              config = config + "$ambari_"+comp.getName()+"_group" + " = " + "\"" + ug.getGroup()+"\"\n";
+          }
+      }
+      config = config + "\n";
+      
+      for (KeyValuePair p : stack.getGlobals()) {
+         config = config + "$"+p.getName() + " = " + "\"" + p.getValue() + "\"\n";
+      }
+      config = config + "\n";
+      return config;
+  }
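+
+  /*
+   * For illustration only (hypothetical stack/cluster values), the globals
+   * emitted above look roughly like:
+   *
+   *   $ambari_cluster_name = "blue"
+   *   $ambari_namenode_host = "hrt1.example.com"
+   *   $unique_users = { hadoop => { "UID" => "2000", "GROUP" => "hadoop" }, 
+   *   }
+   *   $unique_groups = { hadoop => { "GID" => "2000" }, 
+   *   }
+   *   $ambari_default_user = "hadoop"
+   *   $ambari_default_group = "hadoop"
+   */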
+  
+  private String getRoleToNodesMapForPuppet (ClusterDefinition c, Stack stack) throws Exception {
+  
+      HashMap<String, String> roles = new HashMap<String, String>();
+      String config = "\n$role_to_nodes = { ";
+      for (int i=0; i<c.getRoleToNodesMap().size(); i++) {
+          RoleToNodes roleToNodesEntry = c.getRoleToNodesMap().get(i);
+          roles.put(roleToNodesEntry.getRoleName(), null);
+          config = config + roleToNodesEntry.getRoleName()+ " => [";
+          List<String> host_list = this.getHostnamesFromRangeExpressions(roleToNodesEntry.getNodes());
+          for (int j=0; j<host_list.size(); j++) {
+              String host = host_list.get(j);
+              if (j == host_list.size()-1) {
+                  config = config + "\'"+host+"\'";
+              } else {
+                  config = config + "\'"+host+"\',";
+              }
+          }
+          config = config + "], \n";
+      }
+      
+      /* 
+       * Add non-specified roles for puppet to work correctly
+       */
+      for (Component comp : stack.getComponents()) {
+          for (Role r : comp.getRoles()) {
+              if (!roles.containsKey(r.getName())) {
+                  config = config + r.getName() + " => [ ], \n";
+              }
+          }
+      }
+      if (!roles.containsKey("client")) {
+        config = config + "client" + " => [ ], \n";
+      }
+      config = config + "} \n"; 
+      return config;
+  }
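+
+  /*
+   * For illustration only (hypothetical role and host names), the map emitted
+   * above looks roughly like:
+   *
+   *   $role_to_nodes = { namenode => ['hrt1.example.com'], 
+   *   datanode => ['hrt2.example.com','hrt3.example.com'], 
+   *   client => [ ], 
+   *   } 
+   */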
+  
+  private String getStackConfigMapForPuppet (Stack stack) throws Exception {
+      String config = "\n$hadoop_stack_conf = { ";
+      
+      /*
+       * Generate configuration map for client role from top level configuration in the stack
+       */
+      config = config + "client" + " => { "; 
+      if (stack.getConfiguration() != null && stack.getConfiguration().getCategory() != null) {
+          for (int j=0; j<stack.getConfiguration().getCategory().size(); j++) {
+              ConfigurationCategory cat = stack.getConfiguration().getCategory().get(j);
+              config = config+"\""+cat.getName()+"\" => { ";
+              if (cat.getProperty() != null) {
+                   for (int i=0; i<cat.getProperty().size(); i++) {
+                       Property p = cat.getProperty().get(i);
+                       if (i == cat.getProperty().size()-1) {
+                           config = config+ "\"" + p.getName()+"\" => \""+p.getValue()+"\" ";
+                       } else { 
+                           config = config+ "\"" + p.getName()+"\" => \""+p.getValue()+"\", ";
+                       }
+                   }
+               }
+               if (j == stack.getConfiguration().getCategory().size()-1) {
+                   config = config +" } \n";
+               } else {
+                   config = config +" }, \n";
+               }
+          }
+      }
+      config = config + "}, \n";
+     
+      /*
+       * Generate and append configuration map for other roles
+       */
+      if (stack.getComponents() != null) {
+          for (Component comp : stack.getComponents()) {
+              if (comp.getRoles() != null) {
+                  for (int k=0; k<comp.getRoles().size(); k++) {
+                      Role role = comp.getRoles().get(k);
+                      //config = config + comp.getName()+"_"+role.getName()+" => { ";
+                      config = config+role.getName()+" => { ";
+                      if (role.getConfiguration() != null && role.getConfiguration().getCategory() != null) {
+                          for (int j=0; j<role.getConfiguration().getCategory().size(); j++) {
+                              ConfigurationCategory cat = role.getConfiguration().getCategory().get(j);
+                              config = config+"\""+cat.getName()+"\" => { ";
+                              if (cat.getProperty() != null) {
+                                   for (int i=0; i<cat.getProperty().size(); i++) {
+                                       Property p = cat.getProperty().get(i);
+                                       if (i == cat.getProperty().size()-1) {
+                                           config = config+ "\"" + p.getName()+"\" => \""+p.getValue()+"\" ";
+                                       } else { 
+                                           config = config+ "\"" + p.getName()+"\" => \""+p.getValue()+"\", ";
+                                       }
+                                   }
+                               }
+                               if (j == role.getConfiguration().getCategory().size()-1) {
+                                   config = config +" } \n";
+                               } else {
+                                   config = config +" }, \n";
+                               }
+                          }
+                      }
+                      config = config + "}, \n";
+                  } 
+              }
+          }
+      }
+      config = config + "} \n";
+      return config;
+  }
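+
+  /*
+   * For illustration only (hypothetical category/property names), the map
+   * emitted above looks roughly like:
+   *
+   *   $hadoop_stack_conf = { client => { "core-site" => { "fs.default.name" => "hdfs://hrt1:9000" }  }, 
+   *   namenode => { "hdfs-site" => { "dfs.name.dir" => "/data/dfs/name" }  }, 
+   *   } 
+   */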
+
+  /*
+   * Return the list of nodes associated with the cluster, filtered by role
+   * name and alive state. If roleName or alive is not specified (i.e. ""),
+   * that filter is skipped; with both empty, all nodes associated with the
+   * cluster are returned.
+   */
+  public List<Node> getClusterNodes (String clusterName, String roleName, 
+                                     String alive) throws Exception {
+      
+      List<Node> list = new ArrayList<Node>();
+      Map<String,Node> nodeMap = nodes.getNodes();
+      ClusterDefinition c = operational_clusters.get(clusterName).
+          getClusterDefinition(-1);
+      if (c.getNodes() == null || c.getNodes().equals("") || 
+          getClusterByName(clusterName).getClusterState().getState().
+            equalsIgnoreCase("ATTIC")) {
+          String msg = "No nodes are reserved for the cluster. Typically a" +
+            " cluster in the ATTIC state does not have any nodes reserved";
+          throw new WebApplicationException((new ExceptionResponse(msg, 
+              Response.Status.NO_CONTENT)).get());
+      }
+      List<String> hosts = getHostnamesFromRangeExpressions(c.getNodes());
+      for (String host : hosts) {
+          if (!nodeMap.containsKey(host)) {
+              String msg = "Node ["+host+
+                  "] is expected to be registered with the controller but "+
+                  "cannot be located";
+              throw new WebApplicationException((new ExceptionResponse(msg, 
+                  Response.Status.INTERNAL_SERVER_ERROR)).get());
+          }
+          Node n = nodeMap.get(host);
+          if (roleName != null && !roleName.equals("")) {
+              if (n.getNodeState().getNodeRoleNames("") == null) { continue; }
+              if (!n.getNodeState().getNodeRoleNames("").contains(roleName)) { 
+                continue; 
+              }
+          }
+          
+          // Heart beat is set to epoch during node initialization.
+          GregorianCalendar cal = new GregorianCalendar(); 
+          cal.setTime(new Date());
+          XMLGregorianCalendar curTime = 
+              DatatypeFactory.newInstance().newXMLGregorianCalendar(cal);
+          if (alive.equals("") || (alive.equalsIgnoreCase("true") && 
+              Nodes.getTimeDiffInMillis(curTime, n.getNodeState().
+                  getLastHeartbeatTime()) < Nodes.NODE_NOT_RESPONDING_DURATION)
+              || (alive.equals("false") && 
+                  Nodes.getTimeDiffInMillis(curTime, 
+                      n.getNodeState().getLastHeartbeatTime()) >= 
+                      Nodes.NODE_NOT_RESPONDING_DURATION)) {
+              list.add(nodeMap.get(host));
+          }
+      }
+      return list;
+  }
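+
+  /*
+   * Usage example (hypothetical cluster name): getClusterNodes("blue", "", "true")
+   * returns every node reserved for cluster "blue" whose last heartbeat falls
+   * within Nodes.NODE_NOT_RESPONDING_DURATION; passing a role name as well
+   * restricts the list to nodes playing that role.
+   */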
+  
+  /*
+   * Returns the <key="name", value="revision"> hash table for cluster 
+   * referenced stacks. This would include any indirectly referenced parent 
+   * stacks as well.
+   */
+  public Hashtable<String, String> getClusterReferencedStacksList() 
+        throws Exception {
+      Hashtable<String, String> clusterStacks = new Hashtable<String, String>();
+      List<String> clusterNames = dataStore.retrieveClusterList();
+      for (String clsName : clusterNames) {
+          Cluster c = getClusterByName(clsName);
+          String cBPName = c.getClusterDefinition(-1).getStackName();
+          String cBPRevision = c.getClusterDefinition(-1).getStackRevision();
+          clusterStacks.put(cBPName, cBPRevision); 
+          Stack bpx = stacks.getStack(cBPName, Integer.parseInt(cBPRevision));      
+          while (bpx.getParentName() != null) {
+              bpx = stacks.getStack(bpx.getParentName(), 
+                                    bpx.getParentRevision());
+              clusterStacks.put(bpx.getName(), bpx.getRevision());
+          }
+      }
+      return clusterStacks;
+  }
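+
+  /*
+   * Illustrative example (hypothetical stack names): a cluster whose
+   * definition references stack ("site-stack", rev 2), where site-stack's
+   * parent is ("hadoop-stack", rev 1), contributes both entries
+   * {site-stack=2, hadoop-stack=1} to the returned table.
+   */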
+ 
+  /**
+   * Is the given stack used in any cluster?
+   * @param stackName the stack to check on
+   * @return is the stack used
+   * @throws Exception
+   */
+  public boolean isStackUsed(String stackName) throws Exception {
+    return getClusterReferencedStacksList().containsKey(stackName);
+  }
+}
diff --git a/controller/src/main/java/org/apache/ambari/controller/Controller.java b/controller/src/main/java/org/apache/ambari/controller/Controller.java
new file mode 100755
index 0000000..23d6a0f
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/controller/Controller.java
@@ -0,0 +1,133 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.controller;
+
+
+import java.io.IOException;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import org.apache.ambari.common.util.DaemonWatcher;
+import org.apache.ambari.common.util.ExceptionUtil;
+import org.mortbay.jetty.Server;
+
+import org.mortbay.jetty.servlet.Context;
+import org.mortbay.jetty.servlet.DefaultServlet;
+import org.mortbay.jetty.servlet.ServletHolder;
+import org.mortbay.resource.Resource;
+import org.mortbay.resource.ResourceCollection;
+
+import com.google.inject.Guice;
+import com.google.inject.Injector;
+import com.google.inject.Singleton;
+import com.sun.jersey.spi.container.servlet.ServletContainer;
+
+@Singleton
+public class Controller {
+  private static Log LOG = LogFactory.getLog(Controller.class);
+  public static int CONTROLLER_PORT = 4080;
+  private Server server = null;
+  public volatile boolean running = true; // true while controller runs
+  
+  public void run() {
+    server = new Server(CONTROLLER_PORT);
+
+    try {
+      Context root = new Context(server, "/", Context.SESSIONS);
+      String AMBARI_HOME = System.getenv("AMBARI_HOME");
+      root.setBaseResource(new ResourceCollection(new Resource[]
+        {
+          Resource.newResource(AMBARI_HOME+"/webapps/")
+        }));
+      ServletHolder rootServlet = root.addServlet(DefaultServlet.class, "/");
+      rootServlet.setInitOrder(1);
+      
+      ServletHolder sh = new ServletHolder(ServletContainer.class);
+      sh.setInitParameter("com.sun.jersey.config.property.resourceConfigClass", 
+        "com.sun.jersey.api.core.PackagesResourceConfig");
+      sh.setInitParameter("com.sun.jersey.config.property.packages", 
+        "org.apache.ambari.controller.rest.resources");
+      sh.setInitParameter("com.sun.jersey.config.property.WadlGeneratorConfig", 
+        "org.apache.ambari.controller.rest.config.ExtendedWadlGeneratorConfig");
+      root.addServlet(sh, "/rest/*");
+      sh.setInitOrder(2);
+
+      ServletHolder agent = new ServletHolder(ServletContainer.class);
+      agent.setInitParameter("com.sun.jersey.config.property.resourceConfigClass", 
+        "com.sun.jersey.api.core.PackagesResourceConfig");
+      agent.setInitParameter("com.sun.jersey.config.property.packages", 
+        "org.apache.ambari.controller.rest.agent");
+      agent.setInitParameter("com.sun.jersey.config.property.WadlGeneratorConfig", 
+        "org.apache.ambari.controller.rest.config.PrivateWadlGeneratorConfig");
+      root.addServlet(agent, "/agent/*");
+      agent.setInitOrder(3);
+/*    //COMMENTED OUT THE FOLLOWING BLOCK TO WORK AROUND AMBARI-159
+      Constraint constraint = new Constraint();
+      constraint.setName(Constraint.__BASIC_AUTH);;
+      constraint.setRoles(new String[]{"user","admin","moderator"});
+      constraint.setAuthenticate(true);
+       
+      ConstraintMapping cm = new ConstraintMapping();
+      cm.setConstraint(constraint);
+      cm.setPathSpec("/agent/*");
+      
+      SecurityHandler security = new SecurityHandler();
+      security.setUserRealm(new HashUserRealm("Controller",
+          System.getenv("AMBARI_CONF_DIR")+"/auth.conf"));
+      security.setConstraintMappings(new ConstraintMapping[]{cm});
+
+      //root.addHandler(security);  
+*/
+      server.setStopAtShutdown(true);
+      
+      /*
+       * Start the server after controller state is recovered.
+       */
+      server.start();
+    } catch (Exception e) {
+      e.printStackTrace();
+      LOG.error(ExceptionUtil.getStackTrace(e));
+      
+    }
+  }
+  
+  public void stop() throws Exception {
+    try {
+      server.stop();
+    } catch (Exception e) {
+      LOG.error(ExceptionUtil.getStackTrace(e));
+    }
+  }
+
+  public static void main(String[] args) throws IOException {
+    Injector injector = Guice.createInjector(new ControllerModule());
+    DaemonWatcher.createInstance(System.getProperty("PID"), 9100);
+    try {
+      Clusters clusters = injector.getInstance(Clusters.class);
+      clusters.recoverClustersStateAfterRestart();
+      Controller controller = injector.getInstance(Controller.class);
+      if (controller != null) {
+        controller.run();
+      }
+    } catch(Throwable t) {
+      LOG.error("Controller failed to start", t);
+      DaemonWatcher.bailout(1);
+    }
+  }
+}
diff --git a/controller/src/main/java/org/apache/ambari/controller/ControllerModule.java b/controller/src/main/java/org/apache/ambari/controller/ControllerModule.java
new file mode 100644
index 0000000..5bbe0a0
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/controller/ControllerModule.java
@@ -0,0 +1,49 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.controller;
+
+import org.apache.ambari.components.ComponentModule;
+import org.apache.ambari.controller.rest.agent.ControllerResource;
+import org.apache.ambari.controller.rest.resources.ClustersResource;
+import org.apache.ambari.controller.rest.resources.NodesResource;
+import org.apache.ambari.controller.rest.resources.StacksResource;
+import org.apache.ambari.resource.statemachine.ClusterImpl;
+import org.apache.ambari.resource.statemachine.RoleImpl;
+import org.apache.ambari.resource.statemachine.ServiceImpl;
+
+import com.google.inject.AbstractModule;
+import com.google.inject.assistedinject.FactoryModuleBuilder;
+
+public class ControllerModule extends AbstractModule {
+
+  @Override
+  protected void configure() {
+    install(new ComponentModule());
+    requestStaticInjection(ClustersResource.class,
+                           NodesResource.class,
+                           StacksResource.class,
+                           ControllerResource.class,
+                           RoleImpl.class,
+                           ServiceImpl.class,
+                           ClusterImpl.class);
+    install(new FactoryModuleBuilder()
+              .implement(Cluster.class,Cluster.class)
+              .build(ClusterFactory.class));
+  }
+
+}
diff --git a/controller/src/main/java/org/apache/ambari/controller/ExceptionResponse.java b/controller/src/main/java/org/apache/ambari/controller/ExceptionResponse.java
new file mode 100644
index 0000000..a857e22
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/controller/ExceptionResponse.java
@@ -0,0 +1,51 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.controller;
+
+import javax.ws.rs.core.Response;
+import javax.ws.rs.core.Response.ResponseBuilder;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import org.apache.ambari.common.util.ExceptionUtil;
+
+public class ExceptionResponse  {
+    private static Log LOG = LogFactory.getLog(ExceptionResponse.class);
+
+    Response r;
+    
+    public ExceptionResponse (Exception e) {
+        ResponseBuilder builder = Response.status(Response.Status.INTERNAL_SERVER_ERROR);
+        builder.header("ErrorMessage", e.getMessage());
+        builder.header("ErrorCode", Response.Status.INTERNAL_SERVER_ERROR.getStatusCode());
+        r = builder.build();
+        e.printStackTrace();
+        LOG.error(ExceptionUtil.getStackTrace(e));
+    }
+    
+    public ExceptionResponse (String exceptionMessage, Response.Status rs) {
+        ResponseBuilder builder = Response.status(rs);
+        builder.header("ErrorMessage",exceptionMessage);
+        builder.header("ErrorCode", rs.getStatusCode());
+        r = builder.build();
+    }
+    
+    public Response get() {
+        return this.r;
+    }
+}
diff --git a/controller/src/main/java/org/apache/ambari/controller/HeartbeatHandler.java b/controller/src/main/java/org/apache/ambari/controller/HeartbeatHandler.java
new file mode 100644
index 0000000..6ea4294
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/controller/HeartbeatHandler.java
@@ -0,0 +1,519 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.controller;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Date;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.ambari.controller.Clusters;
+import org.apache.ambari.controller.Nodes;
+import org.apache.ambari.common.rest.agent.Action;
+import org.apache.ambari.common.rest.agent.Action.Kind;
+import org.apache.ambari.common.rest.agent.ActionResult;
+import org.apache.ambari.common.rest.agent.AgentRoleState;
+import org.apache.ambari.common.rest.agent.Command;
+import org.apache.ambari.common.rest.agent.CommandResult;
+import org.apache.ambari.common.rest.agent.ConfigFile;
+import org.apache.ambari.common.rest.agent.ControllerResponse;
+import org.apache.ambari.common.rest.agent.HeartBeat;
+import org.apache.ambari.common.rest.entities.NodeRole;
+import org.apache.ambari.common.rest.entities.NodeState;
+import org.apache.ambari.common.rest.entities.Stack;
+import org.apache.ambari.components.ComponentPlugin;
+import org.apache.ambari.resource.statemachine.ClusterFSM;
+import org.apache.ambari.resource.statemachine.FSMDriverInterface;
+import org.apache.ambari.resource.statemachine.RoleFSM;
+import org.apache.ambari.resource.statemachine.RoleEvent;
+import org.apache.ambari.resource.statemachine.RoleEventType;
+import org.apache.ambari.resource.statemachine.ServiceEvent;
+import org.apache.ambari.resource.statemachine.ServiceEventType;
+import org.apache.ambari.resource.statemachine.ServiceFSM;
+import org.apache.ambari.resource.statemachine.ServiceState;
+import org.apache.ambari.resource.statemachine.StateMachineInvokerInterface;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import com.google.inject.Inject;
+import com.google.inject.Singleton;
+
+@Singleton
+public class HeartbeatHandler {
+  
+  private static Log LOG = LogFactory.getLog(HeartbeatHandler.class);
+  private final Clusters clusters;
+  private final Nodes nodes;
+  private final StateMachineInvokerInterface stateMachineInvoker;
+  private final FSMDriverInterface driver;
+  
+  static final String DEFAULT_USER = "hadoop"; //TODO: this needs to come from the stack definition or something (AMBARI-169)
+    
+  @Inject
+  HeartbeatHandler(Clusters clusters, Nodes nodes, 
+      FSMDriverInterface driver, 
+      StateMachineInvokerInterface stateMachineInvoker) {
+    this.clusters = clusters;
+    this.nodes = nodes;
+    this.driver = driver;
+    this.stateMachineInvoker = stateMachineInvoker;
+  }
+  
+  public ControllerResponse processHeartBeat(HeartBeat heartbeat) 
+      throws Exception {
+    String hostname = heartbeat.getHostname();
+    Date heartbeatTime = new Date(System.currentTimeMillis());
+    nodes.checkAndUpdateNode(hostname, heartbeatTime);
+    
+    boolean firstContact = heartbeat.getFirstContact();
+     
+    if (firstContact) {
+      //this is a new agent
+      nodes.markNodeHealthy(hostname);
+    }
+    
+    List<CommandResult> commandResult = failedActions(heartbeat);
+    if (commandResult != null && !commandResult.isEmpty()) {
+      //mark agent unhealthy
+      nodes.markNodeUnhealthy(hostname, commandResult);
+    }
+
+    short responseId = (short)(heartbeat.getResponseId() + 1);
+    String clusterName = null;
+    int clusterRev = 0;
+
+    List<Action> allActions = new ArrayList<Action>();
+
+    if (nodes.getHeathOfNode(hostname) == NodeState.UNHEALTHY) { 
+      //no actions please
+      return createResponse(responseId, allActions, heartbeat);
+    }
+
+    //if the command execution takes longer than one heartbeat interval,
+    //the idleness check prevents the same node from getting more
+    //commands. In the future this could be improved
+    //to reflect the command execution state more accurately.
+    if (heartbeat.getIdle()) {
+      
+      List<ClusterNameAndRev> clustersNodeBelongsTo = 
+          getClustersNodeBelongsTo(hostname);
+      
+      if (clustersNodeBelongsTo.isEmpty()) {
+        return createResponse(responseId, allActions, heartbeat);
+      }
+      
+      //TODO: have an API in Clusters that can return a script 
+      //pertaining to all clusters
+      String script = 
+          clusters.getInstallAndConfigureScript(
+              clustersNodeBelongsTo.get(0).getClusterName(), 
+              clustersNodeBelongsTo.get(0).getRevision());
+      
+      //send the deploy script
+      getInstallAndConfigureAction(script, allActions);
+
+      if (!installAndConfigDone(script,heartbeat)) {
+        return createResponse(responseId,allActions,heartbeat);
+      }
+
+      for (ClusterNameAndRev clusterIdAndRev : clustersNodeBelongsTo) {
+        clusterName = clusterIdAndRev.getClusterName();
+        clusterRev = clusterIdAndRev.getRevision();
+
+        //get the cluster object corresponding to the clusterId
+        Cluster cluster = clusters.getClusterByName(clusterName);
+        //get the state machine reference to the cluster
+        ClusterFSM clusterFsm = 
+            driver.getFSMClusterInstance(clusterName);
+
+        //the state machine references to the services
+        List<ServiceFSM> clusterServices = clusterFsm.getServices();
+        //go through all the components, and check which role should be started
+        for (ServiceFSM service : clusterServices) {
+          ComponentPlugin plugin = 
+              cluster.getComponentDefinition(service.getServiceName());
+          //check whether all the dependent components have started up
+          if (!dependentComponentsActive(plugin, clusterFsm)) {
+            continue;
+          }
+          List<RoleFSM> roles = service.getRoles();
+          for (RoleFSM role : roles) {
+            boolean nodePlayingRole = 
+                nodePlayingRole(hostname, role.getRoleName());
+            if (nodePlayingRole) {          
+              //check whether the agent should start any server
+              if (role.shouldStart()) {
+                Action action = 
+                    plugin.startServer(cluster.getName(), role.getRoleName());
+                fillDetailsAndAddAction(action, allActions, clusterName,
+                    clusterRev, service.getServiceName(), 
+                    role.getRoleName());
+                //check the expected state of the agent and whether the start
+                //was successful
+                if (wasStartRoleSuccessful(clusterIdAndRev, 
+                    service.getServiceName(), role.getRoleName(), heartbeat)) {
+                  //raise an event to the state machine for a successful 
+                  //role-start
+                  stateMachineInvoker.getAMBARIEventHandler()
+                  .handle(new RoleEvent(RoleEventType.START_SUCCESS, role));
+                  // Update the node state
+                  NodeRole  rolestate = new NodeRole (role.getRoleName(), NodeRole.NODE_SERVER_STATE_UP, Util.getXMLGregorianCalendar(new Date()));
+                  nodes.getNode(hostname).getNodeState().updateRoleState(rolestate);
+                }
+              }
+              //check whether the agent should stop any server
+              //note that the 'stop' is implicit - if the heartbeat response
+              //doesn't contain the fact that role should be starting/running, 
+              //the agent stops it
+              if (role.shouldStop()) {
+                //raise an event to the state machine for a successful 
+                //role-stop instance
+                if (wasStopRoleSuccessful(clusterIdAndRev, 
+                    service.getServiceName(), role.getRoleName(), heartbeat)) {
+                  stateMachineInvoker.getAMBARIEventHandler()
+                  .handle(new RoleEvent(RoleEventType.STOP_SUCCESS, role));
+                  // Update the role state accordingly 
+                  NodeRole  rolestate = new NodeRole (role.getRoleName(), NodeRole.NODE_SERVER_STATE_DOWN, Util.getXMLGregorianCalendar(new Date()));
+                  nodes.getNode(hostname).getNodeState().updateRoleState(rolestate);
+                }
+              }
+            }
+          }
+          //check/create the special component/service-level 
+          //actions (like safemode check). Only once per component.
+          checkAndCreateActions(cluster, clusterFsm, clusterIdAndRev,
+              service, heartbeat, allActions);
+        }
+      }
+    }
+    return createResponse(responseId,allActions,heartbeat);
+  }
+  
+  //TODO: this should be moved to the ClusterImpl (a dependency graph 
+  //should be created there)
+  private boolean dependentComponentsActive(ComponentPlugin plugin, 
+      ClusterFSM cluster) throws IOException {
+    String[] dependents = plugin.getRequiredComponents();
+    if (dependents == null || dependents.length == 0) {
+      return true;
+    }
+    List<ServiceFSM> componentFsms = cluster.getServices();
+    
+    for (ServiceFSM component : componentFsms) {
+      for (String dependent : dependents) {
+        if (component.getServiceName().equals(dependent)) {
+          if (!component.isActive()) {
+            return false;
+          }
+        }
+      }
+    }
+    return true;
+  }
+  
+  private ControllerResponse createResponse(short responseId, 
+      List<Action> allActions, HeartBeat heartbeat) {
+    ControllerResponse r = new ControllerResponse();
+    r.setResponseId(responseId);
+    if (allActions.size() > 0) {//TODO: REMOVE THIS (AMBARI-158)
+      Action a = new Action();
+      a.setKind(Kind.NO_OP_ACTION);
+      allActions.add(a);
+    }
+    r.setActions(allActions);
+    return r;
+  }
+  
+  private boolean installAndConfigDone(String script, HeartBeat heartbeat) {
+    if (script == null || heartbeat.getInstallScriptHash() == -1) {
+      return false;
+    }
+    return script.hashCode() == heartbeat.getInstallScriptHash();
+  }
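+
+  //How the handshake appears to work: getInstallAndConfigureAction (below)
+  //sends the script with its hashCode as the action id, the agent echoes that
+  //hash back via getInstallScriptHash(), and -1 seems to serve as the
+  //not-yet-run marker.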
+    
+  private boolean wasStartRoleSuccessful(ClusterNameAndRev clusterIdAndRev, 
+      String component, String roleName, HeartBeat heartbeat) {
+    List<AgentRoleState> serverStates = heartbeat.getInstalledRoleStates();
+    if (serverStates == null) {
+      return false;
+    }
+
+    //TBD: create a hashmap (don't iterate for every server state)
+    for (AgentRoleState serverState : serverStates) {
+      if (serverState.getClusterId().equals(clusterIdAndRev.getClusterName()) &&
+          serverState.getClusterDefinitionRevision() == clusterIdAndRev.getRevision() &&
+          serverState.getComponentName().equals(component) &&
+          serverState.getRoleName().equals(roleName)) {
+        return true;
+      }
+    }
+    return false;
+  }
+  
+  private void getInstallAndConfigureAction(String script, 
+      List<Action> allActions) {
+    ConfigFile file = new ConfigFile();
+    file.setData(script);
+    //file.setOwner(DEFAULT_USER); //It should be the user that is running the ambari agent
+    
+    Action action = new Action();
+    action.setFile(file);
+    action.setKind(Kind.INSTALL_AND_CONFIG_ACTION);
+    //in the action ID send the hashCode of the script content so that 
+    //the controller can check how the installation went when a heartbeat
+    //response is sent back
+    action.setId(Integer.toString(script.hashCode()));
+    allActions.add(action);
+  }
+  
+  private boolean wasStopRoleSuccessful(ClusterNameAndRev clusterIdAndRev, 
+      String component, String roleName, HeartBeat heartbeat) {
+    List<AgentRoleState> serverStates = heartbeat.getInstalledRoleStates();
+    if (serverStates == null) {
+      return true;
+    }
+    boolean stopped = true;
+    //TBD: create a hashmap (don't iterate for every server state)
+    for (AgentRoleState serverState : serverStates) {
+      if (serverState.getClusterId().equals(clusterIdAndRev.getClusterName()) &&
+          serverState.getClusterDefinitionRevision() == clusterIdAndRev.getRevision() &&
+          serverState.getComponentName().equals(component) &&
+          serverState.getRoleName().equals(roleName)) {
+        stopped = false;
+      }
+    }
+    return stopped;
+  }
+  
+  private ActionResult getActionResult(HeartBeat heartbeat, String id) {
+    List<ActionResult> actionResults = heartbeat.getActionResults();
+    if (actionResults == null) {
+      return null;
+    }
+    for (ActionResult result : actionResults) {
+      if (result.getId().equals(id)) {
+        return result;
+      }
+    }
+    return null;
+  }
+  
+  private List<ClusterNameAndRev> getClustersNodeBelongsTo(String hostname) 
+      throws Exception {
+    String clusterName = nodes.getNode(hostname)
+        .getNodeState().getClusterName();
+    if (clusterName != null) {
+      int clusterRev = clusters.
+          getClusterByName(clusterName).getLatestRevisionNumber();
+      List<ClusterNameAndRev> l = new ArrayList<ClusterNameAndRev>();
+      l.add(new ClusterNameAndRev(clusterName, clusterRev));
+      return l;
+    }
+    return new ArrayList<ClusterNameAndRev>(); //empty
+  }  
+  
+  enum SpecialServiceIDs {
+      SERVICE_AVAILABILITY_CHECK_ID, SERVICE_PRESTART_CHECK_ID,
+      CREATE_STRUCTURE_ACTION_ID
+  }
+  
+  
+  static class ClusterNameAndRev implements Comparable<ClusterNameAndRev> {
+    String clusterName;
+    int revision;
+    ClusterNameAndRev(String clusterName, int revision) {
+      this.clusterName = clusterName;
+      this.revision = revision;
+    }
+    String getClusterName() {
+      return clusterName;
+    }
+    int getRevision() {
+      return revision;
+    }
+    @Override
+    public int hashCode() {
+      //note we only consider cluster names (one node can't have
+      //more than one version of components of the same cluster name 
+      //installed)
+      return clusterName.hashCode();
+    }
+    @Override
+    public boolean equals(Object obj) {
+      if (this == obj) {
+        return true;
+      }
+
+      if (obj == null || getClass() != obj.getClass()) {
+        return false;
+      }
+      //note we only compare cluster names (one node can't have
+      //more than one version of components of the same cluster name 
+      //installed)
+      return this.clusterName.equals(((ClusterNameAndRev)obj).getClusterName());
+    }
+    @Override
+    public int compareTo(ClusterNameAndRev o) {
+      return o.getClusterName().compareTo(getClusterName());
+    }
+  }
+
+  static String getSpecialActionID(ClusterNameAndRev clusterNameAndRev, 
+      String component, String role, SpecialServiceIDs serviceId) {
+    String id = clusterNameAndRev.getClusterName() +"-"+
+      clusterNameAndRev.getRevision() +"-"+ component + "-";
+    if (role != null) {
+      id += role + "-";
+    }
+    id += serviceId.toString();
+    return id;
+  }
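+
+  //For example, for cluster "blue" at revision 3, component "hdfs" and role
+  //"namenode" (hypothetical names), SERVICE_AVAILABILITY_CHECK_ID yields
+  //"blue-3-hdfs-namenode-SERVICE_AVAILABILITY_CHECK_ID".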
+  
+  private void checkAndCreateActions(Cluster cluster,
+      ClusterFSM clusterFsm, ClusterNameAndRev clusterIdAndRev, 
+      ServiceFSM service, HeartBeat heartbeat, 
+      List<Action> allActions) throws Exception {
+    ComponentPlugin plugin = 
+        cluster.getComponentDefinition(service.getServiceName());
+    //see whether the service is in the STARTED state, and if so,
+    //check whether there is any action-result that indicates success
+    //of the availability check (safemode, etc.)
+    if (service.getServiceState() == ServiceState.STARTED) {
+      String role = plugin.runCheckRole();  
+      if (nodePlayingRole(heartbeat.getHostname(), role)) {
+        String id = getSpecialActionID(clusterIdAndRev, service.getServiceName(), 
+            role, SpecialServiceIDs.SERVICE_AVAILABILITY_CHECK_ID);
+        ActionResult result = getActionResult(heartbeat, id);
+        if (result != null) {
+          //this action ran
+          //TODO: this needs to be generalized so that it handles the case
+          //where the service is not available for a couple of checkservice
+          //invocations
+          if (result.getCommandResult().getExitCode() == 0) {
+            stateMachineInvoker.getAMBARIEventHandler().handle(
+                new ServiceEvent(ServiceEventType.AVAILABLE_CHECK_SUCCESS,
+                    service));
+          } else {
+            stateMachineInvoker.getAMBARIEventHandler().handle(
+                new ServiceEvent(ServiceEventType.AVAILABLE_CHECK_FAILURE,
+                    service));
+          }
+        } else {
+          Action action = plugin.checkService(cluster.getName(), role);
+          fillActionDetails(action, clusterIdAndRev.getClusterName(),
+              clusterIdAndRev.getRevision(),service.getServiceName(), role);
+          action.setId(id);
+          action.setKind(Action.Kind.RUN_ACTION);
+          addAction(action, allActions);
+        }
+      }
+    }
+    
+    if (service.getServiceState() == ServiceState.PRESTART) {
+      String role = plugin.runPreStartRole();
+      if (nodePlayingRole(heartbeat.getHostname(), role)) {
+        String id = getSpecialActionID(clusterIdAndRev, service.getServiceName(), 
+            role, SpecialServiceIDs.SERVICE_PRESTART_CHECK_ID);
+        ActionResult result = getActionResult(heartbeat, id);
+        if (result != null) {
+          //this action ran
+          if (result.getCommandResult().getExitCode() == 0) {
+            stateMachineInvoker.getAMBARIEventHandler().handle(
+                new ServiceEvent(ServiceEventType.PRESTART_SUCCESS,
+                    service));
+          } else {
+            stateMachineInvoker.getAMBARIEventHandler().handle(
+                new ServiceEvent(ServiceEventType.PRESTART_FAILURE,
+                    service));
+          }
+        } else {
+          Action action = plugin.preStartAction(cluster.getName(), role);
+          fillActionDetails(action, clusterIdAndRev.getClusterName(),
+              clusterIdAndRev.getRevision(),service.getServiceName(), role);
+          action.setId(id);
+          action.setKind(Action.Kind.RUN_ACTION);
+          addAction(action, allActions);
+        }
+      }
+    }
+  }
+  
+  private boolean nodePlayingRole(String host, String role) 
+      throws Exception {
+    //TODO: iteration on every call seems avoidable ..
+    List<String> nodeRoles = nodes.getNodeRoles(host);
+    return nodeRoles.contains(role);
+  }
+  
+  private void addAction(Action action, List<Action> allActions) {
+    if (action != null) {
+      allActions.add(action);
+    }
+  }
+  
+  private void fillActionDetails(Action action, String clusterId, 
+      long clusterDefRev, String component, String role) throws Exception {
+    if (action == null) {
+      return;
+    }
+    // TODO: Should cluster store the flattened/expanded stack to avoid 
+    // expanding it every time?
+    Stack stack = this.clusters.getClusterStack(clusterId, true);
+    action.setClusterId(clusterId);
+    action.setClusterDefinitionRevision(clusterDefRev);
+    action.setComponent(component);
+    action.setRole(role);
+    action.setUser(stack.getComponentByName(component).getUser_group().getUser());
+    action.setCleanUpCommand(new Command("foobar","",new String[]{"foobar"}));//TODO: this needs fixing at some point
+    String workDir = role.equals(component + "-client") ? 
+        (clusterId + "-client") : (clusterId + "-" + role);
+    action.setWorkDirectoryComponent(workDir);
+  }
+  
+  private void fillDetailsAndAddAction(Action action, List<Action> allActions, 
+      String clusterId, 
+      long clusterDefRev, String component, String role) throws Exception {
+    fillActionDetails(action, clusterId, clusterDefRev, component, role);
+    addAction(action, allActions);
+  }
+  
+  private List<CommandResult> failedActions(HeartBeat heartbeat) {
+    // for now, we mark a node unhealthy if there was any action failure at all
+    List<ActionResult> results = heartbeat.getActionResults();
+    if (results == null) {
+      return null;
+    }
+    List<CommandResult> failures = new ArrayList<CommandResult>();
+    for (ActionResult result : results) {
+      if (result.getCommandResult() != null 
+          && result.getCommandResult().getExitCode() != 0) {
+        failures.add(result.getCommandResult());
+      }
+    }
+    return failures;
+  }
+  
+}
diff --git a/controller/src/main/java/org/apache/ambari/controller/Nodes.java b/controller/src/main/java/org/apache/ambari/controller/Nodes.java
new file mode 100644
index 0000000..524f2e7
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/controller/Nodes.java
@@ -0,0 +1,239 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.controller;
+
+import java.util.ArrayList;
+import java.util.Date;
+import java.util.GregorianCalendar;
+import java.util.List;
+import java.util.concurrent.ConcurrentHashMap;
+
+import javax.ws.rs.WebApplicationException;
+import javax.ws.rs.core.Response;
+import javax.xml.datatype.DatatypeFactory;
+import javax.xml.datatype.XMLGregorianCalendar;
+
+import org.apache.ambari.common.rest.agent.CommandResult;
+import org.apache.ambari.common.rest.entities.Node;
+import org.apache.ambari.common.rest.entities.NodeRole;
+import org.apache.ambari.common.rest.entities.NodeState;
+
+import com.google.inject.Singleton;
+
+@Singleton
+public class Nodes {
+        
+    public static final String AGENT_DEPLOYMENT_STATE_TOBE_INSTALLED = "AGENT_TOBE_INSTALLED";
+    public static final String AGENT_DEPLOYMENT_STATE_INSTALLED = "AGENT_INSTALLED";
+    
+    
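+    // With the defaults below, a node counts as not responding after
+    // 5 minutes * 3 missed intervals = 15 minutes (900,000 ms) without a heartbeat.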
+    public static final short NODE_HEARTBEAT_INTERVAL_IN_MINUTES = 5;
+    public static final short NODE_MAX_MISSING_HEARTBEAT_INTERVALS = 3;
+    public static final long  NODE_NOT_RESPONDING_DURATION = NODE_HEARTBEAT_INTERVAL_IN_MINUTES * 
+                                                             NODE_MAX_MISSING_HEARTBEAT_INTERVALS * 60 * 1000;
+    
+    // node name to Node object hashmap
+    protected ConcurrentHashMap<String, Node> nodes = new ConcurrentHashMap<String, Node>();
+
+    public ConcurrentHashMap<String, Node> getNodes () {
+        return nodes;
+    }
+    
+    /*
+     * Get Nodes 
+     * TODO: simplify logic? 
+     */
+    public List<Node> getNodesByState (String allocatedx, String alivex) throws Exception {
+        /*
+         * Convert string to boolean states
+         */
+        boolean allocated = true;
+        boolean alive = true;
+        if (allocatedx.equalsIgnoreCase("false")) { allocated = false; }
+        if (alivex.equalsIgnoreCase("false")) { alive = false; }
+        
+        
+        List<Node> list = new ArrayList<Node>();
+        GregorianCalendar cal = new GregorianCalendar(); 
+        cal.setTime(new Date());
+        XMLGregorianCalendar curTime = DatatypeFactory.newInstance().newXMLGregorianCalendar(cal);
+        
+        for (Node n : this.nodes.values()) {
+            if (allocatedx.equals("") && alivex.equals("")) {
+                list.add(n); 
+                continue;
+            }
+            if (allocatedx.equals("") && alive) {
+                if (getTimeDiffInMillis(curTime, n.getNodeState().getLastHeartbeatTime()) < NODE_NOT_RESPONDING_DURATION) {
+                    list.add(n);
+                    continue;
+                }
+            }
+            if (allocatedx.equals("") && !alive) {        
+                if (getTimeDiffInMillis(curTime, n.getNodeState().getLastHeartbeatTime()) >= NODE_NOT_RESPONDING_DURATION) {
+                    list.add(n);
+                    continue;
+                }
+            }
+            if (alivex.equals("") && allocated ) {
+                if (n.getNodeState().getAllocatedToCluster()) {
+                    list.add(n);
+                    continue;
+                }
+            }
+            if (alivex.equals("") && !allocated) {
+                if (!n.getNodeState().getAllocatedToCluster()) {
+                    list.add(n);
+                    continue;
+                }
+            }
+            if (allocated && alive) {
+                if (n.getNodeState().getAllocatedToCluster() && 
+                    getTimeDiffInMillis(curTime, n.getNodeState().getLastHeartbeatTime()) < NODE_NOT_RESPONDING_DURATION) {
+                    list.add(n);
+                    continue;
+                }
+            }
+            if (allocated && !alive) {
+                if (n.getNodeState().getAllocatedToCluster() && 
+                    getTimeDiffInMillis(curTime, n.getNodeState().getLastHeartbeatTime()) >= NODE_NOT_RESPONDING_DURATION) {
+                    list.add(n);
+                    continue;
+                }
+            }
+            if (!allocated && alive) {
+                if (!n.getNodeState().getAllocatedToCluster() && 
+                    getTimeDiffInMillis(curTime, n.getNodeState().getLastHeartbeatTime()) < NODE_NOT_RESPONDING_DURATION) {
+                    list.add(n);
+                    continue;
+                }
+            }
+            if (!allocated && !alive) {
+                if (!n.getNodeState().getAllocatedToCluster() && 
+                    getTimeDiffInMillis(curTime, n.getNodeState().getLastHeartbeatTime()) >= NODE_NOT_RESPONDING_DURATION) {
+                    list.add(n);
+                    continue;
+                }
+            }
+        }
+        
+        return list;
+    }
+    
+    /*
+     * Get the node
+     */
+    public Node getNode (String name) throws Exception {
+        if (!this.nodes.containsKey(name)) {
+            String msg = "Node ["+name+"] does not exist";
+            throw new WebApplicationException((new ExceptionResponse(msg, Response.Status.NOT_FOUND)).get());
+        }
+        return this.nodes.get(name);
+    }
+    
+    /*
+     * Record a heartbeat from the node, registering the node first if it
+     * is not already known.
+     */
+    public synchronized void checkAndUpdateNode (String name, Date heartbeatTime) throws Exception {
+        Node node = checkAndAddNodes(name);
+        node.getNodeState().setLastHeartbeatTime(heartbeatTime);
+    }
+    
+    /*
+     * Mark a node as unhealthy
+     */
+    public synchronized void markNodeUnhealthy (String name, List<CommandResult> results)
+        throws Exception {
+        Node node = checkAndAddNodes(name);
+        node.getNodeState().setHealth(NodeState.UNHEALTHY);      
+        node.getNodeState().setFailedCommandResults(results);
+    }
+    
+    /*
+     * Mark a node as healthy
+     */
+    public synchronized void markNodeHealthy (String name) throws Exception {
+        Node node = checkAndAddNodes(name);
+        node.getNodeState().setHealth(NodeState.HEALTHY);  
+        node.getNodeState().setFailedCommandResults(null);
+    }
+    
+    /*
+     * Get the health of the node
+     */
+    public synchronized boolean getHeathOfNode(String name) {
+      Node node = checkAndAddNodes(name);
+      return node.getNodeState().getHealth();
+    }
+    
+    /*
+     * Get the node's roles
+     */
+    public synchronized List<String> getNodeRoles(String host) 
+        throws Exception {
+        return getNode(host).getNodeState().getNodeRoleNames("");
+    }
+    
+    /*
+     * Get time difference
+     */
+    public static long getTimeDiffInMillis (XMLGregorianCalendar t2, 
+                                            XMLGregorianCalendar t1
+                                            ) throws Exception {
+        return t2.toGregorianCalendar().getTimeInMillis() -  
+            t1.toGregorianCalendar().getTimeInMillis();
+    }
+    
+    private Node checkAndAddNodes(String name) {
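+      // Note: the get/put pair below is not atomic; correctness relies on all
+      // callers being synchronized methods of this class.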
+      Node node = this.nodes.get(name);
+      
+      if (node == null) {
+          node = new Node(name);
+          getNodes().put(name, node);
+      }
+      return node;
+    }
+    
+    public static void main (String[] args) {
+       XMLGregorianCalendar t1;
+       XMLGregorianCalendar t2;
+       
+       try {
+           GregorianCalendar t1g = new GregorianCalendar();
+           t1g.setTime(new Date());
+           t1 = DatatypeFactory.newInstance().newXMLGregorianCalendar(t1g);
+           
+           Thread.sleep(500);
+           
+           GregorianCalendar t2g = new GregorianCalendar();
+           t2g.setTime(new Date());
+           t2 = DatatypeFactory.newInstance().newXMLGregorianCalendar(t2g);
+           
+           System.out.println("TIME ["+Nodes.getTimeDiffInMillis(t2, t1)+"]");
+           
+           System.out.println("TIME1 ["+t1.toString()+"]");
+           System.out.println("TIME2 ["+t2.toString()+"]");
+           
+       } catch (Exception e) {
+           e.printStackTrace();
+       }
+    }
+}
diff --git a/controller/src/main/java/org/apache/ambari/controller/StackFlattener.java b/controller/src/main/java/org/apache/ambari/controller/StackFlattener.java
new file mode 100644
index 0000000..413e2e3
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/controller/StackFlattener.java
@@ -0,0 +1,269 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.controller;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.TreeMap;
+import java.util.TreeSet;
+
+import javax.ws.rs.WebApplicationException;
+
+import org.apache.ambari.common.rest.entities.Component;
+import org.apache.ambari.common.rest.entities.ComponentDefinition;
+import org.apache.ambari.common.rest.entities.Configuration;
+import org.apache.ambari.common.rest.entities.ConfigurationCategory;
+import org.apache.ambari.common.rest.entities.Property;
+import org.apache.ambari.common.rest.entities.RepositoryKind;
+import org.apache.ambari.common.rest.entities.Role;
+import org.apache.ambari.common.rest.entities.Stack;
+import org.apache.ambari.common.rest.entities.UserGroup;
+import org.apache.ambari.components.ComponentPlugin;
+import org.apache.ambari.components.ComponentPluginFactory;
+
+import com.google.inject.Inject;
+
+/**
+ * This class flattens a stack and its ancestors into a single stack. The
+ * resulting stack has the client configuration at the top level and a fully
+ * expanded configuration for each role. Component-level configuration is
+ * pushed down into the appropriate roles and then removed. Finally, the
+ * "ambari" category is stripped from the role configurations.
+ */
+public class StackFlattener {
+
+  private static final String META_CATEGORY = "ambari";
+  
+  private final Stacks stacks;
+  private final ComponentPluginFactory plugins;
+
+  private UserGroup flattenUserGroup(List<Stack> stacks) {
+      // Walk from the most derived stack (last in the list) back to the base
+      // and return the first default user/group that is actually set.
+      for(int i=stacks.size()-1; i>=0; --i) {
+          UserGroup defaultUserGroup = stacks.get(i).getDefault_user_group();
+          if (defaultUserGroup != null) {
+              return defaultUserGroup;
+          }
+      }
+      return null;
+  }
+  
+  private List<RepositoryKind> flattenRepositories(List<Stack> stacks) {
+    Map<String, List<String>> repositories = 
+        new TreeMap<String, List<String>>();
+    for(int i=stacks.size()-1; i>=0; --i) {
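+      // Iteration starts at the most derived stack, so within each repository
+      // kind the URLs of derived stacks precede those inherited from ancestors.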
+      Stack stack = stacks.get(i);
+      List<RepositoryKind> kindList = stack.getPackageRepositories();
+      if (kindList != null) {
+        for(RepositoryKind kind: kindList) {
+          List<String> list = repositories.get(kind.getKind());
+          if (list == null) {
+            list = new ArrayList<String>();
+            repositories.put(kind.getKind(), list);
+          }
+          list.addAll(kind.getUrls());
+        }
+      }
+    }
+    
+    // translate it into a list of repositorykinds
+    List<RepositoryKind> result = new ArrayList<RepositoryKind>();
+    for(Map.Entry<String, List<String>> item: repositories.entrySet()) {
+      RepositoryKind kind = new RepositoryKind();
+      kind.setKind(item.getKey());
+      kind.setUrls(item.getValue());
+      result.add(kind);
+    }
+    return result;
+  }
+
+  /**
+   * Build the list of stacks in the inheritance tree.
+   * The base stack is at the front of the list and the final one is last.
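+   * For example (names illustrative): if stack "site" revision 3 declares
+   * parent "hadoop-base" revision 1, the result is [hadoop-base rev 1,
+   * site rev 3], so later flattening lets the derived stack's settings win.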
+   * @param stackName the name of the final stack
+   * @param stackRevision the revision of the final stack
+   * @return the list of stacks
+   * @throws IOException 
+   * @throws WebApplicationException 
+   */
+  private List<Stack> getStackList(String stackName, 
+                                   int stackRevision
+                                   ) throws WebApplicationException, 
+                                            IOException {
+    Stack stack = stacks.getStack(stackName, stackRevision);
+    List<Stack> result = new ArrayList<Stack>();
+    while (stack != null) {
+      result.add(0, stack);
+      String parentName = stack.getParentName();
+      int parentRev = stack.getParentRevision();
+      if (parentName != null) {
+        stack = stacks.getStack(parentName, parentRev);
+      } else {
+        stack = null;
+      }
+    }
+    return result;
+  }
+
+  private Set<String> getComponents(List<Stack> stacks) {
+    Set<String> result = new TreeSet<String>();
+    for(Stack stack: stacks) {
+      for(Component comp: stack.getComponents()) {
+        result.add(comp.getName());
+      }
+    }
+    return result;
+  }
+  
+  /**
+   * Merge the given Configuration into the map that we are building.
+   * @param map the map to update
+   * @param conf the configuration to merge in
+   * @param dropMeta should we drop the meta category
+   */
+  private void mergeInConfiguration(Map<String, Map<String, Property>> map,
+                                    Configuration conf, boolean dropMeta) {
+    if (conf != null) {
+      for (ConfigurationCategory category: conf.getCategory()) {
+        if (!dropMeta || !META_CATEGORY.equals(category.getName())) {
+          Map<String, Property> categoryMap = map.get(category.getName());
+          if (categoryMap == null) {
+            categoryMap = new TreeMap<String, Property>();
+            map.put(category.getName(), categoryMap);
+          }
+          for (Property prop: category.getProperty()) {
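+            // A later merge overwrites an earlier property of the same name,
+            // which is what gives derived stacks precedence over ancestors.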
+            categoryMap.put(prop.getName(), prop);
+          }
+        }
+      }
+    }
+  }
+
+  private Configuration buildConfig(Map<String, Map<String, Property>> map) {
+    Configuration conf = new Configuration();
+    List<ConfigurationCategory> categories = conf.getCategory();
+    for(String categoryName: map.keySet()) {
+      ConfigurationCategory category = new ConfigurationCategory();
+      categories.add(category);
+      category.setName(categoryName);
+      List<Property> properties = category.getProperty();
+      for (Property property: map.get(categoryName).values()) {
+        properties.add(property);
+      }
+    }
+    return conf;
+  }
+  
+  private Configuration buildClientConfiguration(List<Stack> stacks) {
+    Map<String, Map<String, Property>> newConfig =
+        new TreeMap<String, Map<String, Property>>();
+    for(Stack stack: stacks) {
+      mergeInConfiguration(newConfig, stack.getConfiguration(), false);
+    }
+    return buildConfig(newConfig);
+  }
+
+  private Configuration flattenConfiguration(List<Stack> stacks,
+                                             String componentName,
+                                             String roleName) {
+    Map<String, Map<String, Property>> newConfig =
+        new TreeMap<String, Map<String,Property>>();
+    for(Stack stack: stacks) {
+      mergeInConfiguration(newConfig, stack.getConfiguration(), true);
+      for (Component component: stack.getComponents()) {
+        if (component.getName().equals(componentName)) {
+          mergeInConfiguration(newConfig, component.getConfiguration(), true);
+          List<Role> roleList = component.getRoles();
+          if (roleList != null) {
+            for (Role role: roleList) {
+              if (role.getName().equals(roleName)) {
+                mergeInConfiguration(newConfig, role.getConfiguration(), true);
+              }
+            }
+          }
+        }
+      }
+    }
+    return buildConfig(newConfig);
+  }
+
+  private Component flattenComponent(String name, 
+                                     List<Stack> stacks) throws IOException {
+    Component result = null;
+    for(Stack stack: stacks) {
+      for(Component comp: stack.getComponents()) {
+        if (comp.getName().equals(name)) {
+          if (result != null) {
+            result.mergeInto(comp);
+          } else {
+            result = new Component();
+            result.setDefinition(new ComponentDefinition());
+            result.mergeInto(comp);
+          }
+        }
+      }
+    }
+    // we don't want the component config
+    result.setConfiguration(null);
+    List<Role> roles = new ArrayList<Role>();
+    result.setRoles(roles);
+    ComponentPlugin plugin = plugins.getPlugin(result.getDefinition());
+    for(String roleName: plugin.getActiveRoles()) {
+      Role role = new Role();
+      roles.add(role);
+      role.setName(roleName);
+      role.setConfiguration(flattenConfiguration(stacks, name, roleName));
+    }
+    return result;
+  }
+
+  @Inject
+  StackFlattener(Stacks stacks, ComponentPluginFactory plugins) {
+    this.stacks = stacks;
+    this.plugins = plugins;
+  }
+
+  public Stack flattenStack(String stackName, int stackRevision
+                            ) throws WebApplicationException, IOException {
+    List<Stack> stacks = getStackList(stackName, stackRevision);
+    Stack result = new Stack(stacks.get(stacks.size()-1));
+    result.setParentName(null);
+    result.setPackageRepositories(flattenRepositories(stacks));
+    result.setDefault_user_group(flattenUserGroup(stacks));
+    List<Component> components = new ArrayList<Component>();
+    result.setComponents(components);
+    for(String componentName: getComponents(stacks)) {
+      components.add(flattenComponent(componentName, stacks));
+    }
+    result.setConfiguration(buildClientConfiguration(stacks));
+    /*
+     * Set the default stack level user/group info, if it is not set 
+     * at the component level.
+     */
+    for (Component comp : components) {
+        if (comp.getUser_group() == null) {
+            comp.setUser_group(result.getDefault_user_group());
+        }
+    }
+    return result;
+  }
+}
diff --git a/controller/src/main/java/org/apache/ambari/controller/Stacks.java b/controller/src/main/java/org/apache/ambari/controller/Stacks.java
new file mode 100644
index 0000000..ad6e92d
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/controller/Stacks.java
@@ -0,0 +1,276 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.controller;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.InputStreamReader;
+import java.io.Reader;
+
+import java.net.URL;
+import java.nio.charset.Charset;
+import java.util.ArrayList;
+import java.util.Date;
+import java.util.List;
+import java.util.concurrent.ConcurrentHashMap;
+
+import javax.ws.rs.WebApplicationException;
+import javax.ws.rs.core.Response;
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.Unmarshaller;
+
+import org.apache.ambari.common.rest.entities.Stack;
+import org.apache.ambari.common.rest.entities.StackInformation;
+import org.apache.ambari.common.rest.entities.Component;
+import org.apache.ambari.common.rest.entities.Property;
+import org.apache.ambari.datastore.DataStoreFactory;
+import org.apache.ambari.datastore.DataStore;
+import org.codehaus.jettison.json.JSONException;
+import org.codehaus.jettison.json.JSONObject;
+
+import com.google.inject.Inject;
+import com.google.inject.Singleton;
+
+@Singleton
+public class Stacks {
+
+  private final DataStore dataStore;
+
+    @Inject
+    Stacks(DataStoreFactory dataStore) throws IOException {
+      this.dataStore = dataStore.getInstance();
+      recoverStacksAfterRestart();
+    }
+    
+    /*
+     * Stack name -> latest revision is always cached for each stack.
+     */
+    protected ConcurrentHashMap<String, Integer> stacks = new ConcurrentHashMap<String, Integer>();
+    
+    
+    /*
+     * Check if the stack exists. Stack names and latest revision numbers
+     * are always cached in memory.
+     */
+    public boolean stackExists(String stackName) throws IOException {
+        return this.stacks.containsKey(stackName) ||
+               this.dataStore.stackExists(stackName);
+    }
+    
+    public int getStackLatestRevision(String stackName) {
+        return this.stacks.get(stackName).intValue();
+    }
+    
+    /*
+     * Get stack. If revision = -1 then return latest revision
+     */
+    public Stack getStack(String stackName, int revision
+                          ) throws WebApplicationException, IOException {
+        
+        if (!stackExists(stackName)) {
+            String msg = "Stack ["+stackName+"] is not defined";
+            throw new WebApplicationException ((new ExceptionResponse(msg, Response.Status.NOT_FOUND)).get());
+        }
+        
+        /*
+         * If revision is -1, then return the latest revision
+         */  
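+        // e.g. getStack("hadoop-security", -1) resolves to the highest stored
+        // revision for that stack name (the name here is illustrative).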
+        Stack bp = null;
+        if (revision < 0) {
+            bp = dataStore.retrieveStack(stackName, getStackLatestRevision(stackName));
+        } else {
+            if ( revision > getStackLatestRevision(stackName)) {  
+                String msg = "Stack ["+stackName+"], revision ["+revision+"] does not exist";
+                throw new WebApplicationException ((new ExceptionResponse(msg, Response.Status.NOT_FOUND)).get());
+            }
+            bp = dataStore.retrieveStack(stackName, revision);
+        }
+        return bp;  
+    }
+     
+    /*
+     * Add or update the stack
+     */
+    public Stack addStack(String stackName, Stack bp) throws Exception {
+        /*
+         * Validate the stack, set the defaults, and store it as a new revision.
+         */
+        validateAndSetStackDefaults(stackName, bp);
+        int latestStackRevision = dataStore.storeStack(stackName, bp);
+        this.stacks.put(stackName, new Integer(latestStackRevision));
+        return bp;
+    }
+    
+    /*
+     * Import the default stack from the URL location
+     */
+    public Stack importDefaultStack (String stackName, String locationURL) throws IOException {
+        InputStream is = null;
+        try {
+            URL stackUrl = new URL(locationURL);
+            is = stackUrl.openStream();
+            
+            /* JSON FORMAT READER
+            ObjectMapper m = new ObjectMapper();
+            stack = m.readValue(is, Stack.class);
+            */
+            JAXBContext jc = JAXBContext.newInstance(org.apache.ambari.common.rest.entities.Stack.class);
+            Unmarshaller u = jc.createUnmarshaller();
+            Stack stack = (Stack)u.unmarshal(is);
+            return addStack(stackName, stack);
+        } catch (WebApplicationException we) {
+            throw we;
+        } catch (Exception e) {
+            throw new WebApplicationException ((new ExceptionResponse(e)).get());
+        } finally {
+            // always close the stream opened from the stack URL
+            if (is != null) {
+                is.close();
+            }
+        }
+    }
+   
+    /*
+     * Validate the stack and fill in defaults before importing it into the
+     * controller.
+     */
+    public void validateAndSetStackDefaults(String stackName, Stack stack) throws Exception {
+        
+        if (stack.getName() == null || stack.getName().equals("")) {
+            stack.setName(stackName);
+        } else if (!stack.getName().equals(stackName)) { 
+            String msg = "Name of stack in resource URL and stack definition does not match!";
+            throw new WebApplicationException ((new ExceptionResponse(msg, Response.Status.BAD_REQUEST)).get());
+        }
+        
+        if (stack.getRevision() == null || stack.getRevision().equals("") ||
+            stack.getRevision().equalsIgnoreCase("null")) {
+            stack.setRevision("-1");
+        }
+        if (stack.getParentName() != null && 
+            (stack.getParentName().equals("") || stack.getParentName().equalsIgnoreCase("null"))) {
+            stack.setParentName(null);
+        }
+        /*
+         * Set the creation time 
+         */
+        stack.setCreationTime(Util.getXMLGregorianCalendar(new Date()));
+    }
+    
+    /*
+     *  Get the list of stack revisions
+     */
+    public List<StackInformation> getStackRevisions(String stackName) throws Exception {
+        List<StackInformation> list = new ArrayList<StackInformation>();
+        if (!this.stacks.containsKey(stackName)) {
+            String msg = "Stack ["+stackName+"] does not exist";
+            throw new WebApplicationException ((new ExceptionResponse(msg, Response.Status.NOT_FOUND)).get());
+        }
+        
+        for (int rev=0; rev<=this.stacks.get(stackName); rev++) {
+            // Get the stack
+            Stack bp = dataStore.retrieveStack(stackName, rev);
+            StackInformation bpInfo = new StackInformation();
+            bpInfo.setCreationTime(bp.getCreationTime());
+            bpInfo.setName(bp.getName());
+            bpInfo.setRevision(bp.getRevision());
+            bpInfo.setParentName(bp.getParentName());
+            bpInfo.setParentRevision(bp.getParentRevision());
+            List<String> componentNameVersions = new ArrayList<String>();
+            for (Component com : bp.getComponents()) {
+                String comNameVersion = com.getName()+"-"+com.getVersion();
+                componentNameVersions.add(comNameVersion);
+            }
+            bpInfo.setComponent(componentNameVersions);
+            list.add(bpInfo);
+        }
+        return list;
+    }
+    
+    /*
+     * Return summary information for the latest revision of each stack.
+     */
+    public List<StackInformation> getStackList() throws Exception {
+        List<StackInformation> list = new ArrayList<StackInformation>();
+        for (String bpName : this.stacks.keySet()) {
+            // Get the latest stack
+            Stack bp = dataStore.retrieveStack(bpName, -1);
+            StackInformation bpInfo = new StackInformation();
+            // TODO: get the creation and update times from stack
+            bpInfo.setCreationTime(bp.getCreationTime());
+            bpInfo.setName(bp.getName());
+            bpInfo.setRevision(bp.getRevision());
+            bpInfo.setParentName(bp.getParentName());
+            bpInfo.setParentRevision(bp.getParentRevision());
+            List<String> componentNameVersions = new ArrayList<String>();
+            for (Component com : bp.getComponents()) {
+                componentNameVersions.add(com.getName());
+            }
+            bpInfo.setComponent(componentNameVersions);
+            list.add(bpInfo);
+        }
+        return list;
+    }
+    
+    /*
+     * Delete the stack including all its versions.
+     * The caller must ensure that no cluster uses this stack.
+     */
+    public void deleteStack(String stackName) throws Exception {
+        dataStore.deleteStack(stackName);
+        this.stacks.remove(stackName);
+    }
+    
+    /*
+     * UTIL methods
+     */
+    public Property getProperty(String key, String value) {
+        Property p = new Property();
+        p.setName(key);
+        p.setValue(value);
+        return p;
+    }
+    
+    private static String readAll(Reader rd) throws IOException {
+        StringBuilder sb = new StringBuilder();
+        int cp;
+        while ((cp = rd.read()) != -1) {
+            sb.append((char) cp);
+        }
+        return sb.toString();
+    }
+
+    public static JSONObject readJsonFromUrl(String url) throws IOException, JSONException {
+        InputStream is = new URL(url).openStream();
+        try {
+            BufferedReader rd = new BufferedReader(new InputStreamReader(is, Charset.forName("UTF-8")));
+            String jsonText = readAll(rd);
+            JSONObject json = new JSONObject(jsonText);
+            return json;
+        } finally {
+            is.close();
+        }
+    }
+
+    private void recoverStacksAfterRestart() throws IOException {
+        List<String> stackList = dataStore.retrieveStackList();
+        for (String stackName : stackList) {
+            this.stacks.put(stackName, dataStore.retrieveLatestStackRevisionNumber(stackName));
+        }
+    }
+}
diff --git a/controller/src/main/java/org/apache/ambari/controller/Util.java b/controller/src/main/java/org/apache/ambari/controller/Util.java
new file mode 100644
index 0000000..83fe03a
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/controller/Util.java
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.controller;
+
+import java.util.Date;
+import java.util.GregorianCalendar;
+
+import javax.xml.datatype.DatatypeConfigurationException;
+import javax.xml.datatype.DatatypeFactory;
+import javax.xml.datatype.XMLGregorianCalendar;
+
+public class Util {
+    
+    public static XMLGregorianCalendar getXMLGregorianCalendar (Date date)  {
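+        // Example: getXMLGregorianCalendar(new Date()) yields an XML dateTime
+        // such as 2012-07-10T14:30:00.000-07:00; a null date or a converter
+        // failure yields null.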
+        if (date == null) {
+            return null;
+        }
+        GregorianCalendar cal = new GregorianCalendar();
+        cal.setTime(date);
+        try {
+            return DatatypeFactory.newInstance().newXMLGregorianCalendar(cal);
+        } catch (NullPointerException ne) {
+            // fall through and return null
+        } catch (DatatypeConfigurationException de) {
+            // no DatatypeFactory implementation available; fall through
+        }
+        return null;
+    }
+    
+    public static String getInstallAndConfigureCommand() {
+      return "puppet --apply"; //TODO: this needs to be 'pluggable'/configurable
+    }
+
+}
diff --git a/controller/src/main/java/org/apache/hms/controller/rest/config/ContextProvider.java b/controller/src/main/java/org/apache/ambari/controller/rest/agent/AgentJAXBContextResolver.java
old mode 100755
new mode 100644
similarity index 61%
rename from controller/src/main/java/org/apache/hms/controller/rest/config/ContextProvider.java
rename to controller/src/main/java/org/apache/ambari/controller/rest/agent/AgentJAXBContextResolver.java
index 2ac7580..5dc9159
--- a/controller/src/main/java/org/apache/hms/controller/rest/config/ContextProvider.java
+++ b/controller/src/main/java/org/apache/ambari/controller/rest/agent/AgentJAXBContextResolver.java
@@ -1,4 +1,4 @@
-/*
+/**
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
  * distributed with this work for additional information
@@ -16,33 +16,44 @@
  * limitations under the License.
  */
 
-package org.apache.hms.controller.rest.config;
+package org.apache.ambari.controller.rest.agent;
 
 import javax.ws.rs.ext.ContextResolver;
 import javax.ws.rs.ext.Provider;
 import javax.xml.bind.JAXBContext;
-
-import org.apache.hms.common.entity.command.Command;
-import org.apache.hms.common.entity.manifest.ClusterManifest;
+import javax.xml.bind.JAXBException;
 
 import com.sun.jersey.api.json.JSONConfiguration;
 import com.sun.jersey.api.json.JSONJAXBContext;
+import org.apache.ambari.common.rest.agent.*;
 
 @Provider
-public class ContextProvider implements ContextResolver<JAXBContext> {
+public class AgentJAXBContextResolver implements ContextResolver<JAXBContext> {
+  private final JAXBContext context;
+  private static final Class<?>[] types = {
+      Action.class,
+      ActionResult.class,
+      ActionResults.class,
+      AgentRoleState.class,
+      Command.class, 
+      CommandResult.class,
+      ConfigFile.class,
+      ControllerResponse.class,
+      HardwareProfile.class,
+      HeartBeat.class
+      };
 
-  private JAXBContext context;
-  private Class[] types = { ClusterManifest.class, Command.class };
-
-  public ContextProvider() throws Exception {
-    this.context = new JSONJAXBContext(JSONConfiguration.natural().build(), types);
+  public AgentJAXBContextResolver() throws JAXBException {
+    this.context = new JSONJAXBContext(JSONConfiguration.natural().build(), 
+                                       types);
   }
 
   public JAXBContext getContext(Class<?> objectType) {
-    for (Class type : types) {
-      if (type.equals(objectType))
+    for(Class<?> c : types) {
+      if(c==objectType) {
         return context;
+      }
     }
     return null;
-  } 
-}
\ No newline at end of file
+  }
+}
diff --git a/controller/src/main/java/org/apache/ambari/controller/rest/agent/ControllerApplication.java b/controller/src/main/java/org/apache/ambari/controller/rest/agent/ControllerApplication.java
new file mode 100644
index 0000000..493ae04
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/controller/rest/agent/ControllerApplication.java
@@ -0,0 +1,19 @@
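+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */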
+package org.apache.ambari.controller.rest.agent;
+
+import java.util.HashSet;
+import java.util.Set;
+
+import javax.ws.rs.core.Application;
+
+import org.apache.ambari.controller.rest.config.ExtendedWadlGeneratorConfig;
+
+public class ControllerApplication extends Application {
+  @Override
+  public Set<Class<?>> getClasses() {
+      final Set<Class<?>> classes = new HashSet<Class<?>>();
+      // register root resources/providers
+      classes.add(ControllerResource.class);
+      classes.add(ExtendedWadlGeneratorConfig.class);
+      return classes;
+  }
+}
diff --git a/controller/src/main/java/org/apache/ambari/controller/rest/agent/ControllerResource.java b/controller/src/main/java/org/apache/ambari/controller/rest/agent/ControllerResource.java
new file mode 100644
index 0000000..d34648a
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/controller/rest/agent/ControllerResource.java
@@ -0,0 +1,469 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.controller.rest.agent;
+
+import java.net.InetAddress;
+import java.net.UnknownHostException;
+import java.util.ArrayList;
+import java.util.List;
+
+import javax.ws.rs.Consumes;
+import javax.ws.rs.DefaultValue;
+import javax.ws.rs.GET;
+import javax.ws.rs.POST;
+import javax.ws.rs.Path;
+import javax.ws.rs.Produces;
+import javax.ws.rs.QueryParam;
+import javax.ws.rs.WebApplicationException;
+import javax.ws.rs.core.MediaType;
+
+import org.apache.ambari.common.rest.agent.Action;
+import org.apache.ambari.common.rest.agent.Action.Kind;
+import org.apache.ambari.common.rest.agent.Action.Signal;
+import org.apache.ambari.common.rest.agent.ActionResult;
+import org.apache.ambari.common.rest.agent.AgentRoleState;
+import org.apache.ambari.common.rest.agent.Command;
+import org.apache.ambari.common.rest.agent.CommandResult;
+import org.apache.ambari.common.rest.agent.ControllerResponse;
+import org.apache.ambari.common.rest.agent.ConfigFile;
+import org.apache.ambari.common.rest.agent.HardwareProfile;
+import org.apache.ambari.common.rest.agent.HeartBeat;
+import org.apache.ambari.common.util.ExceptionUtil;
+import org.apache.ambari.controller.HeartbeatHandler;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import com.google.inject.Inject;
+
+/**
+ * Controller Resource represents the Ambari controller.
+ * It provides an API for Ambari agents to fetch cluster configuration
+ * changes and to report node attributes and the state of services running
+ * on the cluster nodes.
+ */
+@Path("controller")
+public class ControllerResource {
+  private static HeartbeatHandler hh;
+  private static Log LOG = LogFactory.getLog(ControllerResource.class);
+
+  @Inject
+  static void setHandler(HeartbeatHandler handler) {
+    hh = handler;
+  }
+
+  /** 
+   * Update state of the node (Internal API to be used by Ambari agent).
+   *  
+   * @response.representation.200.doc This API is invoked by Ambari agent running
+   *  on a cluster to update the state of various services running on the node.
+   * @response.representation.200.mediaType application/json
+   * @response.representation.406.doc Error in heartbeat message format
+   * @response.representation.408.doc Request Timed out
+   * @param message Heartbeat message
+   * @throws WebApplicationException if heartbeat processing fails
+   */
+  @Path("heartbeat/{hostname}")
+  @POST
+  @Consumes(MediaType.APPLICATION_JSON)
+  @Produces({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
+  public ControllerResponse heartbeat(HeartBeat message) 
+      throws WebApplicationException {
+    try {
+      return hh.processHeartBeat(message);
+    } catch (Exception e) {
+      LOG.info(ExceptionUtil.getStackTrace(e));
+      throw new WebApplicationException(500);
+    }
+  }
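+  // A minimal sketch of exercising the heartbeat endpoint above (host and
+  // REST root are illustrative, not taken from this patch):
+  //   curl -X POST -H 'Content-Type: application/json' -d @heartbeat.json \
+  //        http://<controller-host>/<rest-root>/controller/heartbeat/host.example.com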
+
+  /**
+   * Sample Ambari heartbeat message
+   * 
+   * @response.representation.200.example 
+   * {
+       "responseId": "-1",
+       "timestamp": "1318955147616",
+       "hostname": "host.example.com",
+       "hardwareProfile": {
+           "coreCount": "8",
+           "diskCount": "4",
+           "ramSize": "16442752",
+           "cpuSpeed": "2003",
+           "netSpeed": "1000",
+           "cpuFlags": "vmx est tm2..."
+       },
+       "installedRoleStates": [
+           {
+               "clusterId": "cluster-003",
+               "clusterDefinitionRevision": "2",
+               "componentName": "hdfs",
+               "roleName": "datanode",
+               "serverStatus": "STARTED"
+           }
+       ],
+       "actionResults": [
+           {
+               "clusterId": "cluster-001",
+               "id": "action-001",
+               "kind": "STOP_ACTION",
+               "clusterDefinitionRevision": "1"
+           },
+           {
+               "clusterId": "cluster-002",
+               "kind": "START_ACTION",
+               "commandResult": {
+                   "exitCode": "0",
+                   "stdout": "stdout",
+                   "stderr": "stderr"
+               },
+               "cleanUpCommandResult": {
+                   "exitCode": "0",
+                   "stdout": "stdout",
+                   "stderr": "stderr"
+               },
+               "component": "hdfs",
+               "role": "datanode",
+               "clusterDefinitionRevision": "2"
+           }
+       ],
+       "idle": "false"
+     }
+   * @response.representation.200.doc Print example of Ambari heartbeat message
+   * @response.representation.200.mediaType application/json
+   * @param stackId Stack ID
+   * @return Heartbeat message
+   */
+  @Path("heartbeat/sample")
+  @GET
+  @Produces(MediaType.APPLICATION_JSON)
+  public HeartBeat getHeartBeat(@DefaultValue("stack-123") 
+                                @QueryParam("stackId") String stackId) {
+    try {
+      InetAddress addr = InetAddress.getLocalHost();
+      List<ActionResult> actionResults = new ArrayList<ActionResult>();      
+
+      ActionResult actionResult = new ActionResult();
+      actionResult.setClusterDefinitionRevision(1);
+      actionResult.setId("action-001");
+      actionResult.setClusterId("cluster-001");
+      actionResult.setKind(Kind.STOP_ACTION);
+
+      ActionResult actionResult2 = new ActionResult();
+      actionResult2.setClusterDefinitionRevision(2);
+      actionResult2.setClusterId("cluster-002");
+      actionResult2.setCommandResult(new CommandResult(0, "stdout", "stderr"));
+      actionResult2.setCleanUpResult(new CommandResult(0, "stdout", "stderr"));
+      actionResult2.setKind(Kind.START_ACTION);
+      actionResult2.setComponent("hdfs");
+      actionResult2.setRole("datanode");
+
+      actionResults.add(actionResult);
+      actionResults.add(actionResult2);
+
+      HardwareProfile hp = new HardwareProfile();
+      hp.setCoreCount(8);
+      hp.setCpuFlags("fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr sse4_1 lahf_lm");
+      hp.setCpuSpeed(2003);
+      hp.setDiskCount(4);
+      hp.setNetSpeed(1000);
+      hp.setRamSize(16442752);
+      
+      List<AgentRoleState> agentRoles = new ArrayList<AgentRoleState>(2);
+      AgentRoleState agentRole1 = new AgentRoleState();
+      agentRole1.setClusterDefinitionRevision(2);
+      agentRole1.setClusterId("cluster-003");
+      agentRole1.setComponentName("hdfs");
+      agentRole1.setRoleName("datanode");
+      agentRole1.setServerStatus(AgentRoleState.State.STARTED);
+      agentRoles.add(agentRole1);
+      
+      HeartBeat hb = new HeartBeat();
+      hb.setResponseId((short)-1);
+      hb.setTimestamp(System.currentTimeMillis());
+      hb.setHostname(addr.getHostName());
+      hb.setActionResults(actionResults);
+      hb.setHardwareProfile(hp);
+      hb.setInstalledRoleStates(agentRoles);
+      hb.setIdle(false);
+      return hb;
+    } catch (UnknownHostException e) {
+      throw new WebApplicationException(e);
+    }
+  }
+  
+  /**
+   * Sample controller to agent response message
+   * 
+   * @response.representation.200.example 
+   * {
+      "responseId": "2",
+      "actions": [
+        {
+            "kind": "CREATE_STRUCTURE_ACTION",
+            "clusterId": "cluster-001",
+            "id": "action-000",
+            "component": "hdfs",
+            "role": "datanode",
+            "clusterDefinitionRevision": "0"
+        },
+        {
+            "kind": "STOP_ACTION",
+            "clusterId": "cluster-001",
+            "user": "hdfs",
+            "id": "action-001",
+            "component": "hdfs",
+            "role": "datanode",
+            "signal": "KILL",
+            "clusterDefinitionRevision": "2"
+        },
+        {
+            "kind": "START_ACTION",
+            "clusterId": "cluster-001",
+            "user": "hdfs",
+            "id": "action-002",
+            "component": "hdfs",
+            "role": "datanode",
+            "command": {
+                "script": "import os\nos._exit(0)",
+                "param": [
+                    "cluster",
+                    "role"
+                ],
+                "user": "root"
+            },
+            "cleanUpCommand": {
+                "script": "import os\nos._exit(0)",
+                "param": [
+                    "cluster",
+                    "role"
+                ],
+                "user": "root"
+            },
+            "clusterDefinitionRevision": "3"
+        },
+        {
+            "kind": "RUN_ACTION",
+            "clusterId": "cluster-001",
+            "user": "hdfs",
+            "id": "action-003",
+            "component": "hdfs",
+            "role": "datanode",
+            "command": {
+                "script": "import os\nos._exit(0)",
+                "param": [
+                    "cluster",
+                    "role"
+                ],
+                "user": "root"
+            },
+            "cleanUpCommand": {
+                "script": "import os\nos._exit(0)",
+                "param": [
+                    "cluster",
+                    "role"
+                ],
+                "user": "root"
+            },
+            "clusterDefinitionRevision": "3"
+        },
+        {
+            "kind": "WRITE_FILE_ACTION",
+            "clusterId": "cluster-001",
+            "user": "hdfs",
+            "id": "action-004",
+            "component": "hdfs",
+            "role": "datanode",
+            "clusterDefinitionRevision": "4",
+            "file": {
+                "data": "Content of the file",
+                "umask": "022",
+                "path": "config",
+                "owner": "hdfs",
+                "group": "hadoop",
+                "permission": "0700"
+            }
+        },
+        {
+            "kind": "DELETE_STRUCTURE_ACTION",
+            "clusterId": "cluster-001",
+            "user": "hdfs",
+            "id": "action-005",
+            "component": "hdfs",
+            "role": "datanode",
+            "clusterDefinitionRevision": "0"
+        }
+      ]
+    }
+   * @response.representation.200.doc Print an example of Controller Response to Agent
+   * @response.representation.200.mediaType application/json
+   * @return ControllerResponse A list of command to execute on agent
+   */
+  @Path("response/sample")
+  @GET
+  @Produces("application/json")
+  public ControllerResponse getControllerResponse() {
+    ControllerResponse controllerResponse = new ControllerResponse();
+    controllerResponse.setResponseId((short)2);    
+    
+    String script = "import os\nos._exit(0)";
+    String[] param = { "cluster", "role" };
+
+    Command command = new Command("root", script, param);
+    Command cleanUp = new Command("root", script, param);
+    
+    Action action = new Action();
+    action.setClusterId("cluster-001");
+    action.setId("action-000");
+    action.setKind(Kind.CREATE_STRUCTURE_ACTION);
+    action.setComponent("hdfs");
+    action.setRole("datanode");
+    
+    Action action1 = new Action();
+    action1.setClusterDefinitionRevision(2);
+    action1.setUser("hdfs");
+    action1.setComponent("hdfs");
+    action1.setRole("datanode");
+    action1.setKind(Kind.STOP_ACTION);
+    action1.setSignal(Signal.KILL);
+    action1.setClusterId("cluster-001");
+    action1.setId("action-001");
+
+    Action action2 = new Action();
+    action2.setClusterDefinitionRevision(3);
+    action2.setKind(Kind.START_ACTION);
+    action2.setId("action-002");
+    action2.setClusterId("cluster-001");
+    action2.setCommand(command);
+    action2.setCleanUpCommand(cleanUp);
+    action2.setUser("hdfs");
+    action2.setComponent("hdfs");
+    action2.setRole("datanode");
+    
+    Action action3 = new Action();
+    action3.setClusterDefinitionRevision(3);
+    action3.setUser("hdfs");
+    action3.setKind(Kind.RUN_ACTION);
+    action3.setId("action-003");
+    action3.setClusterId("cluster-001");
+    action3.setCommand(command);
+    action3.setCleanUpCommand(cleanUp);
+    action3.setUser("hdfs");
+    action3.setComponent("hdfs");
+    action3.setRole("datanode");
+    
+    Action action4 = new Action();
+    action4.setId("action-004");
+    action4.setClusterId("cluster-001");
+    action4.setClusterDefinitionRevision(4);
+    action4.setKind(Kind.WRITE_FILE_ACTION);
+    action4.setUser("hdfs");
+    action4.setComponent("hdfs");
+    action4.setRole("datanode");
+    String owner ="hdfs";
+    String group = "hadoop";
+    String permission = "0700";
+    String path = "config";
+    String umask = "022";
+    String data = "Content of the file";
+    action4.setFile(new ConfigFile(owner, group, permission, path, umask, data));
+    
+    Action action5 = new Action();
+    action5.setKind(Kind.DELETE_STRUCTURE_ACTION);
+    action5.setId("action-005");
+    action5.setClusterId("cluster-001");
+    action5.setUser("hdfs");
+    action5.setComponent("hdfs");
+    action5.setRole("datanode");
+    
+    List<Action> actions = new ArrayList<Action>();
+    actions.add(action);
+    actions.add(action1);
+    actions.add(action2);
+    actions.add(action3);
+    actions.add(action4);
+    actions.add(action5);
+    controllerResponse.setActions(actions);
+    return controllerResponse;
+  }
+}
diff --git a/controller/src/main/java/org/apache/ambari/controller/rest/agent/WadlResource.java b/controller/src/main/java/org/apache/ambari/controller/rest/agent/WadlResource.java
new file mode 100644
index 0000000..74cd674
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/controller/rest/agent/WadlResource.java
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.controller.rest.agent;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+
+import javax.ws.rs.GET;
+import javax.ws.rs.Path;
+import javax.ws.rs.Produces;
+import javax.ws.rs.core.Context;
+import javax.ws.rs.core.Response;
+import javax.ws.rs.core.UriInfo;
+import javax.xml.bind.Marshaller;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import com.sun.jersey.server.wadl.WadlApplicationContext;
+import com.sun.jersey.spi.resource.Singleton;
+import com.sun.research.ws.wadl.Application;
+
+@Produces({"application/vnd.sun.wadl+xml", "application/xml"})
+@Singleton
+@Path("wadl")
+public class WadlResource {
+ 
+    private static final Log LOG = LogFactory.getLog(WadlResource.class);
+ 
+    private static final String XML_HEADERS = "com.sun.xml.bind.xmlHeaders";
+ 
+    private WadlApplicationContext wadlContext;
+ 
+    private Application application;
+ 
+    private byte[] wadlXmlRepresentation;
+ 
+    public WadlResource(@Context WadlApplicationContext wadlContext) {
+        this.wadlContext = wadlContext;
+        this.application = wadlContext.getApplication();
+    }
+
+    /**
+     * Display the REST API description as WADL XML (rendered human-readable
+     * via the wadl.xsl stylesheet referenced in the XML header).
+     * @param uriInfo the request URI context, used to set the WADL base URI
+     * @return WADL XML representation of the REST API
+     */
+    @GET
+    public synchronized Response getWadl(@Context UriInfo uriInfo) {
+        if (wadlXmlRepresentation == null) {
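+            // Lazily marshal the WADL application once; later requests are
+            // served from the cached wadlXmlRepresentation bytes.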
+            if (application.getResources().getBase() == null) {
+                application.getResources().setBase(uriInfo.getBaseUri().toString());
+            }
+            try {
+                final Marshaller marshaller = wadlContext.getJAXBContext().createMarshaller();
+                marshaller.setProperty(XML_HEADERS, "<?xml-stylesheet type='text/xsl' href='/wadl.xsl'?>");
+                marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
+                final ByteArrayOutputStream os = new ByteArrayOutputStream();
+                marshaller.marshal(application, os);
+                wadlXmlRepresentation = os.toByteArray();
+                os.close();
+            } catch (Exception e) {
+                LOG.warn("Could not marshal wadl Application.", e);
+                return javax.ws.rs.core.Response.ok(application).build();
+            }
+        }
+        return Response.ok(new ByteArrayInputStream(wadlXmlRepresentation)).build();
+    }
+}
diff --git a/controller/src/main/java/org/apache/ambari/controller/rest/config/Examples.java b/controller/src/main/java/org/apache/ambari/controller/rest/config/Examples.java
new file mode 100644
index 0000000..432a00d
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/controller/rest/config/Examples.java
@@ -0,0 +1,111 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.ambari.controller.rest.config;
+
+import java.util.ArrayList;
+import java.util.Date;
+import java.util.List;
+
+import org.apache.ambari.common.rest.entities.ClusterDefinition;
+import org.apache.ambari.common.rest.entities.ClusterInformation;
+import org.apache.ambari.common.rest.entities.ClusterState;
+import org.apache.ambari.common.rest.entities.Node;
+import org.apache.ambari.common.rest.entities.NodeAttributes;
+import org.apache.ambari.common.rest.entities.NodeRole;
+import org.apache.ambari.common.rest.entities.NodeState;
+import org.apache.ambari.common.rest.entities.RoleToNodes;
+import org.apache.ambari.common.rest.entities.Stack;
+import org.apache.ambari.controller.Util;
+import org.apache.ambari.common.rest.entities.StackInformation;
+
+public class Examples {
+    public static final ClusterInformation CLUSTER_INFORMATION = new ClusterInformation();
+    public static final ClusterDefinition CLUSTER_DEFINITION = new ClusterDefinition();
+    public static final ClusterState CLUSTER_STATE = new ClusterState();
+    public static final List<String> activeServices = new ArrayList<String>();
+    public static final List<RoleToNodes> rnm = new ArrayList<RoleToNodes>();
+    public static final List<Node> NODES = new ArrayList<Node>();
+    public static final Stack STACK = new Stack();
+    public static final StackInformation STACK_INFORMATION = new StackInformation();
+    public static final Node NODE = new Node();
+
+    static {
+        CLUSTER_DEFINITION.setName("blue.dev.Cluster123");
+        CLUSTER_DEFINITION.setStackName("cluster123");
+        CLUSTER_DEFINITION.setStackRevision("0");
+        CLUSTER_DEFINITION.setDescription("cluster123 - development cluster");
+        CLUSTER_DEFINITION.setGoalState(ClusterState.CLUSTER_STATE_ATTIC);
+        activeServices.add("hdfs");
+        activeServices.add("mapred");
+        CLUSTER_DEFINITION.setEnabledServices(activeServices);
+        
+        String nodes = "jt-nodex,nn-nodex,hostname-1x,hostname-2x,hostname-3x,"+
+                       "hostname-4x,node-2x,node-3x,node-4x";  
+        CLUSTER_DEFINITION.setNodes(nodes);
+        CLUSTER_INFORMATION.setDefinition(CLUSTER_DEFINITION);
+
+        RoleToNodes rnme = new RoleToNodes();
+        rnme.setRoleName("jobtracker-role");
+        rnme.setNodes("jt-nodex");
+        rnm.add(rnme);
+        
+        rnme = new RoleToNodes();
+        rnme.setRoleName("namenode-role");
+        rnme.setNodes("nn-nodex");
+        rnm.add(rnme);
+        
+        rnme = new RoleToNodes();
+        rnme.setRoleName("slaves-role");
+        rnme.setNodes("hostname-1x,hostname-2x,hostname-3x,"+
+                       "hostname-4x,node-2x,node-3x,node-4x");
+        rnm.add(rnme);
+        CLUSTER_DEFINITION.setRoleToNodesMap(rnm);
+        
+        CLUSTER_STATE.setState("ATTIC");
+        try {
+            CLUSTER_STATE.setCreationTime(Util.getXMLGregorianCalendar(new Date()));
+            CLUSTER_STATE.setDeployTime(Util.getXMLGregorianCalendar(new Date()));
+        } catch (Exception e) {
+            // example timestamps are best-effort; ignore conversion failures
+        }
+        NODE.setName("localhost");
+        NodeAttributes nodeAttributes = new NodeAttributes();
+        nodeAttributes.setCPUCores((short)1);
+        nodeAttributes.setDISKUnits((short)4);
+        nodeAttributes.setRAMInGB(6);
+        NODE.setNodeAttributes(nodeAttributes);
+        NodeState nodeState = new NodeState();
+        nodeState.setClusterName("cluster-123");
+        List<NodeRole> roles = new ArrayList<NodeRole>();
+        NodeRole ns1 = new NodeRole("jobtracker-role", NodeRole.NODE_SERVER_STATE_DOWN, Util.getXMLGregorianCalendar(new Date()));
+        NodeRole ns2 = new NodeRole("namenode-role", NodeRole.NODE_SERVER_STATE_DOWN, Util.getXMLGregorianCalendar(new Date()));
+        roles.add(ns1); roles.add(ns2);
+        nodeState.setNodeRoles(roles);
+        NODE.setNodeState(nodeState);
+        NODES.add(NODE);
+        
+        STACK.setName("stack");
+        STACK.setRevision("1");
+        STACK_INFORMATION.setName("HDP");
+        STACK_INFORMATION.setRevision("1");
+        List<String> components = new ArrayList<String>();
+        components.add("hdfs");
+        components.add("mapreduce");
+        STACK_INFORMATION.setComponent(components);
+    }
+}
diff --git a/controller/src/main/java/org/apache/ambari/controller/rest/config/ExtendedWadlGeneratorConfig.java b/controller/src/main/java/org/apache/ambari/controller/rest/config/ExtendedWadlGeneratorConfig.java
new file mode 100644
index 0000000..8c77d84
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/controller/rest/config/ExtendedWadlGeneratorConfig.java
@@ -0,0 +1,46 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.controller.rest.config;
+
+import java.util.List;
+
+import com.sun.jersey.api.wadl.config.WadlGeneratorConfig;
+import com.sun.jersey.api.wadl.config.WadlGeneratorDescription;
+import com.sun.jersey.server.wadl.WadlGenerator;
+import com.sun.jersey.server.wadl.generators.WadlGeneratorApplicationDoc;
+import com.sun.jersey.server.wadl.generators.WadlGeneratorGrammarsSupport;
+import com.sun.jersey.server.wadl.generators.resourcedoc.WadlGeneratorResourceDocSupport;
+
+/**
+ * This subclass of {@link WadlGeneratorConfig} defines/configures 
+ * {@link WadlGenerator}s to be used for generating WADL.
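+ * <p>
+ * A hedged wiring sketch (the web.xml registering it is not part of this
+ * change): Jersey 1.x can pick up such a config through the servlet
+ * init-param "com.sun.jersey.config.property.WadlGeneratorConfig", e.g.
+ * <pre>
+ *   &lt;init-param&gt;
+ *     &lt;param-name&gt;com.sun.jersey.config.property.WadlGeneratorConfig&lt;/param-name&gt;
+ *     &lt;param-value&gt;org.apache.ambari.controller.rest.config.ExtendedWadlGeneratorConfig&lt;/param-value&gt;
+ *   &lt;/init-param&gt;
+ * </pre>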
+ */
+public class ExtendedWadlGeneratorConfig extends WadlGeneratorConfig {
+
+  @Override
+  public List<WadlGeneratorDescription> configure() {
+    return generator(WadlGeneratorApplicationDoc.class)
+      .prop("applicationDocsStream", "application-doc.xml")
+      .generator(WadlGeneratorGrammarsSupport.class)
+      .prop("grammarsStream", "application-grammars.xml")
+      .generator(WadlGeneratorResourceDocSupport.class)
+      .prop("resourceDocStream", "resourcedoc.xml")
+      .descriptions();
+  }
+}
diff --git a/controller/src/main/java/org/apache/ambari/controller/rest/config/PrivateWadlGeneratorConfig.java b/controller/src/main/java/org/apache/ambari/controller/rest/config/PrivateWadlGeneratorConfig.java
new file mode 100644
index 0000000..e45530d
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/controller/rest/config/PrivateWadlGeneratorConfig.java
@@ -0,0 +1,46 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.controller.rest.config;
+
+import java.util.List;
+
+import com.sun.jersey.api.wadl.config.WadlGeneratorConfig;
+import com.sun.jersey.api.wadl.config.WadlGeneratorDescription;
+import com.sun.jersey.server.wadl.WadlGenerator;
+import com.sun.jersey.server.wadl.generators.WadlGeneratorApplicationDoc;
+import com.sun.jersey.server.wadl.generators.WadlGeneratorGrammarsSupport;
+import com.sun.jersey.server.wadl.generators.resourcedoc.WadlGeneratorResourceDocSupport;
+
+/**
+ * This subclass of {@link WadlGeneratorConfig} defines/configures 
+ * {@link WadlGenerator}s to be used for generating WADL.
+ */
+public class PrivateWadlGeneratorConfig extends WadlGeneratorConfig {
+
+  @Override
+  public List<WadlGeneratorDescription> configure() {
+    return generator(WadlGeneratorApplicationDoc.class)
+      .prop("applicationDocsStream", "application-agent-doc.xml")
+      .generator(WadlGeneratorGrammarsSupport.class)
+      .prop("grammarsStream", "application-grammars.xml")
+      .generator(WadlGeneratorResourceDocSupport.class)
+      .prop("resourceDocStream", "resourcedoc-agent.xml")
+      .descriptions();
+  }
+}
diff --git a/controller/src/main/java/org/apache/ambari/controller/rest/config/WadlDocGenerator.java b/controller/src/main/java/org/apache/ambari/controller/rest/config/WadlDocGenerator.java
new file mode 100644
index 0000000..81b5614
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/controller/rest/config/WadlDocGenerator.java
@@ -0,0 +1,192 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.controller.rest.config;
+
+import java.io.File;
+import java.util.logging.Level;
+import java.util.logging.Logger;
+
+import javax.ws.rs.core.MediaType;
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.Unmarshaller;
+
+import com.sun.jersey.api.model.AbstractMethod;
+import com.sun.jersey.api.model.AbstractResource;
+import com.sun.jersey.api.model.AbstractResourceMethod;
+import com.sun.jersey.api.model.Parameter;
+import com.sun.jersey.server.wadl.WadlGenerator;
+import com.sun.jersey.server.wadl.generators.resourcedoc.ResourceDocAccessor;
+import com.sun.jersey.server.wadl.generators.resourcedoc.model.MethodDocType;
+import com.sun.jersey.server.wadl.generators.resourcedoc.model.ResourceDocType;
+import com.sun.research.ws.wadl.Application;
+import com.sun.research.ws.wadl.Method;
+import com.sun.research.ws.wadl.Param;
+import com.sun.research.ws.wadl.RepresentationType;
+import com.sun.research.ws.wadl.Request;
+import com.sun.research.ws.wadl.Resource;
+import com.sun.research.ws.wadl.Resources;
+import com.sun.research.ws.wadl.Response;
+
+/**
+ * A {@link WadlGenerator} that delegates to a configured generator chain and
+ * loads the resourcedoc.xml data (via {@link ResourceDocAccessor}), which can
+ * be used to extend the generated application.wadl.<br>
+ * 
+ * @version $Id$
+ */
+public class WadlDocGenerator implements WadlGenerator {
+
+    private static final Logger LOG = Logger.getLogger( WadlDocGenerator.class.getName() );
+
+    private WadlGenerator _delegate;
+    private File _resourceDocFile;
+    private ResourceDocAccessor _resourceDoc;
+
+    /* (non-Javadoc)
+     * @see com.sun.jersey.server.impl.wadl.WadlGenerator#setWadlGeneratorDelegate(com.sun.jersey.server.impl.wadl.WadlGenerator)
+     */
+    @Override
+    public void setWadlGeneratorDelegate( WadlGenerator delegate ) {
+        _delegate = delegate;
+    }
+
+    /* (non-Javadoc)
+     * @see com.sun.jersey.server.impl.wadl.WadlGenerator#getRequiredJaxbContextPath()
+     */
+    public String getRequiredJaxbContextPath() {
+        return _delegate.getRequiredJaxbContextPath();
+    }
+
+    public void setResourceDocFile( File resourceDocFile ) {
+        _resourceDocFile = resourceDocFile;
+    }
+
+    public void init() throws Exception {
+        _delegate.init();
+        final ResourceDocType resourceDoc = loadFile( _resourceDocFile, ResourceDocType.class, ResourceDocType.class,
+        		Examples.class);
+        _resourceDoc = new ResourceDocAccessor( resourceDoc );
+    }
+
+    private <T> T loadFile( File fileToLoad, Class<T> targetClass, Class<?> ... classesToBeBound ) {
+        if ( fileToLoad == null ) {
+            throw new IllegalArgumentException( "The resource-doc file to load is not set!" );
+        }
+        try {
+            final JAXBContext c = JAXBContext.newInstance( classesToBeBound );
+            final Unmarshaller m = c.createUnmarshaller();
+            return targetClass.cast( m.unmarshal( fileToLoad ) );
+        } catch( Exception e ) {
+            LOG.log( Level.SEVERE, "Could not unmarshal file " + fileToLoad, e );
+            throw new RuntimeException( "Could not unmarshal file " + fileToLoad, e );
+        }
+    }
+
+    /**
+     * @return
+     * @see com.sun.jersey.server.impl.wadl.WadlGenerator#createApplication()
+     */
+    public Application createApplication() {
+        return _delegate.createApplication();
+    }
+
+    /**
+     * @param r the abstract resource
+     * @param m the abstract resource method
+     * @return the method created by the delegate
+     * @see com.sun.jersey.server.impl.wadl.WadlGenerator#createMethod(com.sun.jersey.api.model.AbstractResource, com.sun.jersey.api.model.AbstractResourceMethod)
+     */
+    public Method createMethod( AbstractResource r,
+            AbstractResourceMethod m ) {
+        final Method result = _delegate.createMethod( r, m );
+        // Look up the resource-doc entry for this method; this is the hook
+        // where extra documentation could be merged into the WADL method.
+        final MethodDocType methodDoc = _resourceDoc.getMethodDoc( r.getResourceClass(), m.getMethod() );
+        return result;
+    }
+
+    /**
+     * @param arg0
+     * @param arg1
+     * @return
+     * @see com.sun.jersey.server.impl.wadl.WadlGenerator#createRequest(com.sun.jersey.api.model.AbstractResource, com.sun.jersey.api.model.AbstractResourceMethod)
+     */
+    public Request createRequest( AbstractResource arg0,
+            AbstractResourceMethod arg1 ) {
+        return _delegate.createRequest( arg0, arg1 );
+    }
+
+    /**
+     * @param arg0
+     * @param arg1
+     * @param arg2
+     * @return
+     * @see com.sun.jersey.server.impl.wadl.WadlGenerator#createParam(com.sun.jersey.api.model.AbstractResource, com.sun.jersey.api.model.AbstractMethod, com.sun.jersey.api.model.Parameter)
+     */
+    public Param createParam( AbstractResource arg0,
+            AbstractMethod arg1, Parameter arg2 ) {
+        return _delegate.createParam( arg0, arg1, arg2 );
+    }
+
+    /**
+     * @param arg0
+     * @param arg1
+     * @param arg2
+     * @return
+     * @see com.sun.jersey.server.impl.wadl.WadlGenerator#createRequestRepresentation(com.sun.jersey.api.model.AbstractResource, com.sun.jersey.api.model.AbstractResourceMethod, javax.ws.rs.core.MediaType)
+     */
+    public RepresentationType createRequestRepresentation(
+            AbstractResource arg0, AbstractResourceMethod arg1, MediaType arg2 ) {
+        return _delegate.createRequestRepresentation( arg0, arg1, arg2 );
+    }
+
+    /**
+     * @param arg0
+     * @param arg1
+     * @return
+     * @see com.sun.jersey.server.impl.wadl.WadlGenerator#createResource(com.sun.jersey.api.model.AbstractResource, java.lang.String)
+     */
+    public Resource createResource( AbstractResource arg0, String arg1 ) {
+        return _delegate.createResource( arg0, arg1 );
+    }
+
+    /**
+     * @return
+     * @see com.sun.jersey.server.impl.wadl.WadlGenerator#createResources()
+     */
+    public Resources createResources() {
+        return _delegate.createResources();
+    }
+
+    /**
+     * @param arg0
+     * @param arg1
+     * @return
+     * @see com.sun.jersey.server.impl.wadl.WadlGenerator#createResponse(com.sun.jersey.api.model.AbstractResource, com.sun.jersey.api.model.AbstractResourceMethod)
+     */
+    public Response createResponse( AbstractResource arg0,
+            AbstractResourceMethod arg1 ) {
+        return _delegate.createResponse( arg0, arg1 );
+    }
+
+}
\ No newline at end of file
diff --git a/controller/src/main/java/org/apache/ambari/controller/rest/resources/ClustersResource.java b/controller/src/main/java/org/apache/ambari/controller/rest/resources/ClustersResource.java
new file mode 100644
index 0000000..321ca16
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/controller/rest/resources/ClustersResource.java
@@ -0,0 +1,351 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.ambari.controller.rest.resources;
+
+import java.util.List;
+
+import org.apache.ambari.common.rest.entities.ClusterDefinition;
+import org.apache.ambari.common.rest.entities.ClusterInformation;
+import org.apache.ambari.common.rest.entities.ClusterState;
+import org.apache.ambari.common.rest.entities.Node;
+import org.apache.ambari.common.rest.entities.Stack;
+import org.apache.ambari.controller.Clusters;
+import org.apache.ambari.controller.ExceptionResponse;
+import org.apache.ambari.controller.rest.config.Examples;
+
+import com.google.inject.Inject;
+
+import javax.ws.rs.DELETE;
+import javax.ws.rs.GET;
+import javax.ws.rs.Consumes;
+import javax.ws.rs.DefaultValue;
+import javax.ws.rs.PUT;
+import javax.ws.rs.Path;
+import javax.ws.rs.PathParam;
+import javax.ws.rs.Produces;
+import javax.ws.rs.QueryParam;
+import javax.ws.rs.WebApplicationException;
+import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.Response;
+
+
+/**
+ * Clusters Resource represents the collection of Hadoop clusters in a data center
+ */
+@Path("clusters")
+public class ClustersResource {
+    
+    private static Clusters clusters;
+    
+    @Inject
+    static void init(Clusters clus) {
+      clusters = clus;
+    }
+    
+    /** 
+     * Get the list of clusters.
+     *
+     *  State: "ALL"           : All the clusters (irrespective of their state), 
+     *         "ACTIVE"        : All the active state clusters
+     *         "INACTIVE"      : All the inactive state clusters
+     *         "ATTIC"         : All the retired i.e. ATTIC state clusters
+     *  @response.representation.200.doc       Return ClusterInformation
+     *  @response.representation.200.mediaType application/json application/xml
+     *  @response.representation.200.example   {@link Examples#CLUSTER_INFORMATION}
+     *  @response.representation.204.doc       No cluster defined
+     *  @response.representation.500.doc       Internal Server Error
+     *  @param  state      The state of the cluster
+     *  @param  search     Optional search expression to return list of matching 
+     *                     clusters
+     *  @return            Returns the list of clusters based on specified state 
+     *                     and optional search criteria.
+     *  @throws Exception  throws Exception 
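+     *
+     *  A hypothetical Jersey 1.x client call (com.sun.jersey.api.client);
+     *  BASE_URI is a placeholder for the controller's REST base URI, which
+     *  is not defined in this file:
+     *  <pre>
+     *  Client client = Client.create();
+     *  List&lt;ClusterInformation&gt; active = client
+     *      .resource(BASE_URI + "/clusters")
+     *      .queryParam("state", "ACTIVE")
+     *      .accept(MediaType.APPLICATION_JSON)
+     *      .get(new GenericType&lt;List&lt;ClusterInformation&gt;&gt;() {});
+     *  </pre>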
+     */
+    @GET
+    @Path("")
+    @Produces({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
+    public List<ClusterInformation> getClusterList(
+                                 @DefaultValue("ALL") @QueryParam("state") String state,
+                                 @DefaultValue("") @QueryParam("search") String search) throws Exception {
+        List<ClusterInformation> searchResults = null;
+        try {
+            searchResults = clusters.getClusterInformationList(state);
+            if (searchResults.isEmpty()) {
+                throw new WebApplicationException(Response.Status.NO_CONTENT);
+            }   
+        }catch (WebApplicationException we) {
+            throw we;
+        }catch (Exception e) {
+            throw new WebApplicationException((new ExceptionResponse(e)).get());
+        } 
+        return searchResults;
+    }
+    
+    
+    /** 
+     * Get the information of a specified Hadoop Cluster. Information includes Cluster definition 
+     * and the cluster state.
+     * 
+     *  @response.representation.200.doc        Get the definition and current state of the specified Hadoop cluster
+     *  @response.representation.200.mediaType  application/json application/xml
+     *  @response.representation.200.example    {@link Examples#CLUSTER_INFORMATION}
+     *  @response.representation.404.doc        Specified cluster does not exist
+     *  
+     *  @param      clusterName                 Name of the cluster
+     *  @return                                 Returns the Cluster Information
+     *  @throws     WebApplicationException     Throws exception 
+     */
+    @GET
+    @Path("{clusterName}")
+    @Produces({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
+    public ClusterInformation getClusterDefinition(@PathParam("clusterName") String clusterName) throws WebApplicationException {
+        try {
+            return clusters.getClusterInformation(clusterName);
+        }catch (WebApplicationException we) {
+            throw we;
+        }catch (Exception e) {
+            throw new WebApplicationException((new ExceptionResponse(e)).get());
+        }       
+    }
+    
+    /** 
+     * Add/Update cluster definition. If the cluster does not exist, it will be created.
+     *  
+     *  When creating a new cluster, the cluster definition must specify the name, stack name, 
+     *  and the nodes associated with the cluster. 
+     *  Default values for new cluster definition parameters, if not specified
+     *    -- goalstate          = "INACTIVE"  (optionally, it can be set to ACTIVE)
+     *    -- stack revision = latest revision
+     *    -- RoleToNodes        = If explicit association is not specified then Ambari
+     *                            will determine the optimal role to nodes association. 
+     *                            User can view it by running the command in dry_run.
+     *    -- active services    = "ALL" i.e. if not specified all the configured 
+     *                            services will be activated
+     *    -- description        = Default description will be associated
+     *    -- dry_run            = false
+     *  
+     *  
+     *  For a new cluster to be in the active state, the cluster definition needs to be 
+     *  complete and valid, e.g. the number of associated nodes is sufficient for 
+     *  each role, the stack specified for the cluster configuration exists, 
+     *  etc. 
+     *  
+     *  Updating an existing cluster definition requires only the specific sub-elements
+     *  being changed to be specified. 
+     * 
+     * @response.representation.200.doc         Returns new or updated cluster definition.
+     * @response.representation.200.mediaType   application/json application/xml
+     * @response.representation.200.example     {@link Examples#CLUSTER_DEFINITION}
+     * @response.representation.400.doc         Bad request (See "ErrorMessage" in the response
+     *                                          http header describing specific error condition).
+     * 
+     * @param   clusterName                     Name of the cluster
+     * @param   dry_run                         Run without actual execution
+     * @param   cluster                         Cluster definition to be created or updated.
+     *                                          The cluster name cannot be changed through this API.
+     * @return                                  Returns updated cluster definition
+     * @throws  Exception                       throws Exception
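+     *
+     * A hedged dry-run sketch with the Jersey 1.x client (BASE_URI is a
+     * placeholder; the field values mirror {@link Examples#CLUSTER_DEFINITION}):
+     * <pre>
+     * ClusterDefinition def = new ClusterDefinition();
+     * def.setName("blue.dev.Cluster123");
+     * def.setStackName("cluster123");
+     * def.setNodes("jt-nodex,nn-nodex,hostname-1x");
+     * ClusterDefinition applied = Client.create()
+     *     .resource(BASE_URI + "/clusters/blue.dev.Cluster123")
+     *     .queryParam("dry_run", "true")
+     *     .type(MediaType.APPLICATION_XML)
+     *     .accept(MediaType.APPLICATION_XML)
+     *     .put(ClusterDefinition.class, def);
+     * </pre>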
+     */ 
+    @PUT
+    @Path("{clusterName}")
+    @Consumes({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
+    @Produces({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
+    public ClusterDefinition updateClusterDefinition(
+           @PathParam("clusterName") String clusterName,
+           @DefaultValue("false") @QueryParam("dry_run") boolean dry_run,
+           ClusterDefinition cluster) throws Exception {    
+        try {
+            return clusters.updateCluster(clusterName, cluster, dry_run);
+        }catch (WebApplicationException we) {
+            throw we;
+        }catch (Exception e) {
+            throw new WebApplicationException((new ExceptionResponse(e)).get());
+        }     
+    }
+     
+    /** 
+     * Rename the cluster.
+     * 
+     * @response.representation.200.doc         Rename the cluster. This operation is allowed only
+     *                                          when cluster is in ATTIC state
+     * @response.representation.200.mediaType   application/json application/xml
+     * @response.representation.400.doc         Bad request (See "ErrorMessage" in the response
+     *                                          http header describing specific error condition).
+     * @response.representation.406.doc         Not Acceptable. Cluster is not in ATTIC state.
+     * @response.representation.404.doc         Cluster does not exist
+     * 
+     * @param   clusterName                     Existing name of the cluster
+     * @param   new_name                        New name of the cluster
+     * @throws  Exception                       throws Exception 
+     */ 
+    @PUT
+    @Path("{clusterName}/rename")
+    @Consumes({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
+    public Response renameCluster(
+           @PathParam("clusterName") String clusterName,
+           @DefaultValue("") @QueryParam("new_name") String new_name) throws Exception {    
+        try {
+            clusters.renameCluster(clusterName, new_name);
+            return Response.ok().build();
+        }catch (WebApplicationException we) {
+            throw we;
+        }catch (Exception e) {
+            throw new WebApplicationException((new ExceptionResponse(e)).get());
+        }     
+    }
+    
+    /** 
+     * Delete the cluster.
+     * 
+     * @response.representation.200.doc The delete operation moves the cluster 
+     *                                  to the "ATTIC" state; the cluster 
+     *                                  definition is then purged from the controller 
+     *                                  repository once all the nodes are released 
+     *                                  from the cluster. It is an asynchronous operation.
+     *                                  In the "ATTIC" state all the 
+     *                                  cluster services are stopped and the 
+     *                                  nodes are released. All the cluster data 
+     *                                  will be lost.
+     *  
+     *  @param  clusterName             Name of the cluster
+     *  @throws Exception               throws Exception
+     */
+    @DELETE
+    @Path("{clusterName}")
+    @Consumes({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
+    public Response deleteCluster(@PathParam("clusterName") String clusterName) throws Exception {
+        try {
+            clusters.deleteCluster(clusterName);
+            return Response.ok().build();
+        }catch (WebApplicationException we) {
+            throw we;
+        }catch (Exception e) {
+            throw new WebApplicationException((new ExceptionResponse(e)).get());
+        }    
+    }
+    
+    /** 
+     * Get the cluster state.
+     *  
+     *  @response.representation.200.doc            This provides the run time state of the 
+     *                                              cluster. Cluster state is derived based 
+     *                                              on the state of various services running on the cluster.
+     *                                              Representative cluster states:
+     *                                                  "ACTIVE"  : Hadoop stack is deployed on cluster nodes and 
+     *                                                              required cluster services are running
+     *                                                  "INACTIVE": No cluster services are running. Hadoop stack 
+     *                                                              may or may not be deployed on the cluster nodes
+     *                                                  "ATTIC"   : Only cluster definition is available. No nodes are 
+     *                                                              reserved for the cluster in this state.
+     *  @response.representation.200.mediaType   application/json
+     *  @response.representation.200.example     {@link Examples#CLUSTER_STATE}
+     *  @response.representation.404.doc         Cluster does not exist
+     *  
+     *  @param  clusterName             Name of the cluster
+     *  @return                         Returns cluster state object.
+     *  @throws Exception               throws Exception   
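+     *
+     *  A hypothetical client call (BASE_URI is a placeholder):
+     *  <pre>
+     *  ClusterState state = Client.create()
+     *      .resource(BASE_URI + "/clusters/blue.dev.Cluster123/state")
+     *      .accept(MediaType.APPLICATION_JSON)
+     *      .get(ClusterState.class);
+     *  </pre>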
+     */
+    @GET
+    @Path("{clusterName}/state")
+    @Produces({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
+    public ClusterState getClusterState(@PathParam("clusterName") String clusterName) throws Exception {
+        try {
+            return clusters.getClusterState(clusterName);
+        }catch (WebApplicationException we) {
+            throw we;
+        }catch (Exception e) {
+            throw new WebApplicationException((new ExceptionResponse(e)).get());
+        }    
+    }
+    
+    /** 
+     * Get list of nodes associated with the cluster.
+     *  
+     *  @response.representation.200.doc Get the list of nodes associated with the cluster.
+     *  The "alive" boolean query parameter selects the 
+     *  type of nodes to return based on their state, i.e. live or 
+     *  dead. Live nodes are the ones that are consistently heartbeating with 
+     *  the controller. If both live and dead nodes need to be returned, 
+     *  do not specify the alive query parameter.
+     *  @response.representation.200.mediaType application/json
+     *  @response.representation.200.example {@link Examples#NODES}
+     *  @response.representation.204.doc    No content; No nodes are associated with the cluster
+     *  @response.representation.500.doc    Internal Server Error; No nodes are associated with the cluster
+     *                                      (See "ErrorMessage" in the response http header describing specific error condition).
+     *  
+     *  @param  clusterName Name of the cluster
+     *  @param  role        Optionally specify the role name to get the nodes 
+     *                      associated with the service role
+     *  @param  alive       Boolean value (true/false) to specify if the nodes to be 
+     *                      returned are alive or dead. If this query parameter 
+     *                      is not specified then all the nodes associated with the cluster
+     *                      are returned. 
+     *  @return             List of nodes
+     *  @throws Exception   throws Exception
+     */
+    @GET
+    @Path("{clusterName}/nodes")
+    @Produces({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
+    public List<Node> getNodes (@PathParam("clusterName") String clusterName,
+                                @DefaultValue("") @QueryParam("role") String role,
+                                @DefaultValue("") @QueryParam("alive") String alive) throws Exception {    
+        try {
+            List<Node> list = clusters.getClusterNodes(clusterName, role, alive);
+            
+            if (list.isEmpty()) {
+                String msg = "No nodes found!";
+                throw new WebApplicationException((new ExceptionResponse(msg, Response.Status.NO_CONTENT)).get());
+            }
+            return list;
+        }catch (WebApplicationException we) {
+            throw we;
+        }catch (Exception e) {
+            throw new WebApplicationException((new ExceptionResponse(e)).get());
+        }   
+    }
+    
+    /** 
+     * Get the stack associated with the cluster
+     *  
+     *  @response.representation.200.doc Get the stack associated with the cluster
+     *  @response.representation.200.mediaType   application/json
+     *  @response.representation.200.example {@link Examples#STACK}
+     *  @response.representation.404.doc        Cluster does not exist
+     *  
+     *  @param  clusterName Name of the cluster
+     *  @param  expanded    Optional boolean value indicating whether to 
+     *                      retrieve only the cluster-level stack or the fully
+     *                      derived stack, in-lining the parent stacks 
+     *                      it is derived from
+     *  @return             Stack
+     *  @throws Exception   throws Exception
+     */
+    @GET
+    @Path("{clusterName}/stack")
+    @Produces({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
+    public Stack getClusterStack (@PathParam("clusterName") String clusterName,
+                                @DefaultValue("true") @QueryParam("expanded") boolean expanded) throws Exception {    
+        try {
+            return clusters.getClusterStack(clusterName, expanded);
+        }catch (WebApplicationException we) {
+            throw we;
+        }catch (Exception e) {
+            throw new WebApplicationException((new ExceptionResponse(e)).get());
+        }   
+    }
+}
diff --git a/controller/src/main/java/org/apache/ambari/controller/rest/resources/ContextProvider.java b/controller/src/main/java/org/apache/ambari/controller/rest/resources/ContextProvider.java
new file mode 100644
index 0000000..945d717
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/controller/rest/resources/ContextProvider.java
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.controller.rest.resources;
+
+import javax.ws.rs.ext.ContextResolver;
+import javax.ws.rs.ext.Provider;
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.JAXBException;
+
+import org.apache.ambari.common.rest.entities.*;
+
+import com.sun.jersey.api.json.JSONConfiguration;
+import com.sun.jersey.api.json.JSONJAXBContext;
+
+@Provider
+public class ContextProvider implements ContextResolver<JAXBContext> {
+
+  private final JAXBContext context;
+  private Class<?>[] types = { };
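+  // NOTE: with an empty types array this resolver matches no entity class, so
+  // getContext() always returns null and Jersey falls back to its default
+  // JAXB handling; restoring the commented-out list below would route those
+  // types through the natural-JSON context built in the constructor.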
+  /*
+  private Class<?>[] types = { ClusterDefinition.class,
+                               ClusterInformation.class,
+                               ClusterState.class,
+                               Component.class,
+                               ComponentDefinition.class,
+                               Configuration.class,
+                               ConfigurationCategory.class,
+                               Node.class,
+                               NodeAttributes.class,
+                               NodeServer.class,
+                               NodeState.class,
+                               Property.class,
+                               RepositoryKind.class,
+                               Role.class,
+                               RoleToNodes.class,
+                               Stack.class,
+                               StackInformation.class
+  }; */
+
+  public ContextProvider() throws JAXBException {
+    this.context = new JSONJAXBContext(JSONConfiguration.natural().build(), 
+                                       types);
+  }
+
+  public JAXBContext getContext(Class<?> objectType) {
+    for (Class<?> type : types) {
+      if (type.equals(objectType))
+        return context;
+    }
+    return null;
+  } 
+}
\ No newline at end of file
diff --git a/controller/src/main/java/org/apache/ambari/controller/rest/resources/NodesResource.java b/controller/src/main/java/org/apache/ambari/controller/rest/resources/NodesResource.java
new file mode 100644
index 0000000..bc346a1
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/controller/rest/resources/NodesResource.java
@@ -0,0 +1,113 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.ambari.controller.rest.resources;
+
+import java.util.List;
+
+import javax.ws.rs.DefaultValue;
+import javax.ws.rs.GET;
+import javax.ws.rs.Path;
+import javax.ws.rs.PathParam;
+import javax.ws.rs.Produces;
+import javax.ws.rs.QueryParam;
+import javax.ws.rs.WebApplicationException;
+import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.Response;
+
+import org.apache.ambari.common.rest.entities.Node;
+import org.apache.ambari.controller.ExceptionResponse;
+import org.apache.ambari.controller.Nodes;
+import org.apache.ambari.controller.rest.config.Examples;
+
+import com.google.inject.Inject;
+
+
+/** Nodes Resource represents the collection of cluster nodes.
+ */
+@Path("nodes")
+public class NodesResource {
+            
+    private static Nodes nodes;
+    
+    @Inject
+    static void init(Nodes n) {
+      nodes = n;
+    }
+
+    /** Get list of nodes
+     *
+     *  The "allocated" and "alive" boolean query parameters specify the type of nodes to return based on their state, i.e. whether they are already allocated to any cluster and whether they are live or dead. 
+     *  Live nodes are the ones that are consistently heartbeating with the controller. If neither "allocated" nor "alive" is specified then all the nodes are returned.  
+     *  
+     * @response.representation.200.doc       Successful. 
+     * @response.representation.200.mediaType application/json
+     * @response.representation.204.doc       No nodes found
+     * @response.representation.200.example   {@link Examples#NODES}
+     * 
+     *  @param  allocated               Boolean value to specify if the nodes to be returned are allocated/reserved for some cluster (specify null to return both allocated and unallocated nodes)
+     *  @param  alive                   Boolean value to specify if the nodes to be returned are alive or dead (specify null to return both live and dead nodes) 
+     *  @return                         List of nodes
+     *  @throws Exception               throws Exception
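+     *
+     *  A hypothetical client call returning only dead nodes (BASE_URI is a
+     *  placeholder):
+     *  <pre>
+     *  List&lt;Node&gt; dead = Client.create()
+     *      .resource(BASE_URI + "/nodes")
+     *      .queryParam("alive", "false")
+     *      .accept(MediaType.APPLICATION_JSON)
+     *      .get(new GenericType&lt;List&lt;Node&gt;&gt;() {});
+     *  </pre>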
+     */
+    @GET
+    @Path("")
+    @Produces({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
+    public List<Node> getNodesList (@DefaultValue("") @QueryParam("allocated") String allocated,
+                                    @DefaultValue("") @QueryParam("alive") String alive) throws Exception {
+        List<Node> list;
+        try {
+            list = nodes.getNodesByState(allocated, alive);
+            if (list.isEmpty()) {
+                throw new WebApplicationException(Response.Status.NO_CONTENT);
+            }   
+            return list;
+        }catch (WebApplicationException we) {
+            throw we;
+        }catch (Exception e) {
+            throw new WebApplicationException((new ExceptionResponse(e)).get());
+        } 
+    }
+
+    /*
+     * Get specified Node information
+     */
+    /** 
+     * Get the node information that includes, service states, node attributes etc.
+     * 
+     * @response.representation.200.doc       Successful. 
+     * @response.representation.200.mediaType application/json
+     * @response.representation.404.doc       Node does not exist
+     * @response.representation.200.example   {@link Examples#NODE}
+     *  
+     * @param hostname          Fully qualified hostname
+     * @return                  Returns the node information
+     * @throws Exception        throws Exception
+     */
+    @GET
+    @Path("{hostname}")
+    @Produces({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
+    public Node getNode (@PathParam("hostname") String hostname) throws Exception {
+        try {
+            return nodes.getNode(hostname);
+        }catch (WebApplicationException we) {
+            throw we;
+        }catch (Exception e) {
+            throw new WebApplicationException((new ExceptionResponse(e)).get());
+        } 
+    }
+}
diff --git a/controller/src/main/java/org/apache/ambari/controller/rest/resources/StacksResource.java b/controller/src/main/java/org/apache/ambari/controller/rest/resources/StacksResource.java
new file mode 100644
index 0000000..f5250eb
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/controller/rest/resources/StacksResource.java
@@ -0,0 +1,217 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.ambari.controller.rest.resources;
+
+import java.util.List;
+
+import javax.ws.rs.Consumes;
+import javax.ws.rs.DELETE;
+import javax.ws.rs.DefaultValue;
+import javax.ws.rs.GET;
+import javax.ws.rs.PUT;
+import javax.ws.rs.Path;
+import javax.ws.rs.PathParam;
+import javax.ws.rs.Produces;
+import javax.ws.rs.QueryParam;
+import javax.ws.rs.WebApplicationException;
+import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.Response;
+
+import org.apache.ambari.common.rest.entities.Stack;
+import org.apache.ambari.common.rest.entities.StackInformation;
+import org.apache.ambari.controller.Clusters;
+import org.apache.ambari.controller.Stacks;
+import org.apache.ambari.controller.ExceptionResponse;
+import org.apache.ambari.controller.rest.config.Examples;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import com.google.inject.Inject;
+
+/** 
+ * StacksResource represents a Hadoop stack to be installed on a 
+ * cluster. Stacks define a collection of Hadoop components that are
+ * installed together on a cluster, along with their configuration.
+ */
+@Path("stacks")
+public class StacksResource {
+ 
+    private static Log LOG = LogFactory.getLog(StacksResource.class);
+    private static Stacks stacks;
+    private static Clusters clusters;
+    
+    @Inject
+    static void init(Stacks s, Clusters c) {
+      stacks = s;
+      clusters = c;
+    }
+    
+    /** 
+     * Get the list of stacks
+     * 
+     * @response.representation.200.doc         Successful
+     * @response.representation.200.mediaType   application/json application/xml
+     * @response.representation.200.example     {@link Examples#STACK_INFORMATION}
+     * @response.representation.204.doc         List is empty.
+     *  
+     * @return Returns the list of StackInformation items
+     * @throws Exception
+     */
+    @GET
+    @Path("")
+    @Produces({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
+    public List<StackInformation> listStacks() throws Exception {
+        List<StackInformation> list;
+        try {
+            list = stacks.getStackList();
+            if (list.isEmpty()) {
+                throw new WebApplicationException(Response.Status.NO_CONTENT);
+            } 
+            return list;
+        }catch (WebApplicationException we) {
+            throw we;
+        }catch (Exception e) {
+            LOG.error("Caught error in get stacks", e);
+            throw new WebApplicationException((new ExceptionResponse(e)).get());
+        } 
+    }
+    
+    /** 
+     * Get a stack
+     * 
+     * @response.representation.200.doc       Get a stack
+     * @response.representation.200.mediaType application/json application/xml
+     * @response.representation.200.example   {@link Examples#STACK}
+     *  
+     * @param  stackName       Name of the stack
+     * @param  revision        The optional stack revision; if not specified, the latest revision is returned
+     * @return                 stack definition
+     * @throws Exception       throws Exception 
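+     *
+     * A hypothetical client call fetching the latest revision (BASE_URI is a
+     * placeholder; revision defaults to -1, i.e. latest):
+     * <pre>
+     * Stack latest = Client.create()
+     *     .resource(BASE_URI + "/stacks/cluster123")
+     *     .accept(MediaType.APPLICATION_XML)
+     *     .get(Stack.class);
+     * </pre>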
+     */
+    @GET
+    @Path("{stackName}")
+    @Produces({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
+    public Stack getStack(@PathParam("stackName") String stackName, 
+                                  @DefaultValue("-1") @QueryParam("revision") String revision) throws Exception {     
+        try {
+            return stacks.getStack(stackName, Integer.parseInt(revision));
+        }catch (WebApplicationException we) {
+            throw we;
+        }catch (Exception e) {
+            throw new WebApplicationException((new ExceptionResponse(e)).get());
+        }      
+    }
+    
+    /** 
+     * Get a stack's revisions
+     * 
+     * @response.representation.200.doc       Get stack revisions
+     * @response.representation.200.mediaType application/json application/xml
+     * @response.representation.200.example   {@link Examples#STACK_INFORMATION}
+     *  
+     * @param  stackName       Name of the stack
+     * 
+     * @return                 List of stack revisions
+     * @throws Exception       throws Exception
+     */
+    @GET
+    @Path("{stackName}/revisions")
+    @Produces({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
+    public List<StackInformation> getStackRevisions(@PathParam("stackName") String stackName) throws Exception {     
+        try {
+            List<StackInformation> list = stacks.getStackRevisions(stackName);
+            if (list.isEmpty()) {
+                throw new WebApplicationException(Response.Status.NO_CONTENT);
+            }
+            return list;
+        }catch (WebApplicationException we) {
+            throw we;
+        }catch (Exception e) {
+            throw new WebApplicationException((new ExceptionResponse(e)).get());
+        }      
+    }
+    
+    /** 
+     * Delete the stack
+     * 
+     * @response.representation.200.doc       Delete a stack
+     * @response.representation.200.mediaType application/json application/xml
+     *  
+     * @param  stackName        Name of the stack
+     * @param  revision         Revision of the stack
+     * @throws Exception        throws Exception (TBD)
+     */
+    @DELETE
+    @Path("{stackName}")
+    @Consumes({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
+    public Response deleteStack(@PathParam("stackName") String stackName) throws Exception {     
+        try {
+            if (clusters.isStackUsed(stackName)) {
+              throw new WebApplicationException(new ExceptionResponse(stackName+ 
+                  " is still used by one or more clusters.",
+                  Response.Status.BAD_REQUEST).get());
+            }
+            stacks.deleteStack(stackName);
+            return Response.ok().build();
+        }catch (WebApplicationException we) {
+            throw we;
+        }catch (Exception e) {
+            throw new WebApplicationException((new ExceptionResponse(e)).get());
+        }    
+    }
+    
+    /** 
+     * Create/Update the stack.
+     *
+     * If the named stack does not already exist, it is created with revision zero.
+     * If the named stack exists, it is updated as a new revision.
+     * The optional locationURL query parameter can specify the location of a repository
+     * of stacks. If specified, the stack is downloaded from the repository.
+     *
+     * @response.representation.200.doc         Successfully created the new or updated the existing stack.
+     * @response.representation.200.mediaType   application/json application/xml
+     * @response.representation.200.example     {@link Examples#STACK}
+     * @response.representation.404.doc         Specified stack does not exist: when creating a new one, 
+     *                                          it was not found in the repository; when updating, it does not
+     *                                          exist in the controller.    
+     * 
+     * @param stackName Name of the stack
+     * @param locationURL   URL pointing to the location of the stack
+     * @param stack         Input stack object specifying the stack definition
+     * @return              Returns the new revision of the stack
+     * @throws Exception    throws Exception
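+     *
+     * A hedged import sketch (BASE_URI and the repository URL are assumptions;
+     * the empty Stack body is ignored when the url parameter is given):
+     * <pre>
+     * Stack imported = Client.create()
+     *     .resource(BASE_URI + "/stacks/hadoop-security")
+     *     .queryParam("url", "http://repo.example.org/stacks/hadoop-security.xml")
+     *     .type(MediaType.APPLICATION_XML)
+     *     .put(Stack.class, new Stack());
+     * </pre>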
+     */
+    @PUT
+    @Path("{stackName}")
+    @Consumes({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
+    public Stack updateStack(@PathParam("stackName") String stackName, 
+                                     @DefaultValue("") @QueryParam("url") String locationURL,
+                                     Stack stack) throws Exception {
+        try {
+            if (locationURL == null || locationURL.equals("")) {
+                return stacks.addStack(stackName, stack);
+            } else {
+                return stacks.importDefaultStack (stackName, locationURL);
+            }
+        }catch (WebApplicationException we) {
+            throw we;
+        }catch (Exception e) {
+            throw new WebApplicationException((new ExceptionResponse(e)).get());
+        } 
+    } 
+}
diff --git a/controller/src/main/java/org/apache/ambari/controller/rest/resources/WadlResource.java b/controller/src/main/java/org/apache/ambari/controller/rest/resources/WadlResource.java
new file mode 100644
index 0000000..31e37e5
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/controller/rest/resources/WadlResource.java
@@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.controller.rest.resources;
+
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+
+import javax.ws.rs.GET;
+import javax.ws.rs.Path;
+import javax.ws.rs.Produces;
+import javax.ws.rs.core.Context;
+import javax.ws.rs.core.Response;
+import javax.ws.rs.core.UriInfo;
+import javax.xml.bind.Marshaller;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import com.sun.jersey.server.wadl.WadlApplicationContext;
+import com.sun.jersey.spi.resource.Singleton;
+import com.sun.research.ws.wadl.Application;
+
+@Produces({"application/vnd.sun.wadl+xml", "application/xml"})
+@Singleton
+@Path("wadl")
+public class WadlResource {
+ 
+    private static final Log LOG = LogFactory.getLog(WadlResource.class);
+ 
+    private static final String XML_HEADERS = "com.sun.xml.bind.xmlHeaders";
+ 
+    private WadlApplicationContext wadlContext;
+ 
+    private Application application;
+ 
+    private byte[] wadlXmlRepresentation;
+ 
+    public WadlResource(@Context WadlApplicationContext wadlContext) {
+        this.wadlContext = wadlContext;
+        this.application = wadlContext.getApplication();
+    }
+
+    /**
+     * Display REST API in human readable format
+     * @response.representation.200.doc       This page.
+     * @response.representation.200.mediaType application/xml
+     * @param uriInfo
+     * @return WADL XML Representation of REST API
+     */
+    @GET
+    public synchronized Response getWadl(@Context UriInfo uriInfo) {
+        if (wadlXmlRepresentation == null) {
+            if (application.getResources().getBase() == null) {
+                application.getResources().setBase(uriInfo.getBaseUri().toString());
+            }
+            try {
+                final Marshaller marshaller = wadlContext.getJAXBContext().createMarshaller();
+                marshaller.setProperty(XML_HEADERS, "<?xml-stylesheet type='text/xsl' href='/wadl.xsl'?>");
+                marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
+                final ByteArrayOutputStream os = new ByteArrayOutputStream();
+                marshaller.marshal(application, os);
+                wadlXmlRepresentation = os.toByteArray();
+                os.close();
+            } catch (Exception e) {
+                LOG.warn("Could not marshal wadl Application.", e);
+                return javax.ws.rs.core.Response.ok(application).build();
+            }
+        }
+        return Response.ok(new ByteArrayInputStream(wadlXmlRepresentation)).build();
+    }
+}
diff --git a/controller/src/main/java/org/apache/ambari/datastore/DataStore.java b/controller/src/main/java/org/apache/ambari/datastore/DataStore.java
new file mode 100644
index 0000000..9d92d0f
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/datastore/DataStore.java
@@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.datastore;
+
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.ambari.common.rest.entities.ClusterState;
+import org.apache.ambari.common.rest.entities.Stack;
+import org.apache.ambari.common.rest.entities.ClusterDefinition;
+
+/**
+ * Abstraction that stores the Ambari state.
+ */
+public interface DataStore {
+    
+    /**
+     * Shutdown the data store. It will stop the data store service.
+     */
+    public void close () throws IOException;
+    
+    /**
+     * Check if cluster exists
+     */
+    public boolean clusterExists(String clusterName) throws IOException;
+    
+    /**
+     * Get Latest cluster Revision Number
+     */
+    public int retrieveLatestClusterRevisionNumber(String clusterName) throws IOException;
+    
+    /**
+     * Store the cluster state
+     */
+    public void storeClusterState (String clusterName, ClusterState clsState) throws IOException;
+    
+    /**
+     * Retrieve the cluster state
+     */
+    public ClusterState retrieveClusterState (String clusterName) throws IOException;
+
+    /**
+     * Store the cluster definition.
+     *
+     * Return the revision number for the new or updated cluster definition.
+     * If the cluster revision is not null, then check that the existing revision being updated in the store is the same.
+     */
+    public int storeClusterDefinition (ClusterDefinition clusterDef) throws IOException;
+    
+    /**
+     * Retrieve the cluster definition given the cluster name and revision number.
+     * If the revision number is less than zero, then return the latest cluster definition
+     */
+    public ClusterDefinition retrieveClusterDefinition (String clusterName, int revision) throws IOException;
+    
+    /**
+     * Retrieve list of existing cluster names
+     */
+    public List<String> retrieveClusterList () throws IOException;
+      
+    /**
+     * Delete cluster entry
+     */
+    public void deleteCluster (String clusterName) throws IOException;
+    
+    /**
+     * Store the stack configuration.
+     * If the stack does not exist, create a new one; else create a new revision.
+     * Return the new stack revision. 
+     */
+    public int storeStack (String stackName, Stack stack) throws IOException;
+    
+    /**
+     * Retrieve the stack with the specified revision number.
+     * If the revision number is less than zero, then return the latest stack revision
+     */
+    public Stack retrieveStack (String stackName, int revision) throws IOException;
+    
+    /**
+     * Retrieve list of stack names
+     * @return the list of stack names
+     * @throws IOException
+     */
+    public List<String> retrieveStackList() throws IOException;
+    
+    /**
+     * Get Latest stack Revision Number
+     */
+    public int retrieveLatestStackRevisionNumber(String stackName) throws IOException;
+    
+    /**
+     * Delete stack
+     */
+    public void deleteStack(String stackName) throws IOException;
+
+    /**
+     * Check if stack exists
+     */
+    boolean stackExists(String stackName) throws IOException;
+    
+}
diff --git a/controller/src/main/java/org/apache/ambari/datastore/DataStoreFactory.java b/controller/src/main/java/org/apache/ambari/datastore/DataStoreFactory.java
new file mode 100644
index 0000000..2acd405
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/datastore/DataStoreFactory.java
@@ -0,0 +1,48 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.datastore;
+
+import java.io.IOException;
+import java.net.URI;
+
+import org.apache.ambari.configuration.Configuration;
+
+import com.google.inject.Inject;
+
+public class DataStoreFactory {
+
+  private final DataStore ds;
+  
+  @Inject
+  DataStoreFactory(Configuration conf) throws IOException {
+    URI uri = conf.getDataStore();
+    String scheme = uri.getScheme();
+    if ("zk".equals(scheme)) {
+      String auth = uri.getAuthority();
+      ds = new ZookeeperDS(auth);
+    } else if ("test".equals(scheme)) {
+      ds = new StaticDataStore();
+    } else {
+      throw new IllegalArgumentException("Unknown data store " + scheme);
+    }
+  }
+  
+  public DataStore getInstance() {
+    return ds;
+  }
+}
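
The factory dispatches purely on the scheme of the configured data-store URI,
and the authority becomes the ZooKeeper connect string. A small sketch of that
parsing (the URI value is illustrative, not a required default):

    import java.net.URI;

    class DataStoreUriSketch {
      public static void main(String[] args) {
        // "zk://zk1:2181" would select ZookeeperDS with authority "zk1:2181";
        // a "test" scheme selects the in-memory StaticDataStore.
        URI uri = URI.create("zk://zk1:2181");
        assert "zk".equals(uri.getScheme());
        assert "zk1:2181".equals(uri.getAuthority());
      }
    }
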
diff --git a/controller/src/main/java/org/apache/ambari/datastore/StaticDataStore.java b/controller/src/main/java/org/apache/ambari/datastore/StaticDataStore.java
new file mode 100644
index 0000000..9ae9f16
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/datastore/StaticDataStore.java
@@ -0,0 +1,227 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.datastore;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.JAXBException;
+import javax.xml.bind.Unmarshaller;
+
+import org.apache.ambari.common.rest.entities.ClusterDefinition;
+import org.apache.ambari.common.rest.entities.ClusterState;
+import org.apache.ambari.common.rest.entities.Stack;
+
+import com.google.inject.Singleton;
+import com.sun.jersey.api.json.JSONJAXBContext;
+import com.sun.jersey.api.json.JSONUnmarshaller;
+
+/**
+ * A data store that uses in-memory maps and some preset values for testing.
+ */
+@Singleton
+class StaticDataStore implements DataStore {
+
+  private Map<String, List<ClusterDefinition>> clusters = 
+      new TreeMap<String, List<ClusterDefinition>>();
+
+  private Map<String, List<Stack>> stacks =
+      new TreeMap<String, List<Stack>>();
+  
+  private Map<String, ClusterState> clusterStates =
+      new TreeMap<String, ClusterState>();
+
+  private static final JAXBContext jaxbContext;
+  private static final JSONJAXBContext jsonContext;
+  static {
+    try {
+      jaxbContext = JAXBContext.
+          newInstance("org.apache.ambari.common.rest.entities");
+      jsonContext = 
+          new JSONJAXBContext("org.apache.ambari.common.rest.entities");
+    } catch (JAXBException e) {
+      throw new RuntimeException("Can't create jaxb context", e);
+    }
+  }
+
+  StaticDataStore() throws IOException {
+    /*
+    addStackFile("org/apache/ambari/stacks/hadoop-security-0.xml", 
+                 "hadoop-security");
+    addStackFile("org/apache/ambari/stacks/cluster123-0.xml", "cluster123");
+    addStackFile("org/apache/ambari/stacks/cluster124-0.xml", "cluster124");
+    */
+    addStackJsonFile("org/apache/ambari/stacks/puppet1-0.json", "puppet1");
+    addStackJsonFile("org/apache/ambari/stacks/horton-0.json", "horton");
+    addClusterFile("org/apache/ambari/clusters/cluster123.xml", "cluster123");
+  }
+
+  private void addStackFile(String filename, 
+                            String stackName) throws IOException {
+    InputStream in = ClassLoader.getSystemResourceAsStream(filename);
+    if (in == null) {
+      throw new IllegalArgumentException("Can't find resource for " + filename);
+    }
+    try {
+      Unmarshaller um = jaxbContext.createUnmarshaller();
+      Stack stack = (Stack) um.unmarshal(in);
+      storeStack(stackName, stack);
+    } catch (JAXBException je) {
+      throw new IOException("Can't parse " + filename, je);
+    }
+  }
+
+  private void addStackJsonFile(String filename, 
+                                String stackName) throws IOException {
+    InputStream in = ClassLoader.getSystemResourceAsStream(filename);
+    if (in == null) {
+      throw new IllegalArgumentException("Can't find resource for " + filename);
+    }
+    try {
+      JSONUnmarshaller um = jsonContext.createJSONUnmarshaller();
+      Stack stack = um.unmarshalFromJSON(in, Stack.class);
+      storeStack(stackName, stack);
+    } catch (JAXBException je) {
+      throw new IOException("Can't parse " + filename, je);
+    }
+  }
+
+  private void addClusterFile(String filename,
+                              String clusterName) throws IOException {
+    InputStream in = ClassLoader.getSystemResourceAsStream(filename);
+    if (in == null) {
+      throw new IllegalArgumentException("Can't find resource for " + filename);
+    }
+    try {
+      Unmarshaller um = jaxbContext.createUnmarshaller();
+      ClusterDefinition cluster = (ClusterDefinition) um.unmarshal(in);
+      cluster.setName(clusterName);
+      storeClusterDefinition(cluster);
+    } catch (JAXBException je) {
+      throw new IOException("Can't parse " + filename, je);
+    }    
+  }
+
+  @Override
+  public void close() throws IOException {
+    // PASS
+  }
+
+  @Override
+  public boolean clusterExists(String clusterName) throws IOException {
+    return clusters.containsKey(clusterName);
+  }
+
+  @Override
+  public int retrieveLatestClusterRevisionNumber(String clusterName)
+      throws IOException {
+    return clusters.get(clusterName).size()-1;
+  }
+
+  @Override
+  public void storeClusterState(String clusterName, 
+                                ClusterState clsState) throws IOException {
+    clusterStates.put(clusterName, clsState);
+  }
+
+  @Override
+  public ClusterState retrieveClusterState(String clusterName)
+      throws IOException {
+    return clusterStates.get(clusterName);
+  }
+
+  @Override
+  public int storeClusterDefinition(ClusterDefinition clusterDef
+                                    ) throws IOException {
+    String name = clusterDef.getName();
+    List<ClusterDefinition> list = clusters.get(name);
+    if (list == null) {
+      list = new ArrayList<ClusterDefinition>();
+      clusters.put(name, list);
+    }
+    list.add(clusterDef);
+    return list.size() - 1;
+  }
+
+  @Override
+  public ClusterDefinition retrieveClusterDefinition(String clusterName,
+      int revision) throws IOException {
+    List<ClusterDefinition> history = clusters.get(clusterName);
+    // honor the interface contract: a negative revision means "latest"
+    if (revision < 0) {
+      revision = history.size() - 1;
+    }
+    return history.get(revision);
+  }
+
+  @Override
+  public List<String> retrieveClusterList() throws IOException {
+    return new ArrayList<String>(clusters.keySet());
+  }
+
+  @Override
+  public void deleteCluster(String clusterName) throws IOException {
+    clusters.remove(clusterName);
+  }
+
+  @Override
+  public int storeStack(String stackName, Stack stack) throws IOException {
+    List<Stack> list = stacks.get(stackName);
+    if (list == null) {
+      list = new ArrayList<Stack>();
+      stacks.put(stackName, list);
+    }
+    int index = list.size();
+    stack.setName(stackName);
+    stack.setRevision(Integer.toString(index));
+    list.add(stack);
+    return index;
+  }
+
+  @Override
+  public Stack retrieveStack(String stackName, 
+                             int revision) throws IOException {
+    List<Stack> history = stacks.get(stackName);
+    // the interface contract treats any negative revision as "latest"
+    if (revision < 0) {
+      revision = history.size() - 1;
+    }
+    return history.get(revision);
+  }
+
+  @Override
+  public List<String> retrieveStackList() throws IOException {
+    return new ArrayList<String>(stacks.keySet());
+  }
+
+  @Override
+  public int retrieveLatestStackRevisionNumber(String stackName
+                                               ) throws IOException {
+    return stacks.get(stackName).size() - 1;
+  }
+
+  @Override
+  public void deleteStack(String stackName) throws IOException {
+    stacks.remove(stackName);
+  }
+
+  @Override
+  public boolean stackExists(String stackName) throws IOException {
+    return stacks.containsKey(stackName);
+  }
+
+}
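
Because storeStack appends to a per-name list, revisions in this test store are
simply list indices. A sketch of the numbering (assumptions: the preset
classpath resources are present, Stack has a usable no-arg constructor, and the
code lives in the same package since the class is package-private):

    import java.io.IOException;
    import org.apache.ambari.common.rest.entities.Stack;

    class StaticDataStoreSketch {
      static void stackRevisions() throws IOException {
        StaticDataStore ds = new StaticDataStore(); // preloads puppet1, horton, cluster123
        int first = ds.storeStack("demo", new Stack());   // revision 0
        int second = ds.storeStack("demo", new Stack());  // revision 1
        assert second == ds.retrieveLatestStackRevisionNumber("demo");
      }
    }
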
diff --git a/controller/src/main/java/org/apache/ambari/datastore/ZookeeperDS.java b/controller/src/main/java/org/apache/ambari/datastore/ZookeeperDS.java
new file mode 100644
index 0000000..65079b8
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/datastore/ZookeeperDS.java
@@ -0,0 +1,440 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.datastore;
+
+import java.io.IOException;
+import java.util.List;
+
+import org.apache.ambari.common.rest.entities.ClusterDefinition;
+import org.apache.ambari.common.rest.entities.ClusterState;
+import org.apache.ambari.common.rest.entities.Stack;
+import org.apache.ambari.common.util.JAXBUtil;
+import org.apache.zookeeper.CreateMode;
+import org.apache.zookeeper.KeeperException;
+import org.apache.zookeeper.WatchedEvent;
+import org.apache.zookeeper.Watcher;
+import org.apache.zookeeper.ZooDefs.Ids;
+import org.apache.zookeeper.ZooKeeper;
+import org.apache.zookeeper.data.Stat;
+
+/**
+ * Implementation of the data store based on Zookeeper.
+ */
+class ZookeeperDS implements DataStore, Watcher {
+
+  private static final String ZOOKEEPER_ROOT_PATH="/ambari";
+  private static final String ZOOKEEPER_CLUSTERS_ROOT_PATH =
+      ZOOKEEPER_ROOT_PATH + "/clusters";
+  private static final String ZOOKEEPER_STACKS_ROOT_PATH = 
+      ZOOKEEPER_ROOT_PATH + "/stacks";
+
+  private ZooKeeper zk;
+  private String credential = null;
+  // written from the ZooKeeper event thread, read by the constructor's wait loop
+  private volatile boolean zkConnected = false;
+
+  ZookeeperDS(String authority) {
+    try {
+      /*
+       * Connect to ZooKeeper server
+       */
+      zk = new ZooKeeper(authority, 600000, this);
+      if (credential != null) {
+        zk.addAuthInfo("digest", credential.getBytes());
+      }
+
+      while (!this.zkConnected) {
+        System.out.println("Waiting for ZK connection!");
+        Thread.sleep(2000);
+      }
+
+      /*
+       * Create top level directories
+       */
+      createDirectory (ZOOKEEPER_ROOT_PATH, new byte[0], true);
+      createDirectory (ZOOKEEPER_CLUSTERS_ROOT_PATH, new byte[0], true);
+      createDirectory (ZOOKEEPER_STACKS_ROOT_PATH, new byte[0], true);
+    } catch (Exception e) {
+      e.printStackTrace();
+    }
+  }
+
+  @Override
+  public void close() throws IOException {
+    // PASS
+  }
+
+  @Override
+  public boolean clusterExists(String clusterName) throws IOException {
+    try {
+      if (zk.exists(ZOOKEEPER_CLUSTERS_ROOT_PATH+"/"+clusterName, false) 
+            == null) {
+        return false;
+      }
+    } catch (Exception e) {
+      throw new IOException(e);
+    }
+    return true;
+  }
+
+  @Override
+  public synchronized int storeClusterDefinition(ClusterDefinition clusterDef
+      ) throws IOException {  
+    /*
+     * Update the cluster node
+     */
+    try {
+      Stat stat = new Stat();
+      String clusterPath = ZOOKEEPER_CLUSTERS_ROOT_PATH+"/" + 
+                           clusterDef.getName();
+      int newRev = 0;
+      String clusterRevisionPath = clusterPath+"/"+newRev;
+      String clusterLatestRevisionNumberPath = clusterPath + 
+          "/latestRevisionNumber";
+      if (zk.exists(clusterPath, false) == null) {
+        /* 
+         * create cluster path with revision 0, create cluster latest revision
+         * node storing the latest revision of cluster definition.
+         */
+        createDirectory (clusterPath, new byte[0], false);
+        createDirectory (clusterRevisionPath, 
+                         JAXBUtil.write(clusterDef), false);
+        createDirectory (clusterLatestRevisionNumberPath, 
+                         Integer.toString(newRev).getBytes(), false);
+      } else {
+        String latestRevision = 
+            new String (zk.getData(clusterLatestRevisionNumberPath, false, 
+                                   stat));
+        newRev = Integer.parseInt(latestRevision) + 1;
+        clusterRevisionPath = clusterPath + "/" + newRev;
+        /*
+         * If client passes the revision number of the checked out cluster 
+         * definition following code checks if you are updating the same version
+         * that you checked out.
+         */
+        if (clusterDef.getRevision() != null) {
+          if (!latestRevision.equals(clusterDef.getRevision())) {
+            throw new IOException ("Latest cluster definition does not match "+
+                                   "the one client intends to modify!");
+          }  
+        } 
+        createDirectory(clusterRevisionPath, JAXBUtil.write(clusterDef), false);
+        zk.setData(clusterLatestRevisionNumberPath, 
+                   Integer.toString(newRev).getBytes(), -1);
+      }
+      return newRev;
+    } catch (KeeperException e) {
+      throw new IOException (e);
+    } catch (InterruptedException e1) {
+      throw new IOException (e1);
+    }
+  }
+
+  @Override
+  public synchronized void storeClusterState(String clusterName, 
+                                             ClusterState clsState
+                                             ) throws IOException {
+    /*
+     * Update the cluster state
+     */
+    try {
+      String clusterStatePath = 
+          ZOOKEEPER_CLUSTERS_ROOT_PATH+"/"+clusterName+"/state";
+      if (zk.exists(clusterStatePath, false) == null) {
+        // create node for the cluster state
+        createDirectory (clusterStatePath, JAXBUtil.write(clsState), false);
+      } else {
+        zk.setData(clusterStatePath, JAXBUtil.write(clsState), -1);
+      }
+    } catch (KeeperException e) {
+      throw new IOException (e);
+    } catch (InterruptedException e1) {
+      throw new IOException (e1);
+    }
+
+  }
+
+  @Override
+  public ClusterDefinition retrieveClusterDefinition(String clusterName, 
+                                             int revision) throws IOException {
+    try {
+      Stat stat = new Stat();
+      String clusterRevisionPath;
+      if (revision < 0) {   
+        String clusterLatestRevisionNumberPath = 
+           ZOOKEEPER_CLUSTERS_ROOT_PATH+"/"+clusterName+"/latestRevisionNumber";
+        String latestRevisionNumber = 
+          new String (zk.getData(clusterLatestRevisionNumberPath, false, stat));
+        clusterRevisionPath = 
+          ZOOKEEPER_CLUSTERS_ROOT_PATH+"/"+clusterName+"/"+latestRevisionNumber;       
+      } else {
+        clusterRevisionPath = 
+            ZOOKEEPER_CLUSTERS_ROOT_PATH+"/"+clusterName+"/"+revision;
+      }
+      ClusterDefinition cdef = JAXBUtil.read(zk.getData(clusterRevisionPath, 
+          false, stat), ClusterDefinition.class); 
+      return cdef;
+    } catch (Exception e) {
+      throw new IOException (e);
+    }
+  }
+
+  @Override
+  public ClusterState retrieveClusterState(String clusterName
+                                           ) throws IOException {
+    try {
+      Stat stat = new Stat();
+      String clusterStatePath = 
+          ZOOKEEPER_CLUSTERS_ROOT_PATH+"/"+clusterName+"/state";
+      ClusterState clsState = 
+          JAXBUtil.read(zk.getData(clusterStatePath, false, stat), 
+                        ClusterState.class); 
+      return clsState;
+    } catch (Exception e) {
+      throw new IOException (e);
+    }
+  }
+
+  @Override
+  public int retrieveLatestClusterRevisionNumber(String clusterName
+                                                 ) throws IOException {
+    int revisionNumber;
+    try {
+      Stat stat = new Stat();
+      String clusterLatestRevisionNumberPath = 
+          ZOOKEEPER_CLUSTERS_ROOT_PATH+"/"+clusterName+"/latestRevisionNumber";
+      String latestRevisionNumber = 
+          new String (zk.getData(clusterLatestRevisionNumberPath, false, stat));
+      revisionNumber = Integer.parseInt(latestRevisionNumber);
+    } catch (Exception e) {
+      throw new IOException (e);
+    }
+    return revisionNumber;
+  }
+
+  @Override
+  public List<String> retrieveClusterList() throws IOException {
+    try {
+      List<String> children = zk.getChildren(ZOOKEEPER_CLUSTERS_ROOT_PATH, 
+                                             false);
+      return children;
+    } catch (KeeperException e) {
+      throw new IOException (e);
+    } catch (InterruptedException e) {
+      throw new IOException (e);
+    }
+  }
+
+  @Override
+  public void deleteCluster(String clusterName) throws IOException {
+    String clusterPath = ZOOKEEPER_CLUSTERS_ROOT_PATH+"/"+clusterName;
+    List<String> children;
+    try {
+      children = zk.getChildren(clusterPath, false);
+      // Delete all the children and then the parent node
+      for (String child : children) {
+        try {
+          // getChildren returns bare child names, so rebuild the full path
+          zk.delete(clusterPath + "/" + child, -1);
+        } catch (KeeperException.NoNodeException ke) {
+          // child already removed; ignore
+        } catch (Exception e) { throw new IOException (e); }
+      }
+      zk.delete(clusterPath, -1);
+    } catch (KeeperException.NoNodeException ke) {
+      return;
+    } catch (Exception e) {
+      throw new IOException (e);
+    }
+  }
+
+  @Override
+  public int storeStack(String stackName, Stack stack) throws IOException {
+    try {
+      Stat stat = new Stat();
+      String stackPath = ZOOKEEPER_STACKS_ROOT_PATH+"/"+stackName;
+      int newRev = 0;
+      String stackRevisionPath = stackPath+"/"+newRev;
+      String stackLatestRevisionNumberPath = stackPath+"/latestRevisionNumber";
+      if (zk.exists(stackPath, false) == null) {
+        /* 
+         * create stack path with revision 0, create stack latest revision node
+         * to store the latest revision of stack definition.
+         */
+        createDirectory (stackPath, new byte[0], false);
+        stack.setRevision(Integer.toString(newRev));
+        createDirectory (stackRevisionPath, JAXBUtil.write(stack), false);
+        createDirectory (stackLatestRevisionNumberPath, 
+            Integer.toString(newRev).getBytes(), false);
+      } else {
+        String latestRevision = 
+            new String (zk.getData(stackLatestRevisionNumberPath, false, stat));
+        newRev = Integer.parseInt(latestRevision) + 1;
+        stackRevisionPath = stackPath + "/" + newRev;
+        /*
+         * TODO: like cluster definition client can pass optionally the checked 
+         * out version number
+         * Following code checks if you are updating the same version that you 
+         * checked out.
+         * if (stack.getRevision() != null) {
+         *   if (!latestRevision.equals(stack.getRevision())) {
+         *     throw new IOException ("Latest cluster definition does not " + 
+         *                           "match the one client intends to modify!");
+         *   }  
+         * } 
+         */
+        stack.setRevision(Integer.toString(newRev));
+        createDirectory (stackRevisionPath, JAXBUtil.write(stack), false);
+        zk.setData(stackLatestRevisionNumberPath, 
+                   Integer.toString(newRev).getBytes(), -1);
+      }
+      return newRev;
+    } catch (KeeperException e) {
+      throw new IOException (e);
+    } catch (InterruptedException e1) {
+      throw new IOException (e1);
+    }
+  }
+
+  @Override
+  public Stack retrieveStack(String stackName, int revision)
+      throws IOException {
+    try {
+      Stat stat = new Stat();
+      String stackRevisionPath;
+      if (revision < 0) {   
+        String stackLatestRevisionNumberPath = 
+            ZOOKEEPER_STACKS_ROOT_PATH+"/"+stackName+"/latestRevisionNumber";
+        String latestRevisionNumber = 
+            new String (zk.getData(stackLatestRevisionNumberPath, false, stat));
+        stackRevisionPath = 
+            ZOOKEEPER_STACKS_ROOT_PATH+"/"+stackName+"/"+latestRevisionNumber;       
+      } else {
+        stackRevisionPath = 
+            ZOOKEEPER_STACKS_ROOT_PATH+"/"+stackName+"/"+revision;
+      }
+      Stack stack = JAXBUtil.read(zk.getData(stackRevisionPath, false, stat), 
+          Stack.class); 
+      return stack;
+    } catch (Exception e) {
+      throw new IOException (e);
+    }
+  }
+
+  @Override
+  public List<String> retrieveStackList() throws IOException {
+    try {
+      List<String> children = zk.getChildren(ZOOKEEPER_STACKS_ROOT_PATH, false);
+      return children;
+    } catch (KeeperException e) {
+      throw new IOException (e);
+    } catch (InterruptedException e) {
+      throw new IOException (e);
+    }
+  }
+
+  @Override
+  public int retrieveLatestStackRevisionNumber(String stackName
+                                               ) throws IOException { 
+    int revisionNumber;
+    try {
+      Stat stat = new Stat();
+      String stackLatestRevisionNumberPath = 
+          ZOOKEEPER_STACKS_ROOT_PATH+"/"+stackName+"/latestRevisionNumber";
+      String latestRevisionNumber = 
+          new String (zk.getData(stackLatestRevisionNumberPath, false, stat));
+      revisionNumber = Integer.parseInt(latestRevisionNumber);
+    } catch (Exception e) {
+      throw new IOException (e);
+    }
+    return revisionNumber;
+  }
+
+  @Override
+  public void deleteStack(String stackName) throws IOException {
+    String stackPath = ZOOKEEPER_STACKS_ROOT_PATH+"/"+stackName;
+    List<String> children;
+    try {
+      children = zk.getChildren(stackPath, false);
+      // Delete all the children and then the parent node
+      for (String child : children) {
+        try {
+          // getChildren returns bare child names, so rebuild the full path
+          zk.delete(stackPath + "/" + child, -1);
+        } catch (KeeperException.NoNodeException ke) {
+          // child already removed; ignore
+        } catch (Exception e) { throw new IOException (e); }
+      }
+      zk.delete(stackPath, -1);
+    } catch (KeeperException.NoNodeException ke) {
+      return;
+    } catch (Exception e) {
+      throw new IOException (e);
+    }
+  }
+
+  @Override
+  public boolean stackExists(String stackName) throws IOException {
+    try {
+      if (zk.exists(ZOOKEEPER_STACKS_ROOT_PATH+"/"+stackName, false) == null) {
+        return false;
+      }
+    } catch (Exception e) {
+      throw new IOException(e);
+    }
+    return true;
+  }
+
+  @Override
+  public void process(WatchedEvent event) {
+    if (event.getType() == Event.EventType.None) {
+      // We are being told that the state of the
+      // connection has changed
+      switch (event.getState()) {
+      case SyncConnected:
+        // Nothing else to do here: watches are automatically
+        // re-registered with the server, and any watches triggered
+        // while the client was disconnected will be delivered in order
+        this.zkConnected = true;
+        break;
+      case Expired:
+        // The session has expired; reconnection handling is still TODO
+        break;
+      }
+    }
+
+  }
+
+  private void createDirectory(String path, byte[] initialData, 
+                               boolean ignoreIfExists
+                               ) throws KeeperException, InterruptedException {
+    try {
+      zk.create(path, initialData, Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
+      if (credential != null) {
+        zk.setACL(path, Ids.CREATOR_ALL_ACL, -1);
+      }
+      System.out.println("Created path: <" + path + ">");
+    } catch (KeeperException.NodeExistsException e) {
+      if (!ignoreIfExists) {
+        System.out.println("Path already exists <"+path+">");
+        throw e;
+      }
+    } catch (KeeperException.AuthFailedException e) {
+      System.out.println("Failed to authenticate for path <"+path+">");
+      throw e;
+    }
+  }
+}
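
For reference, the znode layout this class creates and reads:

    /ambari
      /clusters/<name>/<rev>                   serialized ClusterDefinition
      /clusters/<name>/state                   serialized ClusterState
      /clusters/<name>/latestRevisionNumber    latest revision, as a decimal string
      /stacks/<name>/<rev>                     serialized Stack
      /stacks/<name>/latestRevisionNumber      latest revision, as a decimal string
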
diff --git a/controller/src/main/java/org/apache/ambari/resource/statemachine/ClusterEvent.java b/controller/src/main/java/org/apache/ambari/resource/statemachine/ClusterEvent.java
new file mode 100644
index 0000000..64e7b99
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/resource/statemachine/ClusterEvent.java
@@ -0,0 +1,42 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.ambari.resource.statemachine;
+
+import org.apache.ambari.event.AbstractEvent;
+
+public class ClusterEvent extends AbstractEvent<ClusterEventType> {
+  private ClusterFSM cluster;
+  private ServiceFSM service;
+  public ClusterEvent(ClusterEventType type, ClusterFSM cluster) {
+    super(type);
+    this.cluster = cluster;
+  }
+  //Need this to create an event that has details about the service
+  //that moved into a different state
+  public ClusterEvent(ClusterEventType type, ClusterFSM cluster, ServiceFSM service) {
+    super(type);
+    this.cluster = cluster;
+    this.service = service;
+  }
+  public ClusterFSM getCluster() {
+    return cluster;
+  }
+  public ServiceFSM getServiceCausingTransition() {
+    return service;
+  }
+}
diff --git a/controller/src/main/java/org/apache/ambari/resource/statemachine/ClusterEventType.java b/controller/src/main/java/org/apache/ambari/resource/statemachine/ClusterEventType.java
new file mode 100644
index 0000000..f07859d
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/resource/statemachine/ClusterEventType.java
@@ -0,0 +1,46 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.ambari.resource.statemachine;
+
+public enum ClusterEventType {
+  
+  //Producer:Client, Cluster
+  START,
+
+  //Producer:Client, Cluster
+  STOP,
+
+  //Producer: Service
+  START_SUCCESS,
+  
+  //Producer: Service
+  START_FAILURE,
+  
+  //Producer: Service
+  STOP_SUCCESS,
+  
+  //Producer: Service
+  STOP_FAILURE,
+  
+  //Producer: Client
+  RELEASE_NODES,
+  
+  //Producer: Client
+  ADD_NODES
+  
+}
\ No newline at end of file
diff --git a/controller/src/main/java/org/apache/ambari/resource/statemachine/ClusterFSM.java b/controller/src/main/java/org/apache/ambari/resource/statemachine/ClusterFSM.java
new file mode 100644
index 0000000..bfd897b
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/resource/statemachine/ClusterFSM.java
@@ -0,0 +1,29 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.ambari.resource.statemachine;
+
+import java.util.List;
+import java.util.Map;
+
+public interface ClusterFSM {
+  public List<ServiceFSM> getServices();
+  public Map<String, String> getServiceStates();
+  public String getClusterState();
+  public void activate();
+  public void deactivate();
+}
diff --git a/controller/src/main/java/org/apache/ambari/resource/statemachine/ClusterImpl.java b/controller/src/main/java/org/apache/ambari/resource/statemachine/ClusterImpl.java
new file mode 100644
index 0000000..b71e579
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/resource/statemachine/ClusterImpl.java
@@ -0,0 +1,275 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.ambari.resource.statemachine;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Date;
+import java.util.EnumSet;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.locks.Lock;
+import java.util.concurrent.locks.ReadWriteLock;
+import java.util.concurrent.locks.ReentrantReadWriteLock;
+
+import javax.xml.datatype.DatatypeConfigurationException;
+
+import org.apache.ambari.common.rest.entities.ClusterState;
+import org.apache.ambari.common.state.MultipleArcTransition;
+import org.apache.ambari.common.state.SingleArcTransition;
+import org.apache.ambari.common.state.StateMachine;
+import org.apache.ambari.common.state.StateMachineFactory;
+import org.apache.ambari.components.ComponentPlugin;
+import org.apache.ambari.controller.Cluster;
+import org.apache.ambari.controller.Util;
+import org.apache.ambari.event.EventHandler;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+
+import com.google.inject.Inject;
+
+public class ClusterImpl implements ClusterFSM, EventHandler<ClusterEvent> {
+
+  /* The state machine for the cluster looks like:
+   * INACTIVE or FAIL --START--> STARTING --START_SUCCESS from all services--> ACTIVE
+   *                                --START_FAILURE from any service--> FAIL
+   * ACTIVE or FAIL --STOP--> STOPPING --STOP_SUCCESS from all services--> INACTIVE
+   *                             --STOP_FAILURE from any service--> FAIL
+   */
+
+  private static final StateMachineFactory
+  <ClusterImpl,ClusterStateFSM,ClusterEventType,ClusterEvent> stateMachineFactory 
+          = new StateMachineFactory<ClusterImpl,ClusterStateFSM,ClusterEventType,
+          ClusterEvent>(ClusterStateFSM.INACTIVE)
+  
+  .addTransition(ClusterStateFSM.INACTIVE, ClusterStateFSM.STARTING, 
+      ClusterEventType.START, new StartClusterTransition())
+
+  .addTransition(ClusterStateFSM.FAIL, ClusterStateFSM.STARTING, 
+      ClusterEventType.START, new StartClusterTransition())
+      
+  .addTransition(ClusterStateFSM.STARTING, EnumSet.of(ClusterStateFSM.ACTIVE, 
+      ClusterStateFSM.STARTING), ClusterEventType.START_SUCCESS, 
+      new ServiceStartedTransition())
+      
+  .addTransition(ClusterStateFSM.STARTING, ClusterStateFSM.FAIL, 
+      ClusterEventType.START_FAILURE)
+      
+  .addTransition(ClusterStateFSM.ACTIVE, ClusterStateFSM.STOPPING, 
+      ClusterEventType.STOP, new StopClusterTransition())
+      
+  .addTransition(ClusterStateFSM.FAIL, ClusterStateFSM.STOPPING, 
+      ClusterEventType.STOP, new StopClusterTransition())
+      
+  .addTransition(ClusterStateFSM.STOPPING, EnumSet.of(ClusterStateFSM.INACTIVE,
+      ClusterStateFSM.STOPPING), ClusterEventType.STOP_SUCCESS,
+      new ServiceStoppedTransition())
+      
+  .addTransition(ClusterStateFSM.STOPPING, ClusterStateFSM.FAIL, 
+      ClusterEventType.STOP_FAILURE)
+      
+  .addTransition(ClusterStateFSM.INACTIVE, ClusterStateFSM.INACTIVE, 
+      ClusterEventType.STOP_SUCCESS)
+      
+  .installTopology();
+  
+  private List<ServiceFSM> services;
+  private Cluster cls;
+  private StateMachine<ClusterStateFSM, ClusterEventType, ClusterEvent> 
+          stateMachine;
+  private Lock readLock;
+  private Lock writeLock;
+  private Iterator<ServiceFSM> iterator;
+  private static final Log LOG = LogFactory.getLog(ClusterImpl.class);
+  private static StateMachineInvokerInterface stateMachineInvoker;
+  @Inject
+  public static void setInvoker(StateMachineInvokerInterface sm) {
+    stateMachineInvoker = sm;
+  }
+  public ClusterImpl(Cluster cluster, int revision) throws IOException {
+    ReadWriteLock readWriteLock = new ReentrantReadWriteLock();
+    this.readLock = readWriteLock.readLock();
+    this.writeLock = readWriteLock.writeLock();
+    this.stateMachine = stateMachineFactory.make(this);
+    List<ServiceFSM> serviceImpls = new ArrayList<ServiceFSM>();
+    for (String service :
+      cluster.getClusterDefinition(revision).getEnabledServices()) {
+      if(hasActiveRoles(cluster, service)){
+        ServiceImpl serviceImpl = new ServiceImpl(
+            cluster.getComponentDefinition(service).getActiveRoles(), 
+            this, 
+            service);
+        
+        serviceImpls.add(serviceImpl);
+      }
+    }
+    this.cls = cluster;
+    this.services = serviceImpls;
+  }
+  
+  private static boolean hasActiveRoles(Cluster cluster, String serviceName)
+      throws IOException {
+    ComponentPlugin plugin = cluster.getComponentDefinition(serviceName);
+    String[] roles = plugin.getActiveRoles();
+    return roles.length > 0;
+  }
+  
+  public ClusterStateFSM getState() {
+    return stateMachine.getCurrentState();
+  }
+  
+  @Override
+  public void handle(ClusterEvent event) {
+    getStateMachine().doTransition(event.getType(), event);
+  }
+
+  @Override
+  public List<ServiceFSM> getServices() {
+    return services;
+  }
+  
+  public StateMachine<ClusterStateFSM, ClusterEventType, ClusterEvent> 
+      getStateMachine() {
+    return stateMachine;
+  }
+  
+  private ServiceFSM getFirstService() {
+    //this call should reset the iterator
+    iterator = services.iterator();
+    if (iterator.hasNext()) {
+      return iterator.next();
+    }
+    return null;
+  }
+  
+  private ServiceFSM getNextService() {
+    if (iterator.hasNext()) {
+      return iterator.next();
+    }
+    return null;
+  }
+  
+  @Override
+  public String getClusterState() {
+    return getState().toString();
+  }
+  
+  static class StartClusterTransition implements 
+  SingleArcTransition<ClusterImpl, ClusterEvent>  {
+
+    @Override
+    public void transition(ClusterImpl operand, ClusterEvent event) {
+      ServiceFSM service = operand.getFirstService();
+      if (service != null) {
+        stateMachineInvoker.getAMBARIEventHandler().handle(
+            new ServiceEvent(ServiceEventType.START, service));
+      }
+    }
+    
+  }
+  
+  static class StopClusterTransition implements
+  SingleArcTransition<ClusterImpl, ClusterEvent>  {
+    
+    @Override
+    public void transition(ClusterImpl operand, ClusterEvent event) {
+      //TODO: do it in the reverse order of startup
+      ServiceFSM service = operand.getFirstService();
+      if (service != null) {
+        stateMachineInvoker.getAMBARIEventHandler().handle(
+            new ServiceEvent(ServiceEventType.STOP, service));
+      }
+    }
+  }
+  
+  static class ServiceStoppedTransition implements
+  MultipleArcTransition<ClusterImpl, ClusterEvent, ClusterStateFSM> {
+
+    @Override
+    public ClusterStateFSM transition(ClusterImpl operand, ClusterEvent event) {
+      //check whether all services stopped, and if not remain in the STOPPING
+      //state, else move to the INACTIVE state
+      ServiceFSM service = operand.getNextService();
+      if (service != null) {
+        stateMachineInvoker.getAMBARIEventHandler().handle(new ServiceEvent(
+            ServiceEventType.STOP, service));
+        return ClusterStateFSM.STOPPING;
+      }
+      operand.updateClusterState(ClusterState.CLUSTER_STATE_INACTIVE);
+      return ClusterStateFSM.INACTIVE;
+    }
+    
+  }
+  
+  static class ServiceStartedTransition implements 
+  MultipleArcTransition<ClusterImpl, ClusterEvent, ClusterStateFSM>  {
+    @Override
+    public ClusterStateFSM transition(ClusterImpl operand, ClusterEvent event){
+      //check whether all services started, and if not remain in the STARTING
+      //state, else move to the ACTIVE state
+      ServiceFSM service = operand.getNextService();
+      if (service != null) {
+        stateMachineInvoker.getAMBARIEventHandler().handle(new ServiceEvent(
+            ServiceEventType.START, service));
+        return ClusterStateFSM.STARTING;
+      }
+      operand.updateClusterState(ClusterState.CLUSTER_STATE_ACTIVE);
+      return ClusterStateFSM.ACTIVE;
+    }
+    
+  }
+
+  @Override
+  public Map<String, String> getServiceStates() {
+    Map<String, String> serviceStateMap = new HashMap<String,String>();
+    for (ServiceFSM s : services) {
+      serviceStateMap.put(s.getServiceName(), s.getServiceState().toString());
+    }
+    return serviceStateMap;
+  }
+
+  @Override
+  public void activate() {
+    stateMachineInvoker.getAMBARIEventHandler().handle(
+        new ClusterEvent(ClusterEventType.START, this));
+  }
+
+  @Override
+  public void deactivate() {
+    stateMachineInvoker.getAMBARIEventHandler().handle(
+        new ClusterEvent(ClusterEventType.STOP, this));
+  }
+  
+  private void updateClusterState(String newState) {
+    try {
+      ClusterState cs = this.cls.getClusterState();
+      cs.setLastUpdateTime(Util.getXMLGregorianCalendar(new Date()));
+      cs.setState(newState);
+      this.cls.updateClusterState(cs);
+    } catch (IOException e) {
+      /*
+       * TODO: Should we bring down the controller?
+       */
+      System.out.println("Unable to update/persist the cluster state change. Shutting down the controller!");
+      e.printStackTrace();
+      System.exit(-1);
+    }
+  }
+}
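
Reading the transition table above end to end, starting a two-service cluster
proceeds as follows (event producer in parentheses):

    INACTIVE --START (client)---------------> STARTING   StartClusterTransition starts the first service
    STARTING --START_SUCCESS (service 1)----> STARTING   ServiceStartedTransition starts the next service
    STARTING --START_SUCCESS (service 2)----> ACTIVE     no services left; state persisted via updateClusterState
    STARTING --START_FAILURE (any service)--> FAIL
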
diff --git a/controller/src/main/java/org/apache/ambari/resource/statemachine/ClusterStateFSM.java b/controller/src/main/java/org/apache/ambari/resource/statemachine/ClusterStateFSM.java
new file mode 100644
index 0000000..64aac64
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/resource/statemachine/ClusterStateFSM.java
@@ -0,0 +1,22 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.ambari.resource.statemachine;
+
+public enum ClusterStateFSM {
+  INACTIVE, STARTING, ACTIVE, FAIL, STOPPING, 
+}
\ No newline at end of file
diff --git a/controller/src/main/java/org/apache/ambari/resource/statemachine/FSMDriver.java b/controller/src/main/java/org/apache/ambari/resource/statemachine/FSMDriver.java
new file mode 100644
index 0000000..2d444fe
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/resource/statemachine/FSMDriver.java
@@ -0,0 +1,70 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.ambari.resource.statemachine;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.ambari.controller.Cluster;
+
+import com.google.inject.Singleton;
+
+@Singleton
+public class FSMDriver implements FSMDriverInterface {
+  private Map<String, ClusterFSM> clusters = 
+      Collections.synchronizedMap(new HashMap<String,ClusterFSM>());
+  @Override
+  public ClusterFSM createCluster(Cluster cluster, int revision) 
+      throws IOException {
+    ClusterFSM clusterFSM = new ClusterImpl(cluster, revision);
+    clusters.put(cluster.getName(), clusterFSM);
+    return clusterFSM;
+  }
+  @Override
+  public void startCluster(String clusterId) {
+    ClusterFSM clusterFSM = clusters.get(clusterId);
+    if (clusterFSM != null) {
+      clusterFSM.activate();
+    }
+  }
+  @Override
+  public void stopCluster(String clusterId) {
+    ClusterFSM clusterFSM = clusters.get(clusterId);
+    if (clusterFSM != null) {
+      clusterFSM.deactivate();
+    }
+  }
+  @Override
+  public String getClusterState(String clusterId,
+      long clusterDefinitionRev) {
+    ClusterFSM clusterFSM = clusters.get(clusterId);
+    if (clusterFSM != null) {
+      return clusterFSM.getClusterState();
+    }
+    return null;
+  }
+
+  @Override
+  public ClusterFSM getFSMClusterInstance(String clusterId) {
+    return clusters.get(clusterId);
+  }
+
+}
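
A minimal sketch of how a caller drives the FSM layer (it assumes an injected
FSMDriverInterface and an existing Cluster; the method name is illustrative):

    import java.io.IOException;
    import org.apache.ambari.controller.Cluster;

    class FSMDriverSketch {
      static void bringUp(FSMDriverInterface driver, Cluster cluster, int rev)
          throws IOException {
        // registers the cluster FSM under cluster.getName()
        ClusterFSM fsm = driver.createCluster(cluster, rev);
        // fires ClusterEventType.START via ClusterImpl.activate()
        driver.startCluster(cluster.getName());
        String state = driver.getClusterState(cluster.getName(), rev);
        assert fsm != null && state != null;
      }
    }
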
diff --git a/controller/src/main/java/org/apache/ambari/resource/statemachine/FSMDriverInterface.java b/controller/src/main/java/org/apache/ambari/resource/statemachine/FSMDriverInterface.java
new file mode 100644
index 0000000..843ed81
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/resource/statemachine/FSMDriverInterface.java
@@ -0,0 +1,41 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.ambari.resource.statemachine;
+
+import java.io.IOException;
+
+import org.apache.ambari.controller.Cluster;
+
+import com.google.inject.ImplementedBy;
+
+@ImplementedBy(FSMDriver.class)
+public interface FSMDriverInterface {
+  public ClusterFSM createCluster(Cluster cluster, int revision) 
+      throws IOException;
+  
+  public void startCluster(String clusterId);
+  
+  public void stopCluster(String clusterId);
+  
+  public ClusterFSM getFSMClusterInstance(String clusterId);
+  
+  public String getClusterState(String clusterId,
+      long clusterDefinitionRev);
+
+}
diff --git a/controller/src/main/java/org/apache/ambari/resource/statemachine/LifeCycle.java b/controller/src/main/java/org/apache/ambari/resource/statemachine/LifeCycle.java
new file mode 100644
index 0000000..aae4953
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/resource/statemachine/LifeCycle.java
@@ -0,0 +1,28 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.ambari.resource.statemachine;
+
+/**
+ * All participants have at least two states -
+ * ACTIVE, INACTIVE
+ * 
+ */
+public interface LifeCycle {
+  public void activate();
+  public void deactivate();
+}
diff --git a/controller/src/main/java/org/apache/ambari/resource/statemachine/RoleEvent.java b/controller/src/main/java/org/apache/ambari/resource/statemachine/RoleEvent.java
new file mode 100644
index 0000000..9eac200
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/resource/statemachine/RoleEvent.java
@@ -0,0 +1,34 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.ambari.resource.statemachine;
+
+import org.apache.ambari.event.AbstractEvent;
+
+
+public class RoleEvent extends AbstractEvent<RoleEventType> {
+  private final RoleFSM role;
+  public RoleEvent(RoleEventType eventType, RoleFSM role) {
+    super (eventType);
+    this.role = role;
+  }
+  
+  public RoleFSM getRole() {
+    return role;
+  }
+
+}
diff --git a/controller/src/main/java/org/apache/ambari/resource/statemachine/RoleEventType.java b/controller/src/main/java/org/apache/ambari/resource/statemachine/RoleEventType.java
new file mode 100644
index 0000000..f0b5ff5
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/resource/statemachine/RoleEventType.java
@@ -0,0 +1,40 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.ambari.resource.statemachine;
+
+public enum RoleEventType {
+  
+  //Producer:Client, Cluster
+  START,
+
+  //Producer:Client, Cluster
+  STOP,
+
+  //Producer: Service
+  START_SUCCESS,
+  
+  //Producer: Service
+  START_FAILURE,
+  
+  //Producer: Service
+  STOP_SUCCESS,
+  
+  //Producer: Service
+  STOP_FAILURE,
+  
+}
diff --git a/controller/src/main/java/org/apache/ambari/resource/statemachine/RoleFSM.java b/controller/src/main/java/org/apache/ambari/resource/statemachine/RoleFSM.java
new file mode 100644
index 0000000..3d47146
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/resource/statemachine/RoleFSM.java
@@ -0,0 +1,36 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.ambari.resource.statemachine;
+
+public interface RoleFSM {
+  
+  public RoleState getRoleState();
+  
+  public String getRoleName();
+  
+  public ServiceFSM getAssociatedService();
+  
+  public boolean shouldStop();
+  
+  public boolean shouldStart();
+
+  public void activate();
+  
+  public void deactivate();
+
+}
diff --git a/controller/src/main/java/org/apache/ambari/resource/statemachine/RoleImpl.java b/controller/src/main/java/org/apache/ambari/resource/statemachine/RoleImpl.java
new file mode 100644
index 0000000..b254675
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/resource/statemachine/RoleImpl.java
@@ -0,0 +1,206 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.ambari.resource.statemachine;
+
+import org.apache.ambari.common.state.SingleArcTransition;
+import org.apache.ambari.common.state.StateMachine;
+import org.apache.ambari.common.state.StateMachineFactory;
+import org.apache.ambari.event.EventHandler;
+
+import com.google.inject.Inject;
+
+public class RoleImpl implements RoleFSM, EventHandler<RoleEvent> {
+
+  private final String roleName;
+  private final ServiceFSM service;
+  private static StateMachineInvokerInterface stateMachineInvoker;
+  @Inject
+  public static void setInvoker(StateMachineInvokerInterface sm) {
+    stateMachineInvoker = sm;
+  }
+  /* The state machine for the role looks like:
+   * (INACTIVE or FAIL) --S_START--> STARTING --S_START_SUCCESS--> ACTIVE
+   *                                --S_START_FAILURE--> FAIL
+   * (ACTIVE or FAIL) --S_STOP--> STOPPING --S_STOP_SUCCESS--> INACTIVE
+   *                             --S_STOP_FAILURE--> FAIL
+   */
+  
+  private static final StateMachineFactory 
+  <RoleImpl, RoleState, RoleEventType, RoleEvent> stateMachineFactory 
+         = new StateMachineFactory<RoleImpl, RoleState, RoleEventType, 
+         RoleEvent>(RoleState.INACTIVE)
+
+         //START event transitions
+         .addTransition(RoleState.INACTIVE, RoleState.STARTING, 
+             RoleEventType.START)
+             
+         .addTransition(RoleState.FAIL, RoleState.STARTING, 
+             RoleEventType.START)
+             
+          //START_SUCCESS event transitions   
+          //if one instance of the role starts up fine, we consider the service
+          //as ready for the 'safe-mode' kinds of checks
+         .addTransition(RoleState.STARTING, RoleState.ACTIVE,
+             RoleEventType.START_SUCCESS, new SuccessfulStartTransition())
+
+          //TODO: add support for a notion of a quorum of nodes that need to be up
+         .addTransition(RoleState.STARTING, RoleState.FAIL,
+             RoleEventType.START_FAILURE, new FailedStartTransition())
+
+             
+         .addTransition(RoleState.STARTING, RoleState.STOPPING, 
+             RoleEventType.STOP)
+             
+         .addTransition(RoleState.ACTIVE, RoleState.ACTIVE,
+             RoleEventType.START_SUCCESS)
+
+          //required number of nodes have this role started
+         .addTransition(RoleState.ACTIVE, RoleState.ACTIVE,
+             RoleEventType.START_FAILURE)
+             
+          //STOP event transitions   
+         .addTransition(RoleState.ACTIVE, RoleState.STOPPING, 
+             RoleEventType.STOP)
+
+         .addTransition(RoleState.FAIL, RoleState.STOPPING, 
+             RoleEventType.STOP)
+
+          //STOP_SUCCESS event transitions   
+         .addTransition(RoleState.STOPPING, RoleState.INACTIVE,
+             RoleEventType.STOP_SUCCESS, new SuccessfulStopTransition())
+
+         .addTransition(RoleState.INACTIVE, RoleState.INACTIVE,
+             RoleEventType.STOP_SUCCESS)
+
+          //enough nodes have already stopped
+         .addTransition(RoleState.INACTIVE, RoleState.INACTIVE,
+             RoleEventType.STOP_FAILURE)
+             
+          //STOP_FAILURE event transitions                
+         .addTransition(RoleState.STOPPING, RoleState.FAIL,
+             RoleEventType.STOP_FAILURE, new FailedStopTransition())
+             
+         .installTopology();
+  
+  private final StateMachine<RoleState, RoleEventType, RoleEvent>
+      stateMachine;
+  
+  public RoleImpl(ServiceFSM service, String roleName) {
+    this.roleName = roleName;
+    this.service = service;
+    stateMachine = stateMachineFactory.make(this);
+  }
+  
+  StateMachine<RoleState, RoleEventType, RoleEvent> getStateMachine() {
+    return stateMachine;
+  }
+  
+  @Override
+  public RoleState getRoleState() {
+    return stateMachine.getCurrentState();
+  }
+
+  @Override
+  public String getRoleName() {
+    return roleName;
+  }
+
+  @Override
+  public void handle(RoleEvent event) {
+    getStateMachine().doTransition(event.getType(), event);
+  }
+
+  @Override
+  public ServiceFSM getAssociatedService() {
+    return service;
+  }
+  
+  
+  static void sendEventToService(RoleImpl operand, RoleEvent event,
+      ServiceEventType serviceEvent) {
+    ServiceFSM service = operand.getAssociatedService();
+    stateMachineInvoker.getAMBARIEventHandler().handle(
+        new ServiceEvent(serviceEvent, service, 
+            operand));
+  }
+  
+ 
+  static class SuccessfulStartTransition implements 
+  SingleArcTransition<RoleImpl, RoleEvent> {
+
+    @Override
+    public void transition(RoleImpl operand, RoleEvent event) {
+      //if one instance of the role starts up fine, we consider the service
+      //as ready for the 'safe-mode' kinds of checks
+      sendEventToService(operand, event, ServiceEventType.ROLE_START_SUCCESS);
+    }
+  }
+  
+  static class FailedStartTransition implements 
+  SingleArcTransition<RoleImpl, RoleEvent>  {
+
+    @Override
+    public void transition(RoleImpl operand, RoleEvent event) {
+      //TODO: add support for the notion of a quorum
+      sendEventToService(operand, event, ServiceEventType.ROLE_START_FAILURE);
+    }
+  }
+  
+  static class SuccessfulStopTransition implements
+  SingleArcTransition<RoleImpl, RoleEvent> {
+    //TODO: figure out if we need a notion of quorum for stop success
+    @Override
+    public void transition(RoleImpl operand, RoleEvent event) {
+      sendEventToService(operand, event, ServiceEventType.ROLE_STOP_SUCCESS);
+    }
+  }
+  
+  static class FailedStopTransition implements
+  SingleArcTransition<RoleImpl, RoleEvent> {
+    //TODO: figure out if we need a notion of quorum for stop failure
+    @Override
+    public void transition(RoleImpl operand, RoleEvent event) {
+      sendEventToService(operand, event, ServiceEventType.ROLE_STOP_FAILURE);
+    }
+  }
+
+  @Override
+  public void activate() {
+    stateMachineInvoker.getAMBARIEventHandler()
+       .handle(new RoleEvent(RoleEventType.START, this));
+  }
+
+  @Override
+  public void deactivate() {
+    stateMachineInvoker.getAMBARIEventHandler()
+       .handle(new RoleEvent(RoleEventType.STOP, this));  
+  }
+
+  @Override
+  public boolean shouldStop() {
+    return getRoleState() == RoleState.STOPPING 
+        || getRoleState() == RoleState.INACTIVE;
+  }
+
+  @Override
+  public boolean shouldStart() {
+    return getRoleState() == RoleState.STARTING 
+        || getRoleState() == RoleState.ACTIVE;
+  }
+
+}
diff --git a/controller/src/main/java/org/apache/ambari/resource/statemachine/RoleState.java b/controller/src/main/java/org/apache/ambari/resource/statemachine/RoleState.java
new file mode 100644
index 0000000..deefce2
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/resource/statemachine/RoleState.java
@@ -0,0 +1,22 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.ambari.resource.statemachine;
+
+public enum RoleState {
+  INACTIVE, STARTING, ACTIVE, FAIL, STOPPING, 
+}
\ No newline at end of file
diff --git a/controller/src/main/java/org/apache/ambari/resource/statemachine/ServiceEvent.java b/controller/src/main/java/org/apache/ambari/resource/statemachine/ServiceEvent.java
new file mode 100644
index 0000000..a621420
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/resource/statemachine/ServiceEvent.java
@@ -0,0 +1,46 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.ambari.resource.statemachine;
+
+import org.apache.ambari.event.AbstractEvent;
+
+
+public class ServiceEvent extends AbstractEvent<ServiceEventType> {
+  private ServiceFSM service;
+  private RoleFSM role;
+  
+  public ServiceEvent(ServiceEventType eventType, ServiceFSM service) {
+    super(eventType);
+    this.service = service;
+  }
+  
+  public ServiceEvent(ServiceEventType eventType, ServiceFSM service, RoleFSM role) {
+    super(eventType);
+    this.service = service;
+    this.role = role;
+  }
+  
+  public ServiceFSM getService() {
+    return service;
+  }
+  
+  public RoleFSM getRole() {
+    return role;
+  }
+
+}
diff --git a/controller/src/main/java/org/apache/ambari/resource/statemachine/ServiceEventType.java b/controller/src/main/java/org/apache/ambari/resource/statemachine/ServiceEventType.java
new file mode 100644
index 0000000..b9d2c76
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/resource/statemachine/ServiceEventType.java
@@ -0,0 +1,52 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.ambari.resource.statemachine;
+
+public enum ServiceEventType {
+  
+  //Producer: Client, Cluster
+  START,
+
+  //Producer: Role
+  PRESTART_SUCCESS,
+  
+  //Producer: Role
+  PRESTART_FAILURE,
+  
+  //Producer: Client, Cluster
+  STOP,
+
+  //Producer: Service
+  AVAILABLE_CHECK_SUCCESS,
+  
+  //Producer: Service
+  AVAILABLE_CHECK_FAILURE,
+
+  //Producer: Role
+  ROLE_START_SUCCESS,
+
+  //Producer: Role
+  ROLE_START_FAILURE,
+
+  //Producer: Role
+  ROLE_STOP_SUCCESS,
+
+  //Producer: Role
+  ROLE_STOP_FAILURE
+  
+}
diff --git a/controller/src/main/java/org/apache/ambari/resource/statemachine/ServiceFSM.java b/controller/src/main/java/org/apache/ambari/resource/statemachine/ServiceFSM.java
new file mode 100644
index 0000000..0c87d33
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/resource/statemachine/ServiceFSM.java
@@ -0,0 +1,38 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.ambari.resource.statemachine;
+
+import java.util.List;
+
+public interface ServiceFSM {
+  
+  public ServiceState getServiceState();
+  
+  public String getServiceName();
+  
+  public ClusterFSM getAssociatedCluster();
+  
+  public boolean isActive();
+  
+  public List<RoleFSM> getRoles();
+  
+  public void activate();
+  
+  public void deactivate();
+
+}
diff --git a/controller/src/main/java/org/apache/ambari/resource/statemachine/ServiceImpl.java b/controller/src/main/java/org/apache/ambari/resource/statemachine/ServiceImpl.java
new file mode 100644
index 0000000..ad85653
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/resource/statemachine/ServiceImpl.java
@@ -0,0 +1,318 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.ambari.resource.statemachine;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.EnumSet;
+import java.util.Iterator;
+import java.util.List;
+
+import org.apache.ambari.common.state.MultipleArcTransition;
+import org.apache.ambari.common.state.SingleArcTransition;
+import org.apache.ambari.common.state.StateMachine;
+import org.apache.ambari.common.state.StateMachineFactory;
+import org.apache.ambari.event.EventHandler;
+
+import com.google.inject.Inject;
+
+public class ServiceImpl implements ServiceFSM, EventHandler<ServiceEvent> {
+
+  private final ClusterFSM clusterFsm;
+  
+  /* The state machine for the service looks like:
+   * (INACTIVE or FAIL) --START--> PRESTART
+   * PRESTART --PRESTART_FAILURE--> FAIL
+   * PRESTART --PRESTART_SUCCESS--> STARTING --ROLE_START_SUCCESS--> STARTED
+   *                                         --ROLE_START_FAILURE--> FAIL
+   * STARTED --AVAILABLE_CHECK_SUCCESS--> ACTIVE  (checks for things like safemode happen here)
+   * STARTED --AVAILABLE_CHECK_FAILURE--> FAIL
+   * (ACTIVE or FAIL) --STOP--> STOPPING --ROLE_STOP_SUCCESS--> INACTIVE
+   *                                     --ROLE_STOP_FAILURE--> FAIL
+   */
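+  /* Example walk-through (illustrative): START moves an INACTIVE service to
+   * PRESTART; PRESTART_SUCCESS triggers StartServiceTransition, which starts
+   * the first role. The service stays in STARTING until every role has
+   * reported ROLE_START_SUCCESS (RoleStartedTransition), at which point it
+   * is STARTED; AVAILABLE_CHECK_SUCCESS then promotes it to ACTIVE.
+   */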
+  private static StateMachineInvokerInterface stateMachineInvoker;
+  @Inject
+  public static void setInvoker(StateMachineInvokerInterface sm) {
+    stateMachineInvoker = sm;
+  }
+  private static final StateMachineFactory 
+  <ServiceImpl, ServiceState, ServiceEventType, ServiceEvent> 
+  stateMachineFactory 
+         = new StateMachineFactory<ServiceImpl, ServiceState, ServiceEventType,
+         ServiceEvent>(ServiceState.INACTIVE)
+         
+          .addTransition(ServiceState.INACTIVE, ServiceState.PRESTART, 
+             ServiceEventType.START)
+             
+         .addTransition(ServiceState.FAIL, ServiceState.PRESTART, 
+             ServiceEventType.START)
+             
+         .addTransition(ServiceState.PRESTART, ServiceState.FAIL, 
+             ServiceEventType.PRESTART_FAILURE, new StartFailTransition())  
+             
+         .addTransition(ServiceState.PRESTART, ServiceState.STARTING, 
+             ServiceEventType.PRESTART_SUCCESS, new StartServiceTransition())    
+          
+         .addTransition(ServiceState.STARTING, 
+             EnumSet.of(ServiceState.STARTED, ServiceState.STARTING), 
+             ServiceEventType.ROLE_START_SUCCESS,
+             new RoleStartedTransition())
+             
+         .addTransition(ServiceState.STARTING, ServiceState.FAIL, 
+             ServiceEventType.ROLE_START_FAILURE, new StartFailTransition())
+             
+         .addTransition(ServiceState.STARTED, ServiceState.ACTIVE,
+             ServiceEventType.AVAILABLE_CHECK_SUCCESS, new AvailableTransition())
+         
+         .addTransition(ServiceState.STARTED, ServiceState.FAIL,
+             ServiceEventType.AVAILABLE_CHECK_FAILURE, new StartFailTransition())
+                      
+         .addTransition(ServiceState.ACTIVE, ServiceState.ACTIVE, 
+             ServiceEventType.ROLE_START_SUCCESS)
+             
+         .addTransition(ServiceState.ACTIVE, ServiceState.STOPPING, 
+             ServiceEventType.STOP, new StopServiceTransition())
+             
+         .addTransition(ServiceState.STOPPING, 
+             EnumSet.of(ServiceState.INACTIVE, ServiceState.STOPPING),
+             ServiceEventType.ROLE_STOP_SUCCESS, new RoleStoppedTransition())
+             
+         .addTransition(ServiceState.STOPPING, ServiceState.FAIL, 
+             ServiceEventType.ROLE_STOP_FAILURE, new StopFailTransition())
+             
+         .addTransition(ServiceState.FAIL, ServiceState.STOPPING, 
+             ServiceEventType.STOP, new StopServiceTransition())
+             
+         .addTransition(ServiceState.INACTIVE, ServiceState.INACTIVE, 
+             ServiceEventType.ROLE_STOP_SUCCESS)
+             
+         .installTopology();
+  
+  private final StateMachine<ServiceState, ServiceEventType, ServiceEvent>
+      stateMachine;
+  private final List<RoleFSM> serviceRoles = new ArrayList<RoleFSM>();
+  private Iterator<RoleFSM> iterator;
+  private final String serviceName;
+
+  public ServiceImpl(String[] roles, ClusterFSM clusterFsm, String serviceName)
+      throws IOException {
+    this.clusterFsm = clusterFsm;
+    this.serviceName = serviceName;
+    setRoles(roles);
+    stateMachine = stateMachineFactory.make(this);
+  }
+    
+  private void setRoles(String[] roles) {
+    serviceRoles.clear();
+    //get the roles for this service
+    for (String role : roles) {
+      RoleImpl roleImpl = new RoleImpl(this, role);
+      serviceRoles.add(roleImpl);
+    }    
+  }
+
+  public StateMachine<ServiceState, ServiceEventType, ServiceEvent> getStateMachine() {
+    return stateMachine;
+  }
+  
+  @Override
+  public ServiceState getServiceState() {
+    return stateMachine.getCurrentState();
+  }
+
+  @Override
+  public void handle(ServiceEvent event) {
+    getStateMachine().doTransition(event.getType(), event);
+  }
+
+  @Override
+  public ClusterFSM getAssociatedCluster() {
+    return clusterFsm;
+  }
+  
+  @Override
+  public String getServiceName() {
+    return serviceName;
+  }
+    
+  private RoleFSM getFirstRole() {
+    //this call should reset the iterator
+    iterator = serviceRoles.iterator();
+    if (iterator.hasNext()) {
+      return iterator.next();
+    }
+    return null;
+  }
+  
+  private RoleFSM getNextRole() {
+    if (iterator.hasNext()) {
+      return iterator.next();
+    }
+    return null;
+  }
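+  /* Note: roles are driven one at a time. getFirstRole() resets the shared
+   * iterator and each subsequent ROLE_START_SUCCESS / ROLE_STOP_SUCCESS
+   * pulls the next role via getNextRole(), so starting and stopping proceed
+   * sequentially in declaration order rather than in parallel.
+   */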
+
+  static void sendEventToRole(RoleFSM role, RoleEventType roleEvent) {
+    stateMachineInvoker.getAMBARIEventHandler().handle(
+        new RoleEvent(roleEvent, role));
+  }
+  
+  static class StartServiceTransition implements 
+  SingleArcTransition<ServiceImpl, ServiceEvent>  {
+
+    @Override
+    public void transition(ServiceImpl operand, ServiceEvent event) {
+      RoleFSM firstRole = operand.getFirstRole();
+      if (firstRole != null) {
+        sendEventToRole(firstRole, RoleEventType.START);
+      }
+    } 
+  }
+  
+  static void sendEventToCluster(ClusterFSM cluster, ClusterEventType event){
+    stateMachineInvoker.getAMBARIEventHandler().handle(
+        new ClusterEvent(event, cluster));
+  }
+  
+  static class AvailableTransition implements 
+  SingleArcTransition<ServiceImpl, ServiceEvent>  {
+
+    @Override
+    public void transition(ServiceImpl operand, ServiceEvent event) {
+      if (((ClusterImpl)operand.getAssociatedCluster()).getState() 
+          == ClusterStateFSM.STARTING) {
+        //since we support starting services explicitly (without touching the 
+        //associated cluster), we need to check what the cluster state is
+        //before sending it any event
+        sendEventToCluster(operand.getAssociatedCluster(), ClusterEventType.START_SUCCESS);
+      }
+    } 
+  }
+  
+  static class FailureTransition implements 
+  SingleArcTransition<ServiceImpl, ServiceEvent>  {
+
+    private final ClusterStateFSM receivingClusterState;
+    private final ClusterEventType clusterEvent;
+    
+    protected FailureTransition(final ClusterStateFSM receivingClusterState,
+        final ClusterEventType clusterEvent){
+      this.receivingClusterState = receivingClusterState;
+      this.clusterEvent = clusterEvent;
+    }
+    
+    @Override
+    public void transition(ServiceImpl operand, ServiceEvent event) {
+      if (((ClusterImpl)operand.getAssociatedCluster()).getState() 
+          == receivingClusterState) {
+        //since we support starting/stopping services explicitly (without touching the 
+        //associated cluster), we need to check what the cluster state is
+        //before sending it any event
+        sendEventToCluster(operand.getAssociatedCluster(), clusterEvent);
+      }
+    } 
+  }
+  
+  
+  static class StartFailTransition extends FailureTransition {
+    protected StartFailTransition() {
+      super(ClusterStateFSM.STARTING, ClusterEventType.START_FAILURE);
+    }
+  }
+  
+  static class StopFailTransition extends FailureTransition {
+    protected StopFailTransition() {
+      super(ClusterStateFSM.STOPPING, ClusterEventType.STOP_FAILURE);
+    }
+  }
+  
+  static class StopServiceTransition implements 
+  SingleArcTransition<ServiceImpl, ServiceEvent>  {
+    @Override
+    public void transition(ServiceImpl operand, ServiceEvent event) {
+      RoleFSM firstRole = operand.getFirstRole();
+      if (firstRole != null){ 
+        sendEventToRole(firstRole, RoleEventType.STOP);
+      }
+    }
+  }
+  
+  static class RoleStartedTransition 
+  implements MultipleArcTransition<ServiceImpl, ServiceEvent, ServiceState>  {
+
+    @Override
+    public ServiceState transition(ServiceImpl operand, ServiceEvent event) {
+      //check whether all roles started, and if not, remain in the STARTING
+      //state, else move to the STARTED state
+      RoleFSM role = operand.getNextRole();
+      if (role != null) {
+        sendEventToRole(role,  RoleEventType.START);
+        return ServiceState.STARTING;
+      } else {
+        return ServiceState.STARTED;
+      }
+    }
+  }
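+  /* Both RoleStartedTransition and RoleStoppedTransition use
+   * MultipleArcTransition, which lets the transition pick the post-state at
+   * runtime: STARTING/STOPPING while roles remain on the iterator,
+   * STARTED/INACTIVE once it is exhausted. */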
+  
+  static class RoleStoppedTransition 
+  implements MultipleArcTransition<ServiceImpl, ServiceEvent, ServiceState>  {
+
+    @Override
+    public ServiceState transition(ServiceImpl operand, ServiceEvent event) {
+      //check whether all roles stopped, and if not, remain in the STOPPING
+      //state, else move to the INACTIVE state
+      RoleFSM role = operand.getNextRole();
+      if (role != null) {
+        sendEventToRole(role,  RoleEventType.STOP);
+        return ServiceState.STOPPING;
+      } else {
+        if (((ClusterImpl)operand.getAssociatedCluster()).getState() 
+            == ClusterStateFSM.STOPPING) {
+          //since we support stopping services explicitly (without stopping the 
+          //associated cluster), we need to check what the cluster state is
+          //before sending it any event
+          sendEventToCluster(operand.getAssociatedCluster(), ClusterEventType.STOP_SUCCESS);
+        }
+        return ServiceState.INACTIVE;
+      }
+    }
+  }
+
+  @Override
+  public void activate() {
+    stateMachineInvoker.getAMBARIEventHandler().handle(
+              new ServiceEvent(ServiceEventType.START, this));
+  }
+
+  @Override
+  public void deactivate() {
+    stateMachineInvoker.getAMBARIEventHandler().handle(
+              new ServiceEvent(ServiceEventType.STOP, this));
+  }
+
+  @Override
+  public boolean isActive() {
+    return getServiceState() == ServiceState.ACTIVE;
+  }
+
+  @Override
+  public List<RoleFSM> getRoles() {
+    return serviceRoles;
+  }
+
+}
diff --git a/controller/src/main/java/org/apache/ambari/resource/statemachine/ServiceState.java b/controller/src/main/java/org/apache/ambari/resource/statemachine/ServiceState.java
new file mode 100644
index 0000000..0360d49
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/resource/statemachine/ServiceState.java
@@ -0,0 +1,22 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.ambari.resource.statemachine;
+
+public enum ServiceState {
+  INACTIVE, PRESTART, STARTING, STARTED, ACTIVE, FAIL, STOPPING,
+}
\ No newline at end of file
diff --git a/controller/src/main/java/org/apache/ambari/resource/statemachine/StateMachineInvoker.java b/controller/src/main/java/org/apache/ambari/resource/statemachine/StateMachineInvoker.java
new file mode 100644
index 0000000..42bc294
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/resource/statemachine/StateMachineInvoker.java
@@ -0,0 +1,69 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.ambari.resource.statemachine;
+
+import org.apache.ambari.event.AsyncDispatcher;
+import org.apache.ambari.event.Dispatcher;
+import org.apache.ambari.event.EventHandler;
+
+import com.google.inject.Inject;
+import com.google.inject.Singleton;
+
+@Singleton
+class StateMachineInvoker implements StateMachineInvokerInterface {
+  
+  private Dispatcher dispatcher;
+  
+  @Inject
+  StateMachineInvoker() {
+    dispatcher = new AsyncDispatcher();
+    dispatcher.register(ClusterEventType.class, new ClusterEventDispatcher());
+    dispatcher.register(ServiceEventType.class, new ServiceEventDispatcher());
+    dispatcher.register(RoleEventType.class, new RoleEventDispatcher());
+    dispatcher.start();
+  }
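+  /* Events posted through getAMBARIEventHandler() are queued by the
+   * AsyncDispatcher and routed by event-type class to the per-type
+   * dispatchers below, each of which forwards the event to the FSM object
+   * carried inside it. */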
+  
+
+  public EventHandler getAMBARIEventHandler() {
+    return dispatcher.getEventHandler();
+  }
+
+  private static class ClusterEventDispatcher 
+  implements EventHandler<ClusterEvent> {
+    @Override
+    public void handle(ClusterEvent event) {
+      ((EventHandler<ClusterEvent>)event.getCluster()).handle(event);
+    }
+  }
+  
+  private static class ServiceEventDispatcher 
+  implements EventHandler<ServiceEvent> {
+    @Override
+    public void handle(ServiceEvent event) {
+      ((EventHandler<ServiceEvent>)event.getService()).handle(event);
+    }
+  }
+  
+  private static class RoleEventDispatcher 
+  implements EventHandler<RoleEvent> {
+    @Override
+    public void handle(RoleEvent event) {
+      ((EventHandler<RoleEvent>)event.getRole()).handle(event);
+    }
+  }  
+}
diff --git a/controller/src/main/java/org/apache/ambari/resource/statemachine/StateMachineInvokerInterface.java b/controller/src/main/java/org/apache/ambari/resource/statemachine/StateMachineInvokerInterface.java
new file mode 100644
index 0000000..ee2fad4
--- /dev/null
+++ b/controller/src/main/java/org/apache/ambari/resource/statemachine/StateMachineInvokerInterface.java
@@ -0,0 +1,27 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.ambari.resource.statemachine;
+
+import org.apache.ambari.event.EventHandler;
+
+import com.google.inject.ImplementedBy;
+
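+// Note: @ImplementedBy only supplies a default binding; a module can still
+// override it with an explicit bind(StateMachineInvokerInterface.class)
+// binding if a different invoker implementation is needed.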
+@ImplementedBy(StateMachineInvoker.class)
+public interface StateMachineInvokerInterface {
+  public EventHandler getAMBARIEventHandler();
+}
diff --git a/controller/src/main/java/org/apache/hms/controller/ClientHandler.java b/controller/src/main/java/org/apache/hms/controller/ClientHandler.java
deleted file mode 100755
index ed96543..0000000
--- a/controller/src/main/java/org/apache/hms/controller/ClientHandler.java
+++ /dev/null
@@ -1,320 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.controller;
-
-import java.io.EOFException;
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.List;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hms.controller.CommandHandler;
-import org.apache.hms.common.conf.CommonConfigurationKeys;
-import org.apache.hms.common.entity.cluster.MachineState;
-import org.apache.hms.common.entity.command.Command;
-import org.apache.hms.common.entity.command.CommandStatus;
-import org.apache.hms.common.entity.command.CreateCommand;
-import org.apache.hms.common.entity.command.DeleteCommand;
-import org.apache.hms.common.entity.command.StatusCommand;
-import org.apache.hms.common.entity.manifest.ClusterHistory;
-import org.apache.hms.common.entity.manifest.ClusterManifest;
-import org.apache.hms.common.entity.Response;
-import org.apache.hms.common.util.ExceptionUtil;
-import org.apache.hms.common.util.JAXBUtil;
-import org.apache.hms.common.util.ZookeeperUtil;
-import org.apache.zookeeper.CreateMode;
-import org.apache.zookeeper.KeeperException;
-import org.apache.zookeeper.ZooKeeper;
-import org.apache.zookeeper.ZooDefs.Ids;
-
-public class ClientHandler {
-  private static Log LOG = LogFactory.getLog(ClientHandler.class);
-  
-  private ZooKeeper zk;
-  
-  public ClientHandler(ZooKeeper zk) {
-    this.zk = zk;
-    try {
-      if (zk.exists(CommonConfigurationKeys.ZOOKEEPER_COMMAND_QUEUE_PATH_DEFAULT, null) == null) {
-        LOG.error("HMS command queue at " + CommonConfigurationKeys.ZOOKEEPER_COMMAND_QUEUE_PATH_DEFAULT + " doesn't exist");
-      }
-    } catch (Exception e) {
-      LOG.error(ExceptionUtil.getStackTrace(e));
-    }
-    LOG.info("Created one ClientHandler object");
-  }
-  
-  public String queueCmd(Command cmd) throws KeeperException, InterruptedException, IOException {
-    String path = zk.create(CommonConfigurationKeys.ZOOKEEPER_COMMAND_QUEUE_PATH_DEFAULT + "/cmd-", JAXBUtil.write(cmd), Ids.OPEN_ACL_UNSAFE,
-        CreateMode.PERSISTENT_SEQUENTIAL);
-    LOG.info("Queued command: " + cmd);
-    return path.substring(path.lastIndexOf('/') + 1);
-  }
-  
-//  public Response createCluster2(CreateClusterCommand cmd) throws IOException {
-//    LOG.info("Received COMMAND: " + cmd);
-//    String output = null;
-//    Response r = new Response();
-//    try {
-//      ((ClusterManifest) cmd.getClusterManifest()).load();
-//      String clusterName = cmd.getClusterManifest().getClusterName();
-//      String clusterPath = CommonConfigurationKeys.ZOOKEEPER_CLUSTER_ROOT_DEFAULT + "/" + clusterName;
-//      if (zk.exists(clusterPath, null) != null) {
-//        String msg = "Cluster [" + clusterName + "] already exists. CREATE operation aborted.";
-//        LOG.warn(msg);
-//        r.setOutput(msg);
-//        r.setCode(1);
-//        return r;
-//      }
-//      output = queueCmd(cmd);
-//    } catch (Exception e) {
-//      LOG.warn(ExceptionUtil.getStackTrace(e));
-//      r.setOutput(e.getMessage());
-//      r.setCode(1);
-//      return r;
-//    }
-//    r.setOutput(output);
-//    r.setCode(0);
-//    return r;
-//  }
-
-//  public Response createCluster(CreateCommand cmd) throws IOException {
-//    LOG.info("Received COMMAND: " + cmd);
-//    String output = null;
-//    Response r = new Response();
-//    try {
-//      String clusterPath = CommonConfigurationKeys.ZOOKEEPER_CLUSTER_ROOT_DEFAULT + "/" + cmd.getClusterName();
-//      if (zk.exists(clusterPath, null) != null) {
-//        String msg = "Cluster [" + cmd.getClusterName() + "] already exists. CREATE operation aborted.";
-//        LOG.warn(msg);
-//        r.setOutput(msg);
-//        r.setCode(1);
-//        return r;
-//      }
-//      output = queueCmd(cmd);
-//    } catch (Exception e) {
-//      LOG.warn(ExceptionUtil.getStackTrace(e));
-//      r.setOutput(e.getMessage());
-//      r.setCode(1);
-//      return r;
-//    }
-//    r.setOutput(output);
-//    r.setCode(0);
-//    return r;
-//  }
-//
-//  public Response deleteCluster(DeleteCommand cmd) throws IOException {
-//    LOG.info("Received COMMAND: " + cmd);
-//    String output = null;
-//    Response r = new Response();
-//    try {
-//      String clusterPath = CommonConfigurationKeys.ZOOKEEPER_CLUSTER_ROOT_DEFAULT + "/" + cmd.getClusterName();
-//      if ( zk.exists(clusterPath, null) == null) {
-//        String msg = "Cluster [" + cmd.getClusterName() + "] doesn't exist. Delete operation aborted.";
-//        LOG.warn(msg);
-//        r.setOutput(msg);
-//        r.setCode(1);
-//        return r;
-//      }
-//      output = queueCmd(cmd);
-//    } catch (Exception e) {
-//      LOG.warn(ExceptionUtil.getStackTrace(e));
-//      r.setOutput(e.getMessage());
-//      r.setCode(1);
-//      return r;
-//    }
-//    r.setOutput(output);
-//    r.setCode(0);
-//    return r;
-//  }
-  
-  public ClusterManifest checkClusterStatus(String clusterId) throws IOException {
-    String clusterPath = ZookeeperUtil.getClusterPath(clusterId);
-    try {
-      if(zk.exists(clusterPath, null) == null) {
-        throw new IOException("Cluster "+clusterId+" does not exist.");
-      }
-      ClusterHistory history = JAXBUtil.read(zk.getData(clusterPath, false, null), ClusterHistory.class);
-      int index = history.getHistory().size()-1;
-      ClusterManifest cm = history.getHistory().get(index);
-      return cm;
-    } catch(Throwable e) {
-      throw new IOException(e);
-    }
-  }
-  
-//  public Response checkStatus(StatusCommand cmd) throws IOException {
-//    LOG.info("Received COMMAND: " + cmd);
-//    Response r = new Response();
-//    try {
-//      String nodePath = cmd.getNodePath();
-//      if (nodePath != null) {
-//        if (zk.exists(nodePath, null) == null) {
-//          String msg = "Node " + nodePath + " doesn't exist";
-//          LOG.warn(msg);
-//          r.setOutput(msg);
-//          r.setCode(1);
-//          return r;
-//        }
-//        MachineState state = JAXBUtil.read(zk.getData(nodePath, false, null),
-//            MachineState.class);
-//        r.setOutput(state.toString());
-//        r.setCode(0);
-//        return r;
-//      }
-//      String cmdPath = CommonConfigurationKeys.ZOOKEEPER_COMMAND_QUEUE_PATH_DEFAULT + "/" + cmd.getCmdId();
-//      if ( zk.exists(cmdPath, null) == null) {
-//        String msg = "Command " + cmd.getCmdId() + " doesn't exist";
-//        LOG.warn(msg);
-//        r.setOutput(msg);
-//        r.setCode(1);
-//        return r;
-//      }
-//      String cmdStatusPath = cmdPath + CommandHandler.COMMAND_STATUS;
-//      CommandStatus status = null;
-//      try {
-//        status = JAXBUtil.read(zk.getData(cmdStatusPath, false, null), CommandStatus.class);
-//      } catch (KeeperException.NoNodeException e) {
-//        r.setOutput("Command " + cmd.getCmdId() + ": not yet started");
-//        r.setCode(0);
-//        return r;
-//      }
-//      StringBuilder sb = new StringBuilder(status.toString());
-//      List<String> children = zk.getChildren(cmdStatusPath, null);
-//      if (children != null) {
-//        for (String child : children) {
-//          ActionStatus as = JAXBUtil.read(zk.getData(cmdStatusPath + "/" + child, false, null), ActionStatus.class);
-//          sb.append("\nactionId=");
-//          sb.append(as.getActionId());
-//          sb.append(", host=");
-//          sb.append(as.getHost());
-//          sb.append(", status=");
-//          sb.append(as.getStatus());
-//          sb.append(", error msg: ");
-//          sb.append(as.getError());
-//        }
-//      }
-//      r.setOutput(sb.toString());
-//      r.setCode(0);
-//      return r;
-//    } catch (Exception e) {
-//      LOG.warn(ExceptionUtil.getStackTrace(e));
-//      r.setOutput(e.getMessage());
-//      r.setCode(1);
-//      return r;
-//    }
-//  }
-  
-  public MachineState checkNodeStatus(String nodePath) throws IOException {
-    LOG.info("Received Node Path: " + nodePath);
-    try {
-      if (zk.exists(nodePath, null) == null) {
-        String msg = "Node " + nodePath + " doesn't exist";
-        LOG.warn(msg);
-        throw new IOException(msg);
-      }
-      MachineState state = JAXBUtil.read(zk.getData(nodePath, false, null),
-          MachineState.class);
-      return state;
-    } catch (Exception e) {
-      LOG.warn(ExceptionUtil.getStackTrace(e));
-      throw new IOException(e);
-    }
-  }
-  
-  public CommandStatus checkCommandStatus(StatusCommand cmd) throws IOException {
-    try {
-      String cmdPath = CommonConfigurationKeys.ZOOKEEPER_COMMAND_QUEUE_PATH_DEFAULT + "/" + cmd.getCmdId();
-      if ( zk.exists(cmdPath, null) == null) {
-        String msg = "Command " + cmd.getCmdId() + " doesn't exist";
-        LOG.warn(msg);
-        throw new IOException(msg);
-      }
-      String cmdStatusPath = cmdPath + CommandHandler.COMMAND_STATUS;
-      CommandStatus status = null;
-      try {
-        status = JAXBUtil.read(zk.getData(cmdStatusPath, false, null),
-            CommandStatus.class);
-      } catch (KeeperException.NoNodeException e) {
-        String msg = "Command " + cmd.getCmdId() + ": not yet started";
-        throw new IOException(msg);
-      }
-      return status;
-    } catch (Exception e) {
-      LOG.warn(ExceptionUtil.getStackTrace(e));
-      throw new IOException(e);
-    }
-  }
-
-  public List<Command> listCommand() throws IOException {
-    List<Command> list = new ArrayList<Command>();
-    try {
-      String cmdPath = CommonConfigurationKeys.ZOOKEEPER_COMMAND_QUEUE_PATH_DEFAULT;
-      if(zk.exists(cmdPath, null) == null) {
-        throw new IOException("Command Queue does not exist.");
-      }
-      List<String> commands = zk.getChildren(cmdPath, null);
-      for(String command : commands) {
-        StringBuilder cmdStatusPath = new StringBuilder();
-        cmdStatusPath.append(cmdPath);
-        cmdStatusPath.append("/");
-        cmdStatusPath.append(command);
-        Command cmd = JAXBUtil.read(zk.getData(cmdStatusPath.toString(), false, null),
-          Command.class);
-        cmd.setId(command);
-        list.add(cmd);
-      }
-      return list;
-    } catch(Exception e) {
-      LOG.warn(ExceptionUtil.getStackTrace(e));
-      throw new IOException(e);      
-    }
-  }
-
-  public List<ClusterManifest> listClusters() throws IOException {
-    List<ClusterManifest> list = new ArrayList<ClusterManifest>();
-    try {
-      String cmdPath = CommonConfigurationKeys.ZOOKEEPER_CLUSTER_ROOT_DEFAULT;
-      if(zk.exists(cmdPath, null) == null) {
-        throw new IOException("No cluster exists.");
-      }
-      List<String> commands = zk.getChildren(cmdPath, null);
-      for(String command : commands) {
-        StringBuilder cmdStatusPath = new StringBuilder();
-        cmdStatusPath.append(cmdPath);
-        cmdStatusPath.append("/");
-        cmdStatusPath.append(command);
-        try {
-          ClusterHistory history = JAXBUtil.read(zk.getData(cmdStatusPath.toString(), false, null),
-            ClusterHistory.class);
-          int index = history.getHistory().size()-1;
-          ClusterManifest cluster = history.getHistory().get(index);
-          list.add(cluster);
-        } catch(EOFException skip) {
-          // Skip cluster if the cluster node is in the process of being created.
-        }
-      }
-      return list;
-    } catch(Exception e) {
-      LOG.warn(ExceptionUtil.getStackTrace(e));
-      throw new IOException(e);      
-    }
-  }
-}
diff --git a/controller/src/main/java/org/apache/hms/controller/CommandHandler.java b/controller/src/main/java/org/apache/hms/controller/CommandHandler.java
deleted file mode 100755
index fbebd66..0000000
--- a/controller/src/main/java/org/apache/hms/controller/CommandHandler.java
+++ /dev/null
@@ -1,1113 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.controller;
-
-import java.io.File;
-import java.io.IOException;
-import java.util.ArrayList;
-import java.util.Collections;
-import java.util.Date;
-import java.util.HashMap;
-import java.util.HashSet;
-import java.util.LinkedHashSet;
-import java.util.LinkedList;
-import java.util.List;
-import java.util.Map;
-import java.util.Set;
-import java.util.concurrent.LinkedBlockingQueue;
-import java.util.concurrent.atomic.AtomicInteger;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hms.controller.Controller;
-import org.apache.hms.common.conf.CommonConfigurationKeys;
-import org.apache.hms.common.entity.Status;
-import org.apache.hms.common.entity.action.Action;
-import org.apache.hms.common.entity.action.ActionDependency;
-import org.apache.hms.common.entity.action.ActionStatus;
-import org.apache.hms.common.entity.action.PackageAction;
-import org.apache.hms.common.entity.action.ScriptAction;
-import org.apache.hms.common.entity.cluster.MachineState;
-import org.apache.hms.common.entity.cluster.MachineState.StateEntry;
-import org.apache.hms.common.entity.cluster.MachineState.StateType;
-import org.apache.hms.common.entity.command.ClusterCommand;
-import org.apache.hms.common.entity.command.Command;
-import org.apache.hms.common.entity.command.CommandStatus;
-import org.apache.hms.common.entity.command.CreateClusterCommand;
-import org.apache.hms.common.entity.command.DeleteClusterCommand;
-import org.apache.hms.common.entity.command.UpgradeClusterCommand;
-import org.apache.hms.common.entity.command.CommandStatus.ActionEntry;
-import org.apache.hms.common.entity.command.CommandStatus.HostStatusPair;
-import org.apache.hms.common.entity.command.CreateCommand;
-import org.apache.hms.common.entity.command.DeleteCommand;
-import org.apache.hms.common.entity.manifest.ClusterHistory;
-import org.apache.hms.common.entity.manifest.ClusterManifest;
-import org.apache.hms.common.entity.manifest.ConfigManifest;
-import org.apache.hms.common.entity.manifest.Node;
-import org.apache.hms.common.entity.manifest.NodesManifest;
-import org.apache.hms.common.entity.manifest.PackageInfo;
-import org.apache.hms.common.entity.manifest.Role;
-import org.apache.hms.common.entity.manifest.SoftwareManifest;
-import org.apache.hms.common.util.ExceptionUtil;
-import org.apache.hms.common.util.FileUtil;
-import org.apache.hms.common.util.JAXBUtil;
-import org.apache.hms.common.util.ZookeeperUtil;
-import org.apache.zookeeper.AsyncCallback.Children2Callback;
-import org.apache.zookeeper.AsyncCallback.VoidCallback;
-import org.apache.zookeeper.CreateMode;
-import org.apache.zookeeper.KeeperException;
-import org.apache.zookeeper.WatchedEvent;
-import org.apache.zookeeper.Watcher;
-import org.apache.zookeeper.ZooKeeper;
-import org.apache.zookeeper.Watcher.Event;
-import org.apache.zookeeper.ZooDefs.Ids;
-import org.apache.zookeeper.data.Stat;
-
-
-public class CommandHandler implements Children2Callback, VoidCallback, Watcher {
-  private static Log LOG = LogFactory.getLog(CommandHandler.class);
-  private static String AGENT_ACTION = "/action";
-  private static String AGENT_STATUS = "/status";
-  private static String AGENT_WORKLOG = "/worklog";
-  public static String COMMAND_STATUS = "/status";
-  private static AtomicInteger actionCount = new AtomicInteger();
-  
-  private final ZooKeeper zk;
-  private final int handlerCount;
-  private final LinkedBlockingQueue<String> tasks = new LinkedBlockingQueue<String>();
-  // access to watchedMachineNodes needs to be synchronized on itself
-  private final Map<String, Set<String>> watchedMachineNodes = new HashMap<String, Set<String>>();
-  private Handler handlers[];
-  private volatile boolean running = true; // true while controller runs
-  
-  public CommandHandler(ZooKeeper zk, int handlerCount) throws KeeperException, InterruptedException {
-    this.zk = zk;
-    this.handlerCount = handlerCount;
-    zk.getChildren(CommonConfigurationKeys.ZOOKEEPER_COMMAND_QUEUE_PATH_DEFAULT, this);
-    zk.getChildren(CommonConfigurationKeys.ZOOKEEPER_LIVE_CONTROLLER_PATH_DEFAULT, this);
-    zk.getChildren(CommonConfigurationKeys.ZOOKEEPER_STATUS_QUEUE_PATH_DEFAULT, this);
-  }
-
-  @Override
-  public void processResult(int rc, String path, Object ctx) {
-  }
-  
-  @Override
-  public void processResult(int rc, String path, Object ctx,
-      List<String> children, Stat stat) {
-    for (String child : children) {
-      tasks.add(path + "/" + child);
-    }
-  }
-  
-  @Override
-  public void process(WatchedEvent event) {
-    String path = event.getPath();
-    LOG.info("Triggered path: "+path);
-    if (event.getType() == Event.EventType.NodeChildrenChanged) {
-      if (path.equals(CommonConfigurationKeys.ZOOKEEPER_LIVE_CONTROLLER_PATH_DEFAULT)) {
-        zk.getChildren(CommonConfigurationKeys.ZOOKEEPER_COMMAND_QUEUE_PATH_DEFAULT, this, this, null);
-      } else {
-        zk.getChildren(path, this, this, null);
-      }
-    } else if (event.getType() == Event.EventType.NodeDataChanged) {
-      tasks.add(path);
-    }
-  }
-  
-  private boolean isMachineNode(String path) {
-    if (path.startsWith(CommonConfigurationKeys.ZOOKEEPER_CLUSTER_ROOT_DEFAULT)
-        && path.split("/").length == 4) {
-      return true;
-    }
-    return false;
-  }
-  
-  private void checkCmds(String taskPath) throws IOException, KeeperException,
-      InterruptedException {
-    Set<String> cmdStatusPaths = null;
-    synchronized (watchedMachineNodes) {
-      cmdStatusPaths = watchedMachineNodes.remove(taskPath);
-    }
-    /*
-     * current controller must already own the locks to these cmds. Otherwise,
-     * these cmds wouldn't have been put into watchedMachineNodes.
-     */
-    if (cmdStatusPaths != null) {
-      for (String path : cmdStatusPaths) {
-        queueActions(path);
-      }
-    }
-  }
-  
-  private void handle() throws InterruptedException, KeeperException,
-      IOException {
-    boolean workOnIt = true;
-    String taskPath = tasks.take();
-    if (isMachineNode(taskPath)) {
-      // machine state change
-      LOG.info("machine state changed: " + taskPath);
-      checkCmds(taskPath);
-      return;
-    }
-    try {
-      // trying to acquire the lock
-      zk.create(CommonConfigurationKeys.ZOOKEEPER_LOCK_QUEUE_PATH_DEFAULT + "/"
-          + taskPath.replace('/', '.'), new byte[0], Ids.OPEN_ACL_UNSAFE,
-          CreateMode.EPHEMERAL);
-    } catch (KeeperException.NodeExistsException e) {
-      // client cmd or action status has been processed
-      return;
-    }
-    // got the lock
-    if (taskPath
-        .startsWith(CommonConfigurationKeys.ZOOKEEPER_COMMAND_QUEUE_PATH_DEFAULT)) {
-      String cmd = taskPath.substring(taskPath.lastIndexOf('/') + 1);
-      if (cmd.indexOf('-') < 0) {
-        throw new IOException("Unknown command: " + cmd);
-      }
-      byte[] data = zk.getData(taskPath, false, null);
-      Command command = JAXBUtil.read(data, Command.class);
-      String commandStatusPath = ZookeeperUtil.getCommandStatusPath(taskPath);
-      Stat stat = zk.exists(commandStatusPath, false);
-      if(stat!=null) {
-        byte[] test = zk.getData(commandStatusPath, false, stat);
-        CommandStatus status = JAXBUtil.read(test, CommandStatus.class);
-        if(status.getStatus()==Status.SUCCEEDED || status.getStatus()==Status.FAILED) {
-          workOnIt=false;
-        }
-      }
-      if(workOnIt) {
-        try {
-          if (command instanceof DeleteCommand) {
-            deleteCluster(taskPath, (DeleteCommand) command);
-          } else if (command instanceof CreateClusterCommand) {
-            createCluster(taskPath, (CreateClusterCommand) command);
-          } else if (command instanceof DeleteClusterCommand) {
-            deleteCluster(taskPath, (DeleteClusterCommand) command);
-          } else if (command instanceof UpgradeClusterCommand) {
-            updateCluster(taskPath, (ClusterCommand) command);
-          } else {
-            throw new IOException("Unknown command: " + command);
-          }
-        } catch(KeeperException e) {
-          unlockCommand(taskPath);          
-          // Look for other command to work.
-          zk.getChildren(CommonConfigurationKeys.ZOOKEEPER_COMMAND_QUEUE_PATH_DEFAULT, this);
-        }
-      } else {
-        unlockCommand(taskPath);
-        // Look for other command to work.
-        zk.getChildren(CommonConfigurationKeys.ZOOKEEPER_COMMAND_QUEUE_PATH_DEFAULT, this);
-      }
-    } else if (taskPath
-        .startsWith(CommonConfigurationKeys.ZOOKEEPER_STATUS_QUEUE_PATH_DEFAULT)) {
-      updateSystemState(taskPath);
-    } else if (taskPath
-        .startsWith(CommonConfigurationKeys.ZOOKEEPER_CLUSTER_ROOT_DEFAULT)) {
-      String queueNode = taskPath.substring(0, taskPath.lastIndexOf('/'));
-      LOG.info("queueNode is " + queueNode);
-      if (queueNode.endsWith(AGENT_STATUS)) {
-        // agent status event
-        updateSystemState(taskPath);
-      } else if (queueNode.endsWith(AGENT_ACTION)) {
-        // action being queued
-        runFakeAgent(taskPath);
-      } else {
-        throw new IOException("Unknown event: " + taskPath);
-      }
-    } else {
-      throw new IOException("Unexpected request: " + taskPath);
-    }
-  }
-  
-  /**
-   * Simulate Agent Status
-   */
-  private void runFakeAgent(String actionPath) throws KeeperException,
-      InterruptedException, IOException {
-    LOG.info("Fake agent received action event at " + actionPath);
-    Thread.sleep(1000);
-    Stat stat = zk.exists(actionPath, false);
-    if (stat == null) {
-      // action has been worked on
-      return;
-    }
-    Action action = JAXBUtil.read(zk.getData(actionPath, false, null),
-        Action.class);
-    // create worklog node
-    zk.create(actionPath + AGENT_WORKLOG, new byte[0], Ids.OPEN_ACL_UNSAFE,
-        CreateMode.PERSISTENT);
-    ActionStatus status = new ActionStatus();
-    status.setStatus(Status.SUCCEEDED);
-    status.setError("Failure is unavoidable");
-    status.setCmdPath(action.getCmdPath());
-    status.setActionId(action.getActionId());
-    status.setActionPath(actionPath);
-    String actionQueue = actionPath.substring(0, actionPath.lastIndexOf('/'));
-    String hostNode = actionQueue.substring(0, actionQueue.lastIndexOf('/'));
-    status.setHost(hostNode);
-    String statusNode = zk.create(hostNode + AGENT_STATUS + "/" + "status-",
-        JAXBUtil.write(status), Ids.OPEN_ACL_UNSAFE,
-        CreateMode.PERSISTENT_SEQUENTIAL);
-    LOG.info("Fake agent queued status object at " + statusNode);
-  }
-  
-  private void updateSystemState(String statusPath)
-      throws InterruptedException, KeeperException, IOException {
-    LOG.info("status path is: " + statusPath);
-    Stat stat = zk.exists(statusPath, false);
-    if (stat == null) {
-      /* status has been previously processed by either this or another controller
-       * delete the status lock if it exists 
-       */
-      LOG.info("status has been previously processed: " + statusPath);
-      statusCleanup(statusPath, null);
-      return;
-    }
-    ActionStatus actionStat = JAXBUtil.read(
-        zk.getData(statusPath, false, null), ActionStatus.class);
-    if (actionStat.getStatus() != Status.SUCCEEDED
-        && actionStat.getStatus() != Status.FAILED)
-      throw new IOException("Invalid action status: " + actionStat.getStatus()
-          + " from action " + actionStat.getActionPath());
-    String actionPath = actionStat.getActionPath();
-    stat = zk.exists(actionPath, false);
-    if (stat == null) {
-      /* status has been previously processed by either this or another controller
-       * delete the status znode, plus action and status locks 
-       */
-      statusCleanup(statusPath, actionPath);
-      return;
-    }
-    
-    String actionQueue = actionPath.substring(0, actionPath.lastIndexOf('/'));
-    String hostNode = actionQueue.substring(0, actionQueue.lastIndexOf('/'));
-    // update system status
-    if (actionStat.getStatus() == Status.SUCCEEDED) {
-      Action action = JAXBUtil.read(zk.getData(actionPath, false, null),
-          Action.class);
-      MachineState machineState = JAXBUtil.read(zk.getData(hostNode, false, stat),
-          MachineState.class);
-      boolean retry = true;
-      while (retry) {
-        retry = false;
-        Set<StateEntry> states = machineState.getStates();
-        if (states == null) {
-          states = new HashSet<StateEntry>();
-        }
-        if(action.getExpectedResults()!=null) {
-          states.addAll(action.getExpectedResults());
-        }
-        machineState.setStates(states);
-        try {
-          stat = zk.setData(hostNode, JAXBUtil.write(machineState), stat
-              .getVersion());
-        } catch (KeeperException.BadVersionException e) {
-          LOG.info("version mismatch: expected=" + stat.getVersion() + " msg: "
-              + e.getMessage());
-          machineState = JAXBUtil.read(zk.getData(hostNode, false, stat),
-              MachineState.class);
-          LOG.info("new version is " + stat.getVersion());
-          retry = true;
-        }
-      }
-    }
-    
-    // update cmd status
-    if (actionStat.getStatus() != Status.SUCCEEDED) {
-      try {
-        zk.create(actionStat.getCmdPath() + COMMAND_STATUS + "/"
-            + actionStat.getHost().replace('/', '.') + "-"
-            + actionStat.getActionId(), JAXBUtil.write(actionStat),
-            Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
-      } catch (KeeperException.NodeExistsException e) {
-      }
-    }
-
-    String host = hostNode.substring(hostNode.lastIndexOf('/') + 1);
-    String cmdStatusPath = actionStat.getCmdPath() + COMMAND_STATUS;
-    CommandStatus cmdStatus = JAXBUtil.read(zk.getData(cmdStatusPath, false,
-        stat), CommandStatus.class);
-    boolean retry = true;
-    while (retry) {
-      retry = false;
-      boolean found = false;
-      boolean needUpdate = false;
-      for (ActionEntry actionEntry : cmdStatus.getActionEntries()) {
-        if (actionEntry.getAction().getActionId() == actionStat.getActionId()) {
-          int failCount = 0;
-          if(actionStat.getStatus()==Status.FAILED) {
-            // Count current action status if it has failed.
-            failCount++;
-          }
-          for (HostStatusPair hsp : actionEntry.getHostStatus()) {
-            if(hsp.getStatus()==Status.FAILED) {
-              // Walk through existing hosts, and count number of failed
-              // actions.
-              failCount++;
-            }
-            if (host.equals(hsp.getHost())) {
-              found = true;
-              Status status = hsp.getStatus();
-              if (status == Status.UNQUEUED || status == Status.QUEUED
-                  || status == Status.STARTED) {
-                hsp.setStatus(actionStat.getStatus());
-                cmdStatus.setCompletedActions(cmdStatus.getCompletedActions() + 1);                
-                if (cmdStatus.getCompletedActions() == cmdStatus.getTotalActions()) {
-                  Status overallStatus = Status.SUCCEEDED;
-                  for (ActionEntry aEntry : cmdStatus.getActionEntries()) {
-                    boolean shouldBreak = false;
-                    for (HostStatusPair hspair : aEntry.getHostStatus()) {
-                      if (hspair.getStatus() != Status.SUCCEEDED) {
-                        overallStatus = Status.FAILED;
-                        shouldBreak = true;
-                        break;
-                      }
-                    }
-                    if (shouldBreak)
-                      break;
-                  }
-                  cmdStatus.setStatus(overallStatus);
-                  cmdStatus.setEndTime(new Date(System.currentTimeMillis()).toString());
-                  updateClusterStatus(actionStat.getCmdPath());
-                } else if(failCount==actionEntry.getHostStatus().size()) {
-                  // If all nodes failed the action, set the command to fail.
-                  cmdStatus.setStatus(Status.FAILED);
-                  cmdStatus.setEndTime(new Date(System.currentTimeMillis()).toString());
-                  updateClusterStatus(actionStat.getCmdPath());
-                }
-                needUpdate = true;
-                LOG.info("Fail count:"+failCount);
-                break;
-              } else if (status == actionStat.getStatus()) {
-                // duplicate status update, nothing to be done
-              } else {
-                throw new IOException("UNEXPECTED action status: " + actionStat.getStatus()
-                    + " from action " + actionPath + ", current host status is " + status);
-              }
-            }
-          }
-          if (found) {
-            break;
-          }
-        }
-      }
-      if (!found) {
-        throw new IOException("UNEXPECTED: can't find action " + actionPath);
-      }
-      if (needUpdate) {
-        try {
-          stat = zk.setData(cmdStatusPath, JAXBUtil.write(cmdStatus), stat
-              .getVersion());
-          if(cmdStatus.getStatus() == Status.SUCCEEDED || cmdStatus.getStatus() == Status.FAILED) {
-            unlockCommand(actionStat.getCmdPath());
-          }
-        } catch (KeeperException.BadVersionException e) {
-          LOG.info("version mismatch: expected=" + stat.getVersion() + " msg: "
-              + e.getMessage());
-          cmdStatus = JAXBUtil.read(zk.getData(cmdStatusPath, false, stat),
-              CommandStatus.class);
-          LOG.info("new version is " + stat.getVersion());
-          retry = true;
-        }
-      }
-    }
-
-    statusCleanup(statusPath, actionPath);
-    LOG.info("Deleted action:" + actionPath + ", status:" + statusPath);
-  }
-
-  public void unlockCommand(String cmdPath) {
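-    // Command locks are flat znodes under the lock queue; '/' characters in
-    // the command path are encoded as '.' to form the lock node name.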
-    String cmdLock = cmdPath.replace('/', '.');
-    try {
-      deleteIfExists(CommonConfigurationKeys.ZOOKEEPER_LOCK_QUEUE_PATH_DEFAULT+"/"+cmdLock);
-    } catch (InterruptedException e) {
-      LOG.warn("Unable to unlock:" + cmdPath);
-    } catch (KeeperException e) {
-      LOG.warn("Unable to unlock:" + cmdPath);
-    }
-  }
-  
-  private void updateClusterStatus(String cmdPath) throws IOException, KeeperException, InterruptedException {
-    Stat current = zk.exists(cmdPath, false);
-    ClusterCommand cmd = JAXBUtil.read(zk.getData(cmdPath, false, current), ClusterCommand.class);
-    String clusterPath = ZookeeperUtil.getClusterPath(cmd.getClusterManifest().getClusterName());
-    boolean retry = true;
-    while(retry) {
-      retry = false;
-      try {
-        if(cmd instanceof DeleteClusterCommand) {
-          deleteIfExists(clusterPath);
-        }
-        unlockCluster(cmd.getClusterManifest().getClusterName());
-      } catch(KeeperException.BadVersionException e) {
-        retry = true;
-        LOG.warn(ExceptionUtil.getStackTrace(e));
-        LOG.warn("version mismatch: expected=" + current.getVersion());
-        zk.getData(clusterPath, false, current);
-        LOG.warn("Cluster status update failed.  Cluster ID:"+cmd.getClusterManifest().getClusterName()+" state: "+ current.getVersion());
-      }
-    }
-  }
-  
-  private void statusCleanup(String statusPath, String actionPath)
-      throws InterruptedException, KeeperException {
-    deleteIfExists(actionPath);
-    deleteIfExists(statusPath);
-    // delete action lock for fake agent
-    if (actionPath != null && actionPath.length() > 0) {
-      deleteIfExists(CommonConfigurationKeys.ZOOKEEPER_LOCK_QUEUE_PATH_DEFAULT
-          + "/" + actionPath.replace('/', '.'));
-    }
-    // delete status lock
-    if (statusPath != null && statusPath.length() > 0) {
-      deleteIfExists(CommonConfigurationKeys.ZOOKEEPER_LOCK_QUEUE_PATH_DEFAULT
-          + "/" + statusPath.replace('/', '.'));
-    }
-  }
-  
-  private void deleteIfExists(String path) throws InterruptedException,
-      KeeperException {
-    if (path == null || path.length() == 0) {
-      return;
-    }
-    Stat stat = zk.exists(path, false);
-    if (stat == null) {
-      return;
-    }
-    List<String> children = null;
-    try {
-      children = zk.getChildren(path, null);
-      if (children != null) {
-        for (String child : children) {
-          deleteIfExists(path + "/" + child);
-        }
-      }
-      zk.delete(path, -1);
-    } catch (KeeperException.NoNodeException e) {
-      LOG.info(ExceptionUtil.getStackTrace(e));
-    }
-  }
-
-  private void createNodeIfNecessary(String path, byte[] data) throws KeeperException,
-      InterruptedException {
-    try {
-      zk.create(path, data, Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
-    } catch (KeeperException.NodeExistsException e) {
-      // node already exists; nothing to do
-    }
-  }
-  
-  private void createAndWatchClusterNodes(ClusterManifest cm, String cmdStatusPath)
-    throws KeeperException, InterruptedException, IOException  {
-    CommandStatus cmdStatus = JAXBUtil.read(zk.getData(cmdStatusPath, false, null), CommandStatus.class);
-    List<ActionEntry> list = cmdStatus.getActionEntries();
-    HashSet<String> hosts = new HashSet<String>();
-    for(ActionEntry a : list) {
-      List<HostStatusPair> hsList = a.getHostStatus();
-      for(HostStatusPair hsp : hsList) {
-        hosts.add(hsp.getHost());
-      }
-    }
-    ClusterHistory history;
-    try {
-      String path = ZookeeperUtil.getClusterPath(cm.getClusterName());
-      Stat stat = zk.exists(path, false);
-      byte[] buffer = zk.getData(path, false, stat);
-      history = JAXBUtil.read(buffer, ClusterHistory.class);
-    } catch(KeeperException.NoNodeException e) {
-      history = new ClusterHistory();
-      ArrayList<ClusterManifest> manifests = new ArrayList<ClusterManifest>();
-      manifests.add(cm);
-      history.setHistory(manifests);      
-    }
-    createAndWatchClusterNodes(cmdStatus.getClusterName(), hosts, history);
-  }
-  
-  private void createAndWatchClusterNodes(String cluster , Set<String> hosts, ClusterHistory ch) throws KeeperException, InterruptedException, IOException {
-    byte[] empty = new byte[0];
-    String clusterNode = ZookeeperUtil.getClusterPath(cluster);
-    createNodeIfNecessary(clusterNode, JAXBUtil.write(ch));
-    // create host nodes and queues
-    for (String host : hosts) {
-      String hostNode = clusterNode + "/" + host;
-      createNodeIfNecessary(hostNode, JAXBUtil.write(new MachineState()));
-      String actionQueue = hostNode + AGENT_ACTION;
-      createNodeIfNecessary(actionQueue, empty);
-      String statusQueue = hostNode + AGENT_STATUS;
-      createNodeIfNecessary(statusQueue, empty);
-      // watch on agent status queue
-      zk.getChildren(statusQueue, this);
-      // to run fake agent, watch on action queue also
-      //zk.getChildren(actionQueue, this);
-    }
-  }
-  
-
-  private void commitCommandPlan(String cmdStatusPath, CommandStatus cmdStatus, List<ActionEntry> actionEntries) throws KeeperException, InterruptedException, IOException {
-    cmdStatus.setActionEntries(actionEntries);
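-    // Count one "action" per (action, host) pair; completedActions is
-    // incremented the same way as individual host status updates arrive.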
-    int totalActions = 0;
-    for (ActionEntry a : actionEntries) {
-      totalActions += a.getHostStatus().size();
-    }
-    cmdStatus.setTotalActions(totalActions);
-    // write out the plan that is captured in cmdStatus
-    zk.create(cmdStatusPath, JAXBUtil.write(cmdStatus), Ids.OPEN_ACL_UNSAFE,
-        CreateMode.PERSISTENT);
-  }
-  
-  private Set<String> convertRolesToHosts(NodesManifest nodesManifest, Set<String> roles) {
-    Set<String> hosts = new HashSet<String>();
-    if (roles==null) {
-      // If roles are unspecified, expand the unique host list
-      for(Role collectRole : nodesManifest.getRoles()) {
-        String[] hostList = collectRole.getHosts();
-        for(String host : hostList) {
-          hosts.add(host);
-        }
-      }
-    } else {
-      for(String role : roles) {
-        for(Role testRole : nodesManifest.getRoles()) {
-          if(role.equals(testRole.getName())) {
-            String[] hostList = testRole.getHosts();
-            for(String host : hostList) {
-              hosts.add(host);
-            }
-          }
-        }
-      }
-    }
-    return hosts;
-  }
-  
-  private PackageInfo[] convertRolesToPackages(SoftwareManifest softwareManifest, String role) {
-    Set<PackageInfo> packages = new LinkedHashSet<PackageInfo>();
-    for(Role tmp : softwareManifest.getRoles()) {
-      // A Role never equals a String, so compare against the role name.
-      if(role==null || role.equals(tmp.getName())) {
-        for(PackageInfo p : tmp.getPackages()) {
-          packages.add(p);
-        }
-      }
-    }
-    return packages.toArray(new PackageInfo[packages.size()]);
-  }
-  
-  private List<HostStatusPair> setHostStatus(Set<String> hosts, Status status) {
-    List<HostStatusPair> nodesList = new ArrayList<HostStatusPair>();
-    for(String node : hosts) {
-      HostStatusPair hsp = new HostStatusPair(node, status);
-      nodesList.add(hsp);
-    }
-    return nodesList;
-  }
-  
-  /**
-   * Create a lock for serializing cluster related commands.
-   * @param taskPath task path re-queued for a later attempt if the lock
-   * cannot be acquired
-   * @param clusterName name of the cluster to lock
-   * @throws KeeperException
-   * @throws InterruptedException
-   * @throws IOException
-   */
-  private void lockCluster(String taskPath, String clusterName) throws KeeperException, InterruptedException, IOException {
-    StringBuilder path = new StringBuilder();
-    path.append(CommonConfigurationKeys.ZOOKEEPER_LOCK_QUEUE_PATH_DEFAULT);
-    path.append("/");
-    path.append("cluster.");
-    path.append(clusterName);
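-    // The lock is an ephemeral znode, so it is released automatically if this
-    // controller's ZooKeeper session expires.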
-    try {
-      zk.create(path.toString(), new byte[0], Ids.OPEN_ACL_UNSAFE,
-          CreateMode.EPHEMERAL);
-    } catch(KeeperException e) {
-      tasks.add(taskPath);
-      throw e;
-    }
-  }
-
-  /**
-   * Release the cluster lock so the next cluster related command can proceed.
-   * @param clusterName name of the cluster to unlock
-   * @throws KeeperException
-   * @throws InterruptedException
-   * @throws IOException
-   */
-  public void unlockCluster(String clusterName) throws KeeperException, InterruptedException, IOException {
-    try {
-      StringBuilder path = new StringBuilder();
-      path.append(CommonConfigurationKeys.ZOOKEEPER_LOCK_QUEUE_PATH_DEFAULT);
-      path.append("/");
-      path.append("cluster.");
-      path.append(clusterName);
-      Stat stat = zk.exists(path.toString(), false);
-      zk.delete(path.toString(), stat.getVersion());
-    } catch(KeeperException.NoNodeException e) {
-      // lock node already gone; the cluster is effectively unlocked
-    }
-  }
-
-  /**
-   * Check all clusters for nodes that are already in use.
-   * @param nm nodes manifest of the cluster being created
-   * @return true if any node is already used by another cluster.
-   * @throws InterruptedException
-   * @throws KeeperException
-   */
-  private boolean checkNodesInUse(NodesManifest nm) throws KeeperException, InterruptedException {
-    Set<String> hosts = convertRolesToHosts(nm, null);
-    List<String> children = zk.getChildren(CommonConfigurationKeys.ZOOKEEPER_CLUSTER_ROOT_DEFAULT, null);
-    Stat stat = new Stat();
-    boolean result = false;
-    for(String cluster : children) {
-      try {
-        LOG.info("Check "+cluster);
-        String path = ZookeeperUtil.getClusterPath(cluster);
-        byte[] data = zk.getData(path, false, stat);
-        ClusterHistory ch = JAXBUtil.read(data, ClusterHistory.class);
-        int index = ch.getHistory().size() - 1;
-        ClusterManifest cm = ch.getHistory().get(index);
-        Set<String> test = convertRolesToHosts(cm.getNodes(), null);
-        // Intersect a copy: calling retainAll on the candidate set itself
-        // would empty it at the first cluster sharing no nodes, hiding any
-        // overlap with later clusters.
-        Set<String> overlap = new HashSet<String>(hosts);
-        overlap.retainAll(test);
-        if(!overlap.isEmpty()) {
-          result = true;
-          break;
-        }
-      } catch(Exception e) {
-        LOG.error(ExceptionUtil.getStackTrace(e));
-      }
-    }
-    return result;
-  }
-  
-  /**
-   * Update ZooKeeper with a failed command status and release any locks held
-   * by the command.
-   * @param path path of the command node
-   * @param cmd the command that failed
-   * @throws IOException
-   * @throws KeeperException
-   * @throws InterruptedException
-   */
-  public void failCommand(String path, Command cmd) throws IOException, KeeperException, InterruptedException {
-    String cmdStatusPath = ZookeeperUtil.getCommandStatusPath(path);
-    CommandStatus cmdStatus = new CommandStatus();
-    String currentTime = new Date(System.currentTimeMillis()).toString();
-    cmdStatus.setStartTime(currentTime);
-    cmdStatus.setEndTime(currentTime);
-    cmdStatus.setStatus(Status.FAILED);
-    cmdStatus.setTotalActions(0);
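-    // Create the status node if it is absent, otherwise overwrite it,
-    // retrying on version conflicts with other controllers.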
-    boolean retry = true;
-    while(retry) {
-      try {
-        Stat stat = zk.exists(cmdStatusPath, false);
-        if(stat==null) {
-          zk.create(cmdStatusPath, JAXBUtil.write(cmdStatus), Ids.OPEN_ACL_UNSAFE,
-              CreateMode.PERSISTENT);
-        } else {
-          zk.setData(cmdStatusPath, JAXBUtil.write(cmdStatus), stat.getVersion());
-        }
-        retry = false;
-      } catch(KeeperException.BadVersionException e) {
-        retry = true;
-      }
-    }
-    if(cmd instanceof ClusterCommand) {
-      try {
-        String clusterName = ((ClusterCommand) cmd).getClusterManifest().getClusterName();
-        unlockCluster(clusterName);
-      } catch(NullPointerException e) {
-        // Ignore if the cluster has not been locked.
-      }
-    }
-    String cmdPath = CommonConfigurationKeys.ZOOKEEPER_COMMAND_QUEUE_PATH_DEFAULT + "/" + cmd.getId();
-    unlockCommand(cmdPath);
-  }
-  
-  private void createCluster(String taskPath, ClusterCommand command) throws KeeperException, InterruptedException, IOException {
-    lockCluster(taskPath, command.getClusterManifest().getClusterName());
-    if(checkNodesInUse(command.getClusterManifest().getNodes())) {
-      LOG.error("Duplicated nodes detected in an existing cluster.");
-      failCommand(taskPath, command);
-      // failCommand already unlocks the cluster; this second unlock is a no-op.
-      unlockCluster(command.getClusterManifest().getClusterName());
-      return;
-    }
-    generateClusterPlan(taskPath, command);
-    String cmdStatusPath = ZookeeperUtil.getCommandStatusPath(taskPath);
-    runClusterActions(command.getClusterManifest(), cmdStatusPath);
-  }
-
-  private void updateCluster(String taskPath, ClusterCommand command) throws KeeperException, InterruptedException, IOException {
-    lockCluster(taskPath, command.getClusterManifest().getClusterName());
-    generateClusterPlan(taskPath, command);
-    String cmdStatusPath = ZookeeperUtil.getCommandStatusPath(taskPath);
-    runClusterActions(command.getClusterManifest(), cmdStatusPath);
-  }
-
-  private void deleteCluster(String taskPath, DeleteClusterCommand command) throws KeeperException, InterruptedException, IOException {
-    lockCluster(taskPath, command.getClusterManifest().getClusterName());
-    ClusterManifest cm = command.getClusterManifest();
-    String path = ZookeeperUtil.getClusterPath(cm.getClusterName());
-    byte[] data = zk.getData(path, null, null);
-    ClusterHistory history = JAXBUtil.read(data, ClusterHistory.class);
-    int index = history.getHistory().size()-1;
-    ClusterManifest currentCluster = history.getHistory().get(index);
-    cm.setNodes(currentCluster.getNodes());
-    generateClusterPlan(taskPath, command);
-    String cmdStatusPath = ZookeeperUtil.getCommandStatusPath(taskPath);
-    runClusterActions(command.getClusterManifest(), cmdStatusPath);
-  }
-  
-  private void generateClusterPlan(String cmdPath, ClusterCommand cmd) throws KeeperException, InterruptedException, IOException {
-    String cmdStatusPath = ZookeeperUtil.getCommandStatusPath(cmdPath);
-    Stat stat = zk.exists(cmdStatusPath, false);
-    if (stat != null) {
-      // plan already exists, let's pick up what's left from another controller
-      return;
-    }
-    // new create command
-    LOG.info("Generate command plan: " + cmdPath);
-    String startTime = new Date(System.currentTimeMillis()).toString();
-    CommandStatus cmdStatus = new CommandStatus(Status.STARTED, startTime);
-    
-    // Setup actions
-    List<ActionEntry> actionEntries = new LinkedList<ActionEntry>();
-
-    ClusterManifest cm = cmd.getClusterManifest();
-    cmdStatus.setClusterName(cm.getClusterName());
-    NodesManifest nm = cm.getNodes();
-    ConfigManifest configM = cm.getConfig();
-    for(Action action : configM.getActions()) {
-      // Find the host list for this action
-      Set<String> hosts;
-      if(action.getRole()==null) {
-        hosts = convertRolesToHosts(nm, null);
-      } else {
-        Set<String> role = new HashSet<String>();
-        role.add(action.getRole());
-        hosts = convertRolesToHosts(nm, role);
-      }
-      List<HostStatusPair> nodesList = setHostStatus(hosts, Status.UNQUEUED);
-
-      ActionEntry ae = new ActionEntry();
-      action.setCmdPath(cmdPath);
-      action.setActionId(actionCount.incrementAndGet());
-      ae.setHostStatus(nodesList);
-      List<ActionDependency> adList = action.getDependencies();
-      if(adList!=null) {
-        for(ActionDependency ad : adList) {
-          Set<String> roles = ad.getRoles();
-          Set<String> dependentHosts = convertRolesToHosts(nm, roles);
-          StringBuilder sb = new StringBuilder();
-          List<String> myhosts = new ArrayList<String>();
-          for(String host : dependentHosts) {
-            sb.append(CommonConfigurationKeys.ZOOKEEPER_CLUSTER_ROOT_DEFAULT);
-            sb.append("/");
-            sb.append(cm.getClusterName());
-            sb.append("/");
-            sb.append(host);
-            myhosts.add(sb.toString());
-            sb.delete(0, sb.length());
-          }
-          ad.setHosts(myhosts);
-        }
-      }
-
-      // If the action is a package action, resolve its packages from the software manifest
-      if(action instanceof PackageAction) {
-        SoftwareManifest sm = cm.getSoftware();
-        if(action.getRole()==null) {
-          // If no role is defined, install all the software in the software manifest
-          PackageInfo[] packages = convertRolesToPackages(sm, null);
-          ((PackageAction) action).setPackages(packages);
-        } else {
-          for(Role role : sm.getRoles()) {
-            if(role.getName().equals(action.getRole())) {
-              PackageInfo[] packages = convertRolesToPackages(sm, action.getRole());
-              ((PackageAction) action).setPackages(packages);
-            }
-          }       
-        }
-      }
-        
-      ae.setAction(action);
-      actionEntries.add(ae);
-    }
-    commitCommandPlan(cmdStatusPath, cmdStatus, actionEntries);    
-  }
-  
-  private void runClusterActions(ClusterManifest cm, String cmdStatusPath) throws KeeperException, InterruptedException, IOException {
-    CommandStatus cmdStatus = JAXBUtil.read(zk.getData(cmdStatusPath, false, null), CommandStatus.class);
-    String cluster = cmdStatus.getClusterName();
-    String clusterNode = ZookeeperUtil.getClusterPath(cluster);
-    try {
-      createAndWatchClusterNodes(cm, cmdStatusPath);
-    } catch(KeeperException e) {
-      LOG.debug(ExceptionUtil.getStackTrace(e));
-    }
-    queueActions(cmdStatusPath);
-    LOG.info("Issued actions for cluster [" + cluster
-        + "] with " + zk.exists(clusterNode, null).getNumChildren()
-        + " cluster nodes");
-  }
-  
-  private boolean isDependencySatisfied(String cmdStatusPath,
-      List<ActionDependency> dependencies) throws KeeperException,
-      InterruptedException, IOException {
-    if (dependencies == null) {
-      return true;
-    }
-    Stat stat = new Stat();
-    for (ActionDependency dep : dependencies) {
-      List<String> hosts = dep.getHosts();
-      List<StateEntry> deps = dep.getStates();
-      if (hosts == null || hosts.size() == 0 || deps == null
-          || deps.size() == 0) {
-        continue;
-      }
-      int satisfied = 0;
-      for (String host : hosts) {
-        MachineState state = JAXBUtil.read(zk.getData(host, this, stat),
-            MachineState.class);
-        if (state == null || state.getStates() == null
-            || !state.getStates().containsAll(deps)) {
-          /*
-           * Adding the cmd to watchedMachineNodes and return. We only add the
-           * cmd once. Note that whenever the watch is triggered we remove the
-           * mapping from watchedMachineNodes. This ensures that at any time
-           * there is at most one mapping in watchedMachineNodes containing this
-           * cmd as its value. Hence, at any time there will be at most one
-           * handler thread executing queueActions() for this cmd (i.e., when
-           * watch is triggered).
-           */
-          synchronized (watchedMachineNodes) {
-            Set<String> cmdStatusPaths = watchedMachineNodes.get(host);
-            if (cmdStatusPaths == null) {
-              cmdStatusPaths = new HashSet<String>();
-            }
-            cmdStatusPaths.add(cmdStatusPath);
-            watchedMachineNodes.put(host, cmdStatusPaths);
-          }
-        } else {
-          satisfied++;
-        }
-      }
-      // Cast to float: integer division would truncate the ratio to 0 or 1
-      // before the comparison against 0.5.
-      float confidenceLevel = (float) satisfied / hosts.size();
-      if(confidenceLevel < 0.5f) {
-        return false;
-      }
-    }
-    return true;
-  }
-  
-  private void queueActions(String cmdStatusPath) throws IOException,
-      KeeperException, InterruptedException {
-    LOG.info("try to queue actions for cmd " + cmdStatusPath);
-    Stat stat = new Stat();
-    CommandStatus cmdStatus = JAXBUtil.read(zk.getData(cmdStatusPath, false,
-        stat), CommandStatus.class);
-    // we queue actions and update their status one at a time. After each
-    // action is queued, we try to update its status (retry if necessary).
-    // If retry happens, we start over again and try to find actions that
-    // need to be issued.
-    boolean startOver = true;
-    while (startOver) {
-      startOver = false;
-      for (ActionEntry actionEntry : cmdStatus.getActionEntries()) {
-        //TODO needs to check if an actionEntry is already done
-        if (!isDependencySatisfied(cmdStatusPath, actionEntry.getAction().getDependencies())) {
-          LOG.info("dependency is not satified for actionId=" + actionEntry.getAction().getActionId());
-          return;
-        }
-        int actionId = actionEntry.getAction().getActionId();
-        for (HostStatusPair hsp : actionEntry.getHostStatus()) {
-          if (hsp.getStatus() == Status.UNQUEUED) {
-            // queue action
-            String actionNode = CommonConfigurationKeys.ZOOKEEPER_CLUSTER_ROOT_DEFAULT
-                + "/"
-                + cmdStatus.getClusterName()
-                + "/"
-                + hsp.getHost()
-                + AGENT_ACTION + "/" + "action-";
-            actionNode = zk.create(actionNode, JAXBUtil.write(actionEntry
-                .getAction()), Ids.OPEN_ACL_UNSAFE,
-                CreateMode.PERSISTENT_SEQUENTIAL);
-
-            // update status for queued action
-            String host = hsp.getHost();
-            hsp.setStatus(Status.QUEUED);
-            boolean retry = true;
-            while (retry) {
-              retry = false;
-              try {
-                stat = zk.setData(cmdStatusPath, JAXBUtil.write(cmdStatus),
-                    stat.getVersion());
-              } catch (KeeperException.BadVersionException e) {
-                LOG.info("version mismatch: expected=" + stat.getVersion()
-                    + " msg: " + e.getMessage());
-                // our copy is stale, we need to start over again after
-                // updating the current status
-                startOver = true;
-                cmdStatus = JAXBUtil.read(zk
-                    .getData(cmdStatusPath, false, stat), CommandStatus.class);
-                LOG.info("new version is " + stat.getVersion());
-                // find the item we want to update and check if it needs to be
-                // updated
-                boolean found = false;
-                for (ActionEntry actEntry : cmdStatus.getActionEntries()) {
-                  if (actEntry.getAction().getActionId() == actionId) {
-                    for (HostStatusPair hostStat : actEntry.getHostStatus()) {
-                      if (hostStat.getHost().equals(host)) {
-                        // only update the status when we are in unqueued
-                        // state
-                        if (hostStat.getStatus() == Status.UNQUEUED) {
-                          hostStat.setStatus(Status.QUEUED);
-                          retry = true;
-                        }
-                        found = true;
-                        break;
-                      }
-                    }
-                    if (found)
-                      break;
-                  }
-                }
-              }
-            }
-            LOG.info("Queued action " + actionNode);
-            if (startOver)
-              break;
-          }
-        }
-        if (startOver)
-          break;
-      }
-    }
-  }
-  
-  private void recursiveDelete(String path) throws KeeperException, InterruptedException {
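-    // ZooKeeper can only delete empty znodes, so remove children depth-first.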
-    List<String> children = zk.getChildren(path, null);
-    if (children.size() > 0) {
-      for (String child : children) {
-        recursiveDelete(path + "/" + child);
-      }
-    }
-    zk.delete(path, -1);
-  }
-  
-  private void deleteClusterInZookeeper(String cmdPath, DeleteClusterCommand cmd) 
-      throws KeeperException, InterruptedException, IOException {
-    String clusterName = cmd.getClusterManifest().getClusterName();
-    LOG.info("Starting COMMAND: " + cmd + " on " + cmdPath);
-    String cmdStatusPath = cmdPath + COMMAND_STATUS;
-    try {
-      String startTime = new Date(System.currentTimeMillis()).toString();
-      zk.create(cmdStatusPath, JAXBUtil.write(new CommandStatus(Status.STARTED,
-          startTime)), Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
-    } catch (KeeperException.NodeExistsException e) {
-      // cmd has been worked on
-    }
-    Stat stat = new Stat();
-    byte[] data = zk.getData(cmdStatusPath, false, stat);
-    CommandStatus cmdStatus = JAXBUtil.read(data, CommandStatus.class);
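-    // Idempotent: if another controller already completed this delete, stop here.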
-    if (cmdStatus.getStatus() == Status.SUCCEEDED) {
-      return;
-    }
-    String clusterPath = CommonConfigurationKeys.ZOOKEEPER_CLUSTER_ROOT_DEFAULT
-        + "/" + clusterName;
-    deleteIfExists(clusterPath);
-    cmdStatus.setEndTime(new Date(System.currentTimeMillis()).toString());
-    cmdStatus.setStatus(Status.SUCCEEDED);
-    zk.setData(cmdStatusPath, JAXBUtil.write(cmdStatus), stat.getVersion());
-    LOG.info("Deleted cluster " + clusterName);    
-  }
-  
-  private void deleteCluster(String cmdPath, DeleteCommand cmd)
-      throws KeeperException, InterruptedException, IOException {
-    DeleteClusterCommand delete = new DeleteClusterCommand();
-    ClusterManifest cm = new ClusterManifest();
-    cm.setClusterName(cmd.getClusterName());
-    delete.setClusterManifest(cm);
-    deleteClusterInZookeeper(cmdPath, delete);
-  }
-  
-  public Command getCommand(String cmdPath) throws KeeperException, InterruptedException, IOException {
-    Stat stat = new Stat();
-    Command cmd = JAXBUtil.read(zk.getData(cmdPath, false, stat), Command.class);
-    return cmd;
-  }
-
-  public synchronized void start() {
-    handlers = new Handler[handlerCount];
-    
-    for (int i = 0; i < handlerCount; i++) {
-      handlers[i] = new Handler(i);
-      handlers[i].start();
-    }
-  }
-  
-  /** Stops the service. */
-  public synchronized void stop() {
-    LOG.info("Stopping command handler");
-    running = false;
-    if (handlers != null) {
-      for (int i = 0; i < handlerCount; i++) {
-        if (handlers[i] != null) {
-          handlers[i].interrupt();
-        }
-      }
-    }
-    notifyAll();
-  }
-
-  /** Wait for the server to be stopped.
-   * Does not wait for all subthreads to finish.
-   *  See {@link #stop()}.
-   */
-  public synchronized void join() throws InterruptedException {
-    while (running) {
-      wait();
-    }
-  }
-  
-  /** Handles queued commands. */
-  private class Handler extends Thread {
-    public Handler(int instanceNumber) {
-      this.setDaemon(true);
-      this.setName("Command handler " + instanceNumber);
-    }
-
-    @Override
-    public void run() {
-      LOG.info(getName() + ": starting");
-
-      while (running) {
-        try {
-          handle();
-        } catch (InterruptedException e) {
-          if (running) { // unexpected -- log it
-            LOG.warn(getName() + " caught: " + ExceptionUtil.getStackTrace(e));
-          }
-        } catch (Exception e) {
-          LOG.warn(getName() + " caught: " + ExceptionUtil.getStackTrace(e));
-        }
-      }
-      LOG.info(getName() + ": exiting");
-    }
-  }
-}
diff --git a/controller/src/main/java/org/apache/hms/controller/Controller.java b/controller/src/main/java/org/apache/hms/controller/Controller.java
deleted file mode 100755
index 1719384..0000000
--- a/controller/src/main/java/org/apache/hms/controller/Controller.java
+++ /dev/null
@@ -1,266 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.controller;
-
-import java.io.File;
-import java.io.IOException;
-import java.util.Collection;
-import java.util.prefs.Preferences;
-
-import org.apache.commons.configuration.HierarchicalINIConfiguration;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hms.common.conf.CommonConfigurationKeys;
-import org.apache.hms.common.util.DaemonWatcher;
-import org.apache.hms.common.util.ExceptionUtil;
-import org.apache.hms.common.util.MulticastDNS;
-import org.apache.hms.common.util.ServiceDiscoveryUtil;
-import org.apache.hms.controller.ClientHandler;
-import org.apache.hms.controller.CommandHandler;
-import org.apache.zookeeper.CreateMode;
-import org.apache.zookeeper.KeeperException;
-import org.apache.zookeeper.WatchedEvent;
-import org.apache.zookeeper.Watcher;
-import org.apache.zookeeper.ZooKeeper;
-import org.apache.zookeeper.ZooDefs.Ids;
-import org.mortbay.jetty.Server;
-import org.mortbay.jetty.servlet.Context;
-import org.mortbay.jetty.servlet.DefaultServlet;
-import org.mortbay.jetty.servlet.ServletHolder;
-import org.mortbay.resource.Resource;
-import org.mortbay.resource.ResourceCollection;
-
-
-import com.sun.jersey.spi.container.servlet.ServletContainer;
-
-public class Controller implements Watcher {
-  private static Log LOG = LogFactory.getLog(Controller.class);
-  public static String CONTROLLER_PREFIX = "v1";
-  public static int CONTROLLER_PORT = 4080;
-  private static Controller instance = new Controller();
-  private Server server = null;
-  private String credential = null;
-  
-  private ZooKeeper zk;
-  private ClientHandler clientHandler;
-  private CommandHandler commandHandler;
-  public volatile boolean running = true; // true while controller runs
-  private String zookeeperAddress = CommonConfigurationKeys.ZOOKEEPER_ADDRESS_DEFAULT;
-
-  public static Controller getInstance() {
-    return instance;
-  }
-
-  public ZooKeeper getZKInstance() {
-    return this.zk;
-  }
-  
-  public ClientHandler getClientHandler() {
-    return clientHandler;
-  }
-  
-  public CommandHandler getCommandHandler() {
-    return commandHandler;
-  }
-  
-  public void process(WatchedEvent event) {
-    if (event.getType() == Event.EventType.None) {
-      // We are being told that the state of the connection has changed.
-      switch (event.getState()) {
-      case SyncConnected:
-        // Nothing to do: watches are automatically re-registered with the
-        // server, and any watches triggered while the client was
-        // disconnected will be delivered (in order, of course).
-        break;
-      case Expired:
-        // The session has expired; shut down the controller.
-        running = false;
-        commandHandler.stop();
-        break;
-      }
-    }
-  }
-  
-  public void parseConfig() {
-    StringBuilder confPath = new StringBuilder();
-    String confDir = System.getProperty("HMS_CONF_DIR");
-    if(confDir==null) {
-      confDir = "/etc/hms";
-    }
-    confPath.append(confDir);
-    confPath.append("/hms.ini");
-    try {
-      HierarchicalINIConfiguration ini = new HierarchicalINIConfiguration(confPath.toString());
-      zookeeperAddress = ini.getSection("zookeeper").getString("quorum", null);
-      String user = ini.getSection("zookeeper").getString("user", null);
-      String password = ini.getSection("zookeeper").getString("password", null);
-      if(user!=null && password!=null) {
-        credential = new StringBuilder().append(user).append(":").append(password).toString();
-      }
-    } catch (Exception e) {
-      LOG.warn("Invalid HMS configuration file: " + confPath);
-      zookeeperAddress = null;
-    }
-    LOG.info("ZooKeeper Quorum in "+confPath.toString()+": "+zookeeperAddress);
-  }
-  
-  // Resolve the list of zookeeper hosts from HMS beacons
-  public void initmDNS() {
-    try {
-      ServiceDiscoveryUtil sdu = new ServiceDiscoveryUtil(CommonConfigurationKeys.ZEROCONF_ZOOKEEPER_TYPE);
-      sdu.start();
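-      // Give mDNS beacons a few seconds to respond before collecting results.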
-      Thread.sleep(5000);
-      Collection<String> list = sdu.resolve();
-      if(list.size()>0) {
-        StringBuffer buf = new StringBuffer();
-        String delimiter = "";
-        for(String addr : list) {
-          buf.append(delimiter);
-          buf.append(addr);
-          delimiter = ",";
-        }
-        zookeeperAddress = buf.toString();
-      }
-      sdu.close();
-      if(zookeeperAddress.equals("")) {
-        throw new RuntimeException("Unknown ZooKeeper location.");
-      }
-      LOG.info("Discovered zookeeper location: "+zookeeperAddress);
-    } catch(Exception e) {
-      zookeeperAddress = CommonConfigurationKeys.ZOOKEEPER_ADDRESS_DEFAULT;
-      LOG.info("Use default zookeeper location: "+zookeeperAddress);
-    }
-  }
-  
-  public void start() {
-    try {
-      //System.out.close();
-      //System.err.close();
-      parseConfig();
-      if(zookeeperAddress == null) {
-        initmDNS();
-      }
-      run();
-    } catch(Exception e) {
-      LOG.error(ExceptionUtil.getStackTrace(e));
-      System.exit(-1);
-    }
-  }
-
-  private void createDirectory(String path) throws KeeperException, InterruptedException {
-    try {
-      zk.create(path, new byte[0], Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
-      if(credential!=null) {
-        zk.setACL(path, Ids.CREATOR_ALL_ACL, -1);
-      }
-      LOG.info("Created HMS cluster root at " + CommonConfigurationKeys.ZOOKEEPER_CLUSTER_ROOT_DEFAULT);        
-    } catch (KeeperException.NodeExistsException e) {
-    } catch (KeeperException.AuthFailedException e) {
-      LOG.warn("Failed to authenticate for "+path);
-    }
-  }
-  
-  private void initializeZooKeeper() throws KeeperException, InterruptedException, IOException {
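-    // 600000 ms session timeout; this controller is the default watcher for
-    // connection state events.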
-    zk = new ZooKeeper(zookeeperAddress, 600000, this);
-    if(credential!=null) {
-      zk.addAuthInfo("digest", credential.getBytes());
-    }
-    String[] list = {
-        CommonConfigurationKeys.ZOOKEEPER_CLUSTER_ROOT_DEFAULT,
-        CommonConfigurationKeys.ZOOKEEPER_COMMAND_QUEUE_PATH_DEFAULT,
-        CommonConfigurationKeys.ZOOKEEPER_LOCK_QUEUE_PATH_DEFAULT,
-        CommonConfigurationKeys.ZOOKEEPER_LIVE_CONTROLLER_PATH_DEFAULT,
-        CommonConfigurationKeys.ZOOKEEPER_NODES_MANIFEST_PATH_DEFAULT,
-        CommonConfigurationKeys.ZOOKEEPER_STATUS_QUEUE_PATH_DEFAULT
-    };
-    for(String path : list) {
-      createDirectory(path);
-    }
-  }
-  
-  public void run() {
-    try {
-      initializeZooKeeper();
-      LOG.info("Connected to ZooKeeper");
-      clientHandler = new ClientHandler(zk);
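-      // The second argument is assumed to be the number of handler threads
-      // started by CommandHandler.start().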
-      commandHandler = new CommandHandler(zk, 5);
-      commandHandler.start();
-    } catch (Exception e) {
-      LOG.error(ExceptionUtil.getStackTrace(e));
-    }
-    server = new Server(CONTROLLER_PORT);
-
-    try {
-      Context root = new Context(server, "/", Context.SESSIONS);
-      String HMS_HOME = System.getenv("HMS_HOME");
-      root.setBaseResource(new ResourceCollection(new Resource[]
-        {
-          Resource.newResource(HMS_HOME+"/webapps/")
-        }));
-      ServletHolder rootServlet = root.addServlet(DefaultServlet.class, "/");
-      rootServlet.setInitOrder(1);
-      
-      ServletHolder sh = new ServletHolder(ServletContainer.class);
-      sh.setInitParameter("com.sun.jersey.config.property.resourceConfigClass", "com.sun.jersey.api.core.PackagesResourceConfig");
-      sh.setInitParameter("com.sun.jersey.config.property.packages", "org.apache.hms.controller.rest");      
-      root.addServlet(sh, "/"+CONTROLLER_PREFIX+"/*");
-      sh.setInitOrder(2);
-      server.setStopAtShutdown(true);
-      server.start();
-    } catch (Exception e) {
-      LOG.error(ExceptionUtil.getStackTrace(e));
-    }
-  }
-  
-  public void stop() throws Exception {
-    try {
-      commandHandler.stop();
-      server.stop();
-    } catch (Exception e) {
-      LOG.error(ExceptionUtil.getStackTrace(e));
-    }
-  }
-  
-  /**
-   * Wait for service to finish.
-   * (Normally, it runs forever.)
-   */
-  public void join() {
-    try {
-      this.commandHandler.join();
-    } catch (InterruptedException ie) {
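-      // interrupted while waiting; fall through and return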
-    }
-  }
-
-  public static void main(String[] args) {
-    DaemonWatcher.createInstance(System.getProperty("PID"), 9100);
-    try {
-      Controller controller = Controller.getInstance();
-      if (controller != null) {
-        controller.start();
-        controller.join();
-      }
-    } catch(Throwable t) {
-      DaemonWatcher.bailout(1);
-    }
-  }
-
-}
diff --git a/controller/src/main/java/org/apache/hms/controller/rest/ClusterManager.java b/controller/src/main/java/org/apache/hms/controller/rest/ClusterManager.java
deleted file mode 100755
index b244396..0000000
--- a/controller/src/main/java/org/apache/hms/controller/rest/ClusterManager.java
+++ /dev/null
@@ -1,195 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.controller.rest;
-
-import java.io.IOException;
-import java.net.InetAddress;
-import java.net.URL;
-import java.net.UnknownHostException;
-import java.util.List;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.POST;
-import javax.ws.rs.Path;
-import javax.ws.rs.PathParam;
-import javax.ws.rs.QueryParam;
-import javax.ws.rs.WebApplicationException;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hms.common.entity.cluster.MachineState;
-import org.apache.hms.common.entity.command.CreateClusterCommand;
-import org.apache.hms.common.entity.command.CreateCommand;
-import org.apache.hms.common.entity.command.DeleteClusterCommand;
-import org.apache.hms.common.entity.command.DeleteCommand;
-import org.apache.hms.common.entity.command.StatusCommand;
-import org.apache.hms.common.entity.manifest.ClusterManifest;
-import org.apache.hms.common.entity.manifest.ConfigManifest;
-import org.apache.hms.common.entity.manifest.NodesManifest;
-import org.apache.hms.common.entity.manifest.SoftwareManifest;
-import org.apache.hms.common.entity.Response;
-import org.apache.hms.common.util.ExceptionUtil;
-import org.apache.hms.controller.Controller;
-
-@Path("cluster")
-public class ClusterManager {
-  private static String HOSTNAME;
-  private static String DEFAULT_URL;
-  
-  public ClusterManager() {
-    InetAddress addr;
-    try {
-      addr = InetAddress.getLocalHost();
-      HOSTNAME = addr.getHostName();
-    } catch (UnknownHostException e) {
-      HOSTNAME = "localhost";
-    }
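-    // Base URL of this controller's REST API; the sample manifest endpoints
-    // referenced below are resolved relative to it.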
-    StringBuilder buffer = new StringBuilder();
-    buffer.append("http://");
-    buffer.append(HOSTNAME);
-    buffer.append(":");
-    buffer.append(Controller.CONTROLLER_PORT);
-    buffer.append("/");
-    buffer.append(Controller.CONTROLLER_PREFIX);
-    DEFAULT_URL = buffer.toString();
-  }
-  
-  private static Log LOG = LogFactory.getLog(ClusterManager.class);
-  
-//  @POST
-//  @Path("create")
-//  public Response createCluster(CreateCommand cmd) {
-//    try {
-//      return Controller.getInstance().getClientHandler().createCluster(cmd);
-//    } catch (IOException e) {
-//      throw new WebApplicationException(e);
-//    }
-//  }
-//  
-//  @POST
-//  @Path("delete")
-//  public Response deleteCluster(DeleteCommand cmd) {
-//    try {
-//      LOG.info("received: " + cmd);
-//      Response r = Controller.getInstance().getClientHandler().deleteCluster(cmd);
-//      LOG.info("response is: " + r.getOutput());
-//      return r;
-//    } catch (IOException e) {
-//      LOG.warn("got excpetion: " + e);
-//      throw new WebApplicationException(e);
-//    }
-//  }
-  
-  @GET
-  @Path("status/{clusterId}")
-  public ClusterManifest checkStatus(@PathParam("clusterId") String clusterId) {
-    try {
-      return Controller.getInstance().getClientHandler().checkClusterStatus(clusterId);
-    } catch (IOException e) {
-      LOG.warn(ExceptionUtil.getStackTrace(e));
-      throw new WebApplicationException(e);
-    }
-  }
-
-  @GET
-  @Path("node/status")
-  public MachineState checkNodeStatus(@QueryParam("node") String nodeId) {
-    try {
-      return Controller.getInstance().getClientHandler().checkNodeStatus(nodeId);
-    } catch (IOException e) {
-      LOG.warn(ExceptionUtil.getStackTrace(e));
-      throw new WebApplicationException(e);
-    }
-  }
-
-  @GET
-  @Path("manifest/create-cluster-sample")
-  public CreateClusterCommand getCreateClusterSample(@QueryParam("expand") boolean expand, @QueryParam("name") String clusterName) {
-    try {
-      URL nodeUrl = new URL(DEFAULT_URL+"/nodes/manifest/sample");
-      URL softwareUrl = new URL(DEFAULT_URL+"/software/manifest/sample");
-      URL configUrl = new URL(DEFAULT_URL+"/config/manifest/create-hadoop-cluster");
-      CreateClusterCommand command = new CreateClusterCommand();
-      ClusterManifest cm = new ClusterManifest();
-      if(clusterName!=null) {
-        cm.setClusterName(clusterName);
-      }
-      NodesManifest nodesM = new NodesManifest();
-      nodesM.setUrl(nodeUrl);
-      cm.setNodes(nodesM);
-      SoftwareManifest softwareM = new SoftwareManifest();
-      softwareM.setUrl(softwareUrl);
-      cm.setSoftware(softwareM);
-      ConfigManifest configM = new ConfigManifest();
-      configM.setUrl(configUrl);
-      cm.setConfig(configM);
-      if (expand) {
-        cm.load();
-      }
-      command.setClusterManifest(cm);
-      return command;
-    } catch (IOException e) {
-      throw new WebApplicationException(e);
-    }
-  }
-  
-  @GET
-  @Path("manifest/delete-cluster-sample")
-  public DeleteClusterCommand getDeleteClusterSample(@QueryParam("expand") boolean expand, @QueryParam("name") String clusterName) {
-    try {
-      URL nodeUrl = new URL(DEFAULT_URL+"/nodes/manifest/sample");
-      URL softwareUrl = new URL(DEFAULT_URL+"/software/manifest/sample");
-      URL configUrl = new URL(DEFAULT_URL+"/config/manifest/delete-hadoop-cluster");
-
-      DeleteClusterCommand command = new DeleteClusterCommand();
-      ClusterManifest cm = new ClusterManifest();
-      if(clusterName!=null) {
-        cm.setClusterName(clusterName);
-      }
-      NodesManifest nodesM = new NodesManifest();
-      nodesM.setUrl(nodeUrl);
-      cm.setNodes(nodesM);
-      SoftwareManifest softwareM = new SoftwareManifest();
-      softwareM.setUrl(softwareUrl);
-      cm.setSoftware(softwareM);
-      ConfigManifest configM = new ConfigManifest();
-      configM.setUrl(configUrl);
-      cm.setConfig(configM);
-      if (expand) {
-        cm.load();
-      }
-      command.setClusterManifest(cm);
-      return command;
-    } catch (IOException e) {
-      throw new WebApplicationException(e);
-    }
-  }
-  
-  @GET
-  @Path("list")
-  public List<ClusterManifest> listClusters() {
-    try {
-      List<ClusterManifest> list = Controller.getInstance().getClientHandler().listClusters();
-      return list;
-    } catch(IOException e) {
-      throw new WebApplicationException(e);
-    }
-  }
-}
diff --git a/controller/src/main/java/org/apache/hms/controller/rest/CommandManager.java b/controller/src/main/java/org/apache/hms/controller/rest/CommandManager.java
deleted file mode 100755
index a5b00f5..0000000
--- a/controller/src/main/java/org/apache/hms/controller/rest/CommandManager.java
+++ /dev/null
@@ -1,65 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.controller.rest;
-
-import java.io.IOException;
-import java.util.List;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-import javax.ws.rs.PathParam;
-import javax.ws.rs.WebApplicationException;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hms.common.entity.command.Command;
-import org.apache.hms.common.entity.command.CommandStatus;
-import org.apache.hms.common.entity.command.StatusCommand;
-import org.apache.hms.common.util.ExceptionUtil;
-import org.apache.hms.controller.Controller;
-
-@Path("command")
-public class CommandManager {
-  private static Log LOG = LogFactory.getLog(CommandManager.class);
-  
-  @GET
-  @Path("status/{command}")
-  public CommandStatus checkStatus(@PathParam("command") String cmdId) {
-    StatusCommand cmd = new StatusCommand();
-    cmd.setCmdId(cmdId);
-    try {
-      CommandStatus status = Controller.getInstance().getClientHandler().checkCommandStatus(cmd);
-      return status;
-    } catch (IOException e) {
-      LOG.error(e.getMessage());
-      throw new WebApplicationException(404);
-    }
-  }
-  
-  @GET
-  @Path("list")
-  public List<Command> list() {
-    try {
-      List<Command> list = Controller.getInstance().getClientHandler().listCommand();
-      return list;
-    } catch (IOException e) {
-      LOG.error(ExceptionUtil.getStackTrace(e));
-      throw new WebApplicationException(500);
-    }
-  }
-}
diff --git a/controller/src/main/java/org/apache/hms/controller/rest/ConfigManager.java b/controller/src/main/java/org/apache/hms/controller/rest/ConfigManager.java
deleted file mode 100755
index 350ce33..0000000
--- a/controller/src/main/java/org/apache/hms/controller/rest/ConfigManager.java
+++ /dev/null
@@ -1,314 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.controller.rest;
-
-import java.util.ArrayList;
-import java.util.HashSet;
-import java.util.LinkedList;
-import java.util.List;
-import java.util.Set;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hms.common.entity.Status;
-import org.apache.hms.common.entity.action.Action;
-import org.apache.hms.common.entity.action.ActionDependency;
-import org.apache.hms.common.entity.action.DaemonAction;
-import org.apache.hms.common.entity.action.PackageAction;
-import org.apache.hms.common.entity.action.ScriptAction;
-import org.apache.hms.common.entity.cluster.MachineState.StateEntry;
-import org.apache.hms.common.entity.cluster.MachineState.StateType;
-import org.apache.hms.common.entity.manifest.ConfigManifest;
-
-
-@Path("config")
-public class ConfigManager {
-  private static Log LOG = LogFactory.getLog(ConfigManager.class);
-  
-  @GET
-  @Path("manifest/create-hadoop-cluster")
-  public ConfigManifest getSample() {
-    List<Action> actions = new ArrayList<Action>();
-
-    // Install software
-    PackageAction install = new PackageAction();
-    install.setActionType("install");
-    //install.setRole("namenode");
-    actions.add(install);
-    List<StateEntry> expectedInstallResults = new LinkedList<StateEntry>();
-    expectedInstallResults.add(new StateEntry(StateType.PACKAGE, "hadoop", Status.INSTALLED));
-    install.setExpectedResults(expectedInstallResults);
-    
-    // Install software
-    //install = new PackageAction();
-    //install.setActionType("install");
-    //install.setRole("datanode");
-    //actions.add(install);
-
-    // Install software
-    //install = new PackageAction();
-    //install.setActionType("install");
-    //install.setRole("jobtracker");
-    //actions.add(install);
-
-    // Install software
-    //install = new PackageAction();
-    //install.setActionType("install");
-    //install.setRole("tasktracker");
-    //actions.add(install);
-
-    // Generate Hadoop configuration
-    ScriptAction setupConfig = new ScriptAction();
-    setupConfig.setScript("/usr/sbin/hadoop-setup-conf.sh");
-    int i=0;
-    String[] parameters = new String[9];
-    parameters[i++] = "--namenode-url=hdfs://${namenode}:9000/";
-    parameters[i++] = "--jobtracker-url=${jobtracker}:9001";
-    parameters[i++] = "--conf-dir=/etc/hadoop";
-    parameters[i++] = "--hdfs-dir=/grid/0/hadoop/var/hdfs";
-    parameters[i++] = "--namenode-dir=/grid/0/hadoop/var/hdfs/namenode";
-    parameters[i++] = "--mapred-dir=/grid/0/tmp/mapred-local,/grid/1/tmp/mapred-local,/grid/2/tmp/mapred-local,/grid/3/tmp/mapred-local,/grid/4/tmp/mapred-local,/grid/5/tmp/mapred-local";
-    parameters[i++] = "--datanode-dir=/grid/0/hadoop/var/hdfs/data,/grid/1/hadoop/var/hdfs/data,/grid/2/hadoop/var/hdfs/data,/grid/3/hadoop/var/hdfs/data,/grid/4/hadoop/var/hdfs/data,/grid/5/hadoop/var/hdfs/data";
-    parameters[i++] = "--log-dir=/var/log/hadoop";
-    parameters[i++] = "--auto";
-    setupConfig.setParameters(parameters);
-    List<StateEntry> expectedConfigResults = new LinkedList<StateEntry>();
-    expectedConfigResults.add(new StateEntry(StateType.PACKAGE, "hadoop-config", Status.INSTALLED));
-    setupConfig.setExpectedResults(expectedConfigResults);
-    actions.add(setupConfig);
-    
-    // Format HDFS
-    ScriptAction setupHdfs = new ScriptAction();
-    setupHdfs.setRole("namenode");
-    setupHdfs.setScript("/usr/sbin/hadoop-setup-hdfs.sh");
-    String[] hdfsParameters = new String[2];
-    hdfsParameters[0]="-c";
-    hdfsParameters[1]="oxygen";
-    setupHdfs.setParameters(hdfsParameters);
-    // Setup dependencies       
-    List<ActionDependency> dep = new LinkedList<ActionDependency>();
-    Set<String> roles = new HashSet<String>();
-    List<StateEntry> states = new LinkedList<StateEntry>();
-    states.add(new StateEntry(StateType.PACKAGE, "hadoop-config", Status.INSTALLED));
-    roles.add("namenode");
-    roles.add("datanode");
-    roles.add("jobtracker");
-    roles.add("tasktracker");
-    dep.add(new ActionDependency(roles, states));
-    setupHdfs.setDependencies(dep);
-    // Setup expected result
-    List<StateEntry> expectedFormatResults = new LinkedList<StateEntry>();
-    expectedFormatResults.add(new StateEntry(StateType.DAEMON, "hadoop-namenode", Status.STARTED));
-    setupHdfs.setExpectedResults(expectedFormatResults);
-    actions.add(setupHdfs);
-    
-    // Start Datanodes
-    DaemonAction dataNodeAction = new DaemonAction();
-    dataNodeAction.setDaemonName("hadoop-datanode");
-    dataNodeAction.setActionType("start");
-    dataNodeAction.setRole("datanode");
-    // Setup namenode started dependencies
-    dep = new LinkedList<ActionDependency>();
-    roles = new HashSet<String>();
-    states = new LinkedList<StateEntry>();
-    states.add(new StateEntry(StateType.DAEMON, "hadoop-namenode", Status.STARTED));
-    roles.add("namenode");
-    dep.add(new ActionDependency(roles, states));
-    dataNodeAction.setDependencies(dep);
-    // Setup expected result
-    List<StateEntry> expectedDatanodeResults = new LinkedList<StateEntry>();
-    expectedDatanodeResults.add(new StateEntry(StateType.DAEMON, "hadoop-datanode", Status.STARTED));
-    dataNodeAction.setExpectedResults(expectedDatanodeResults);
-    actions.add(dataNodeAction);
-
-    // Start Jobtracker
-    DaemonAction jobTrackerAction = new DaemonAction();
-    jobTrackerAction.setDaemonName("hadoop-jobtracker");
-    jobTrackerAction.setActionType("start");
-    jobTrackerAction.setRole("jobtracker");
-    // Setup datanode started dependencies
-    dep = new LinkedList<ActionDependency>();
-    roles = new HashSet<String>();
-    states = new LinkedList<StateEntry>();
-    states.add(new StateEntry(StateType.DAEMON, "hadoop-datanode", Status.STARTED));
-    roles.add("datanode");
-    dep.add(new ActionDependency(roles, states));
-    jobTrackerAction.setDependencies(dep);
-    // Setup expected result
-    List<StateEntry> expectedJobtrackerResults = new LinkedList<StateEntry>();
-    expectedJobtrackerResults.add(new StateEntry(StateType.DAEMON, "hadoop-jobtracker", Status.STARTED));
-    jobTrackerAction.setExpectedResults(expectedJobtrackerResults);
-    actions.add(jobTrackerAction);
-    
-    // Start Tasktrackers
-    DaemonAction taskTrackerAction = new DaemonAction();
-    taskTrackerAction.setDaemonName("hadoop-tasktracker");
-    taskTrackerAction.setActionType("start");
-    taskTrackerAction.setRole("tasktracker");
-    // Setup tasktracker started dependencies
-    dep = new LinkedList<ActionDependency>();
-    roles = new HashSet<String>();
-    states = new LinkedList<StateEntry>();
-    states.add(new StateEntry(StateType.DAEMON, "hadoop-jobtracker", Status.STARTED));
-    roles.add("jobtracker");
-    dep.add(new ActionDependency(roles, states));
-    taskTrackerAction.setDependencies(dep);
-    // Setup expected result
-    List<StateEntry> expectedTasktrackerResults = new LinkedList<StateEntry>();
-    expectedTasktrackerResults.add(new StateEntry(StateType.DAEMON, "hadoop-tasktracker", Status.STARTED));
-    taskTrackerAction.setExpectedResults(expectedTasktrackerResults);
-    actions.add(taskTrackerAction);
-    
-    ConfigManifest cm = new ConfigManifest();
-    cm.setActions(actions);
-    return cm;
-  }
-  
-
-  @GET
-  @Path("manifest/delete-cluster")
-  public ConfigManifest getDestroyCluster() {
-    List<Action> actions = new ArrayList<Action>();
-    ScriptAction nuke = new ScriptAction();
-    nuke.setScript("killall");
-    int i=0;
-    String[] parameters = new String[2];
-    parameters[i++] = "java";
-    parameters[i++] = "|| exit 0";
-    nuke.setParameters(parameters);
-    actions.add(nuke);
-
-    ScriptAction nuke2 = new ScriptAction();
-    nuke2.setScript("killall");
-    i=0;
-    String[] jsvcParameters = new String[2];
-    jsvcParameters[i++] = "jsvc";
-    jsvcParameters[i++] = "|| exit 0";
-    nuke2.setParameters(jsvcParameters);
-    nuke2.setRole("datanode");
-    actions.add(nuke2);
-    
-    ScriptAction nukePackages = new ScriptAction();
-    nukePackages.setScript("rpm");
-    i=0;
-    String[] packagesParameters = new String[8];
-    packagesParameters[i++] = "-e";
-    packagesParameters[i++] = "hadoop";
-    packagesParameters[i++] = "||";
-    packagesParameters[i++] = "rpm";
-    packagesParameters[i++] = "-e";
-    packagesParameters[i++] = "hadoop-mapreduce";
-    packagesParameters[i++] = "hadoop-hdfs";
-    packagesParameters[i++] = "hadoop-common || rm -rf /home/hms/apps/*";
-    nukePackages.setParameters(packagesParameters);
-    actions.add(nukePackages);
-    ScriptAction scrub = new ScriptAction();
-    scrub.setScript("rm");
-    String[] scrubParameters = new String[2];
-    scrubParameters[0] = "-rf";
-    scrubParameters[1] = "/grid/[0-3]/hadoop/var";
-    scrub.setParameters(scrubParameters);
-    actions.add(scrub);
-    
-    ConfigManifest cm = new ConfigManifest();
-    cm.setActions(actions);
-    return cm;
-  }
-
-  @GET
-  @Path("manifest/delete-hadoop-cluster")
-  public ConfigManifest getDeleteCluster() {
-    List<StateEntry> states = new LinkedList<StateEntry>();
-    Set<String> roles = new HashSet<String>();
-    List<Action> actions = new ArrayList<Action>();
-    // Stop Tasktrackers
-    DaemonAction taskTrackerAction = new DaemonAction();
-    taskTrackerAction.setDaemonName("hadoop-tasktracker");
-    taskTrackerAction.setActionType("stop");
-    taskTrackerAction.setRole("tasktracker");
-    // Setup expected result
-    List<StateEntry> expectedTasktrackerResults = new LinkedList<StateEntry>();
-    expectedTasktrackerResults.add(new StateEntry(StateType.DAEMON, "hadoop-tasktracker", Status.STOPPED));
-    taskTrackerAction.setExpectedResults(expectedTasktrackerResults);
-    actions.add(taskTrackerAction);
-
-    // Stop Jobtracker
-    DaemonAction jobTrackerAction = new DaemonAction();
-    jobTrackerAction.setDaemonName("hadoop-jobtracker");
-    jobTrackerAction.setActionType("stop");
-    jobTrackerAction.setRole("jobtracker");
-    // Setup jobtracker stop dependencies
-    List<ActionDependency> dep = new LinkedList<ActionDependency>();
-    roles = new HashSet<String>();
-    states = new LinkedList<StateEntry>();
-    states.add(new StateEntry(StateType.DAEMON, "hadoop-jobtracker", Status.STARTED));
-    roles.add("jobtracker");
-    dep.add(new ActionDependency(roles, states));
-    jobTrackerAction.setDependencies(dep);
-    // Setup expected result
-    List<StateEntry> expectedJobtrackerResults = new LinkedList<StateEntry>();
-    expectedJobtrackerResults.add(new StateEntry(StateType.DAEMON, "hadoop-jobtracker", Status.STOPPED));
-    jobTrackerAction.setExpectedResults(expectedJobtrackerResults);
-    actions.add(jobTrackerAction);
-
-    // Stop Datanodes
-    DaemonAction datanodeAction = new DaemonAction();
-    datanodeAction.setDaemonName("hadoop-datanode");
-    datanodeAction.setActionType("stop");
-    datanodeAction.setRole("datanode");
-    // Setup datanode stop dependencies
-    dep = new LinkedList<ActionDependency>();
-    roles = new HashSet<String>();
-    states = new LinkedList<StateEntry>();
-    states.add(new StateEntry(StateType.DAEMON, "hadoop-datanode", Status.STARTED));
-    roles.add("datanode");
-    dep.add(new ActionDependency(roles, states));
-    datanodeAction.setDependencies(dep);
-    // Setup expected result
-    List<StateEntry> expectedDatanodeResults = new LinkedList<StateEntry>();
-    expectedDatanodeResults.add(new StateEntry(StateType.DAEMON, "hadoop-datanode", Status.STOPPED));
-    datanodeAction.setExpectedResults(expectedDatanodeResults);
-    actions.add(datanodeAction);
-
-    // Stop Namenode
-    DaemonAction namenodeAction = new DaemonAction();
-    namenodeAction.setDaemonName("hadoop-namenode");
-    namenodeAction.setActionType("stop");
-    namenodeAction.setRole("namenode");
-    // Setup namenode stop dependencies
-    dep = new LinkedList<ActionDependency>();
-    roles = new HashSet<String>();
-    states = new LinkedList<StateEntry>();
-    states.add(new StateEntry(StateType.DAEMON, "hadoop-namenode", Status.STARTED));
-    roles.add("namenode");
-    dep.add(new ActionDependency(roles, states));
-    namenodeAction.setDependencies(dep);
-    // Setup expected result
-    List<StateEntry> expectedNamenodeResults = new LinkedList<StateEntry>();
-    expectedNamenodeResults.add(new StateEntry(StateType.DAEMON, "hadoop-namenode", Status.STOPPED));
-    namenodeAction.setExpectedResults(expectedNamenodeResults);
-    actions.add(namenodeAction);
-    ConfigManifest cm = new ConfigManifest();
-    cm.setActions(actions);
-    return cm;
-  }
-}
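Note on the removed manifest builders: the four daemon-start blocks above differ only in daemon name, role, and upstream dependency, which is what made the copy-paste comment slips easy to introduce. A minimal sketch of how the chain could be built with one helper, assuming the entity classes this commit deletes (DaemonAction, ActionDependency, StateEntry, StateType, Status) keep the constructors and setters used above; their imports are omitted because the exact package paths are not shown in this diff:

```java
import java.util.HashSet;
import java.util.LinkedList;
import java.util.List;
import java.util.Set;

public class StartChain {
  /** Builds a "start daemon X on role Y once the upstream daemon is STARTED" action. */
  static DaemonAction startAfter(String daemon, String role,
                                 String upstreamDaemon, String upstreamRole) {
    DaemonAction action = new DaemonAction();
    action.setDaemonName(daemon);
    action.setActionType("start");
    action.setRole(role);

    // Dependency: wait for the upstream daemon to report STARTED on its role.
    Set<String> roles = new HashSet<String>();
    roles.add(upstreamRole);
    List<StateEntry> states = new LinkedList<StateEntry>();
    states.add(new StateEntry(StateType.DAEMON, upstreamDaemon, Status.STARTED));
    List<ActionDependency> deps = new LinkedList<ActionDependency>();
    deps.add(new ActionDependency(roles, states));
    action.setDependencies(deps);

    // Expected result: this daemon itself reaches STARTED.
    List<StateEntry> expected = new LinkedList<StateEntry>();
    expected.add(new StateEntry(StateType.DAEMON, daemon, Status.STARTED));
    action.setExpectedResults(expected);
    return action;
  }
}
```

With such a helper, the datanode, jobtracker, and tasktracker blocks above each reduce to a single call like actions.add(startAfter("hadoop-datanode", "datanode", "hadoop-namenode", "namenode")).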
diff --git a/controller/src/main/java/org/apache/hms/controller/rest/ControllerManager.java b/controller/src/main/java/org/apache/hms/controller/rest/ControllerManager.java
deleted file mode 100755
index f445a1f..0000000
--- a/controller/src/main/java/org/apache/hms/controller/rest/ControllerManager.java
+++ /dev/null
@@ -1,162 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.controller.rest;
-
-import java.io.IOException;
-
-import javax.ws.rs.Consumes;
-import javax.ws.rs.DELETE;
-import javax.ws.rs.GET;
-import javax.ws.rs.POST;
-import javax.ws.rs.PUT;
-import javax.ws.rs.Path;
-import javax.ws.rs.PathParam;
-import javax.ws.rs.WebApplicationException;
-import javax.ws.rs.core.MediaType;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hms.common.conf.CommonConfigurationKeys;
-import org.apache.hms.common.entity.Response;
-import org.apache.hms.common.entity.command.Command;
-import org.apache.hms.common.entity.command.CommandStatus;
-import org.apache.hms.common.entity.command.CreateClusterCommand;
-import org.apache.hms.common.entity.command.DeleteClusterCommand;
-import org.apache.hms.common.entity.command.StatusCommand;
-import org.apache.hms.common.entity.command.UpgradeClusterCommand;
-import org.apache.hms.common.util.ExceptionUtil;
-import org.apache.hms.controller.ClientHandler;
-import org.apache.hms.controller.CommandHandler;
-import org.apache.hms.controller.Controller;
-
-@Path("controller")
-public class ControllerManager {
-  private static Log LOG = LogFactory.getLog(ControllerManager.class);
-
-  @GET
-  @Path("command/status/{command}")
-  public CommandStatus checkCommandStatus(@PathParam("command") String cmdId) {
-    StatusCommand cmd = new StatusCommand();
-    cmd.setCmdId(cmdId);
-    try {
-      return Controller.getInstance().getClientHandler().checkCommandStatus(cmd);
-    } catch (IOException e) {
-      throw new WebApplicationException(404);
-    }
-  }
-
-//  @GET
-//  @Path("cluster/status/{clusterName}")
-//  public Response checkClusterStatus(@PathParam("clusterName") String cmdId) {
-//    StatusCommand cmd = new StatusCommand();
-//    cmd.setCmdId(cmdId);
-//    try {
-//      return Controller.getInstance().getClientHandler().checkStatus(cmd);
-//    } catch (IOException e) {
-//      throw new WebApplicationException(e);
-//    }
-//  }
-  
-  @POST
-  @Consumes(MediaType.APPLICATION_JSON)
-  @Path("create/cluster")
-  public Response createCluster(CreateClusterCommand cmd) {
-    try {
-      Controller ci = Controller.getInstance();
-      ClientHandler ch = ci.getClientHandler();
-      if(ch==null) {
-        LOG.error("ClientHandler is empty");
-      }
-      String path=ch.queueCmd(cmd);
-      Response r = new Response();
-      r.setOutput(path);
-      return r;
-    } catch (Exception e) {
-      LOG.error(ExceptionUtil.getStackTrace(e));
-      throw new WebApplicationException(e);
-    }
-  }
-  
-  @POST
-  @Consumes(MediaType.APPLICATION_JSON)
-  @Path("delete/cluster")
-  public Response deleteCluster(DeleteClusterCommand cmd) {
-    try {
-      Controller ci = Controller.getInstance();
-      ClientHandler ch = ci.getClientHandler();
-      if(ch==null) {
-        LOG.error("ClientHandler is empty");
-      }
-      String path=ch.queueCmd(cmd);
-      Response r = new Response();
-      r.setOutput(path);
-      return r;
-    } catch (Exception e) {
-      LOG.error(ExceptionUtil.getStackTrace(e));
-      throw new WebApplicationException(e);
-    }
-  }
-
-  @POST
-  @Consumes(MediaType.APPLICATION_JSON)
-  @Path("upgrade/cluster")
-  public Response upgradeCluster(UpgradeClusterCommand cmd) {
-    try {
-      Controller ci = Controller.getInstance();
-      ClientHandler ch = ci.getClientHandler();
-      if(ch==null) {
-        LOG.error("ClientHandler is empty");
-      }
-      String path=ch.queueCmd(cmd);
-      Response r = new Response();
-      r.setOutput(path);
-      return r;
-    } catch (Exception e) {
-      LOG.error(ExceptionUtil.getStackTrace(e));
-      throw new WebApplicationException(e);
-    }
-  }
-  
-  @DELETE
-  @Path("abort/{command}")
-  public Response abortCommand(@PathParam("command") String cmdId) {
-    try {
-      Controller ci = Controller.getInstance();
-      CommandHandler ch = ci.getCommandHandler();
-      if(ch==null) {
-        LOG.error("ClientHandler is empty");
-      }
-      String cmdPath = CommonConfigurationKeys.ZOOKEEPER_COMMAND_QUEUE_PATH_DEFAULT + "/" + cmdId;
-      Command cmd = ch.getCommand(cmdPath);
-      ch.failCommand(cmdPath, cmd);
-      Response r = new Response();
-      r.setOutput(cmdId + " is aborted.");
-      return r;
-    } catch(Exception e) {
-      LOG.error(ExceptionUtil.getStackTrace(e));
-      throw new WebApplicationException(e);      
-    }
-  }
-  
-  @DELETE
-  @Path("delete/{command}")
-  public Response deleteCommand(@PathParam("command") String cmdId) {
-    return null;    
-  }
-}
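For reference, a hedged sketch of how a client would have polled the checkCommandStatus endpoint being removed here. The host, the port 4080, and the command id are placeholders, not values taken from this commit; the /v1 prefix comes from web.xml's servlet-mapping:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class CommandStatusPoller {
  public static void main(String[] args) throws Exception {
    String cmdId = args.length > 0 ? args[0] : "cmd-0001";    // hypothetical id
    URL url = new URL("http://localhost:4080/v1/controller/command/status/" + cmdId);
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestProperty("Accept", "application/json");    // JAXB entity rendered as JSON
    if (conn.getResponseCode() == 404) {                      // unknown command id
      System.err.println("no such command: " + cmdId);
      return;
    }
    BufferedReader in =
        new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"));
    for (String line; (line = in.readLine()) != null; ) {
      System.out.println(line);                               // CommandStatus payload
    }
    in.close();
  }
}
```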
diff --git a/controller/src/main/java/org/apache/hms/controller/rest/NodesManager.java b/controller/src/main/java/org/apache/hms/controller/rest/NodesManager.java
deleted file mode 100755
index 4a95e95..0000000
--- a/controller/src/main/java/org/apache/hms/controller/rest/NodesManager.java
+++ /dev/null
@@ -1,192 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.controller.rest;
-
-import java.net.URI;
-import java.net.URL;
-import java.util.ArrayList;
-import java.util.List;
-
-import javax.ws.rs.DELETE;
-import javax.ws.rs.GET;
-import javax.ws.rs.POST;
-import javax.ws.rs.PUT;
-import javax.ws.rs.Path;
-import javax.ws.rs.PathParam;
-import javax.ws.rs.WebApplicationException;
-import javax.ws.rs.core.Context;
-import javax.ws.rs.core.UriInfo;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hms.common.conf.CommonConfigurationKeys;
-import org.apache.hms.common.entity.Response;
-import org.apache.hms.common.entity.manifest.Node;
-import org.apache.hms.common.entity.manifest.NodesManifest;
-import org.apache.hms.common.entity.manifest.Role;
-import org.apache.hms.common.util.ExceptionUtil;
-import org.apache.hms.common.util.HostUtil;
-import org.apache.hms.common.util.JAXBUtil;
-import org.apache.hms.common.util.ZookeeperUtil;
-import org.apache.hms.controller.Controller;
-import org.apache.zookeeper.CreateMode;
-import org.apache.zookeeper.KeeperException;
-import org.apache.zookeeper.ZooKeeper;
-import org.apache.zookeeper.ZooDefs.Ids;
-import org.apache.zookeeper.data.Stat;
-
-@Path("nodes")
-public class NodesManager {
-  private static Log LOG = LogFactory.getLog(NodesManager.class);
-  
-  @GET
-  @Path("manifest/sample")
-  public NodesManifest getSample() {
-    String[] hosts = { "localhost" };
-    NodesManifest n = new NodesManifest();
-    List<Role> roles = new ArrayList<Role>();
-    Role role = new Role();
-    role.setName("namenode");
-    String[] namenode = { "hrt8n37.cc1.ygridcore.net" };
-    role.setHosts(namenode);
-    roles.add(role);
-    role = new Role();
-    role.setName("datanode");
-    String[] datanode = { "hrt8n38.cc1.ygridcore.net", "hrt8n39.cc1.ygridcore.net" };
-    role.setHosts(datanode);
-    roles.add(role);
-    role = new Role();
-    role.setName("jobtracker");
-    String[] jobtracker = { "hrt8n37.cc1.ygridcore.net" };
-    role.setHosts(jobtracker);
-    roles.add(role);
-    role = new Role();
-    role.setName("tasktracker");
-    String[] tasktracker = { "hrt8n38.cc1.ygridcore.net", "hrt8n39.cc1.ygridcore.net" };
-    role.setHosts(tasktracker);
-    roles.add(role);
-    n.setNodes(roles);
-    return n;
-  }
-  
-  @GET
-  @Path("manifest")
-  public List<NodesManifest> getList() {
-    List<NodesManifest> list = new ArrayList<NodesManifest>();
-    try {
-      ZooKeeper zk = Controller.getInstance().getZKInstance();
-      List<String> nodes = zk.getChildren(CommonConfigurationKeys.ZOOKEEPER_NODES_MANIFEST_PATH_DEFAULT, false);
-      Stat current = new Stat();
-      for(String nodeList : nodes) {
-        byte[] data = zk.getData(ZookeeperUtil.getNodesManifestPath(nodeList), false, current);
-        NodesManifest x = JAXBUtil.read(data, NodesManifest.class);
-        list.add(x);
-      }
-    } catch(Exception e) {
-      LOG.error(ExceptionUtil.getStackTrace(e));
-      throw new WebApplicationException(500);
-    }
-    return list;
-  }
-  
-  @GET
-  @Path("manifest/{id}")
-  public NodesManifest get(@PathParam("id") String id) {
-    try {
-      ZooKeeper zk = Controller.getInstance().getZKInstance();
-      Stat current = new Stat();
-      String path = ZookeeperUtil.getNodesManifestPath(id);
-      byte[] data = zk.getData(path, false, current);
-      NodesManifest result = JAXBUtil.read(data, NodesManifest.class);
-      return result;
-    } catch(KeeperException.NoNodeException e) {
-      throw new WebApplicationException(404);
-    } catch(Exception e) {
-      LOG.error(ExceptionUtil.getStackTrace(e));
-      throw new WebApplicationException(500);
-    }
-  }
-  
-  @POST
-  @Path("manifest")
-  public Response createManifest(@Context UriInfo uri, NodesManifest newManifest) {
-    ZooKeeper zk = Controller.getInstance().getZKInstance();
-    List<Role> roles = newManifest.getRoles();
-    List<Role> testedRoles = new ArrayList<Role>();
-    for(Role role : roles) {
-      String[] hosts = role.getHosts();
-      ArrayList<String> list = new ArrayList<String>(); 
-      for(String host : hosts) {
-        String[] parts = host.split(",");
-        HostUtil util = new HostUtil(parts);
-        list.addAll(util.generate());
-      }
-      role.setHosts(list.toArray(new String[list.size()]));
-      testedRoles.add(role);
-    }
-    newManifest.setNodes(testedRoles);
-    String[] parts = newManifest.getUrl().toString().split("/");
-    String label = ZookeeperUtil.getNodesManifestPath(parts[parts.length -1]);
-    try {
-      byte[] data = JAXBUtil.write(newManifest);      
-      String id = zk.create(label, data, Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
-      Response r = new Response();
-      r.setOutput(id);
-      return r;
-    } catch(Exception e) {
-      LOG.error(ExceptionUtil.getStackTrace(e));
-      throw new WebApplicationException(500);
-    }
-  }
-  
-  @PUT
-  @Path("manifest")
-  public Response updateManifest(NodesManifest updates) {
-    ZooKeeper zk = Controller.getInstance().getZKInstance();
-    try {
-      byte[] data = JAXBUtil.write(updates);
-      String id = ZookeeperUtil.getBaseURL(updates.getUrl().toString());
-      Stat stat = zk.exists(CommonConfigurationKeys.ZOOKEEPER_NODES_MANIFEST_PATH_DEFAULT+'/'+id, false);
-      zk.setData(CommonConfigurationKeys.ZOOKEEPER_NODES_MANIFEST_PATH_DEFAULT+'/'+id, data, stat.getVersion());
-      Response r = new Response();
-      r.setOutput("Update successful.");
-      return r;
-    } catch(Exception e) {
-      LOG.error(ExceptionUtil.getStackTrace(e));
-      throw new WebApplicationException(500);
-    }
-  }
-  
-  @DELETE
-  @Path("manifest/{id}")
-  public Response deleteManifest(@PathParam("id") String id) {
-    ZooKeeper zk = Controller.getInstance().getZKInstance();
-    try {
-      String path = ZookeeperUtil.getNodesManifestPath(id);
-      Stat current = zk.exists(path, false);
-      zk.delete(path, current.getVersion());
-    } catch(Exception e) {
-      LOG.error(ExceptionUtil.getStackTrace(e));
-      throw new WebApplicationException(500);      
-    }
-    Response r = new Response();
-    r.setOutput("Node list: " + id + " deleted.");
-    return r;
-  }
-}
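createManifest above splits each host entry on commas and hands the parts to HostUtil.generate() to expand range expressions into concrete hostnames. A self-contained sketch of that kind of expansion; the bracket grammar "node[0-3].example.com" is an assumption for illustration, not HostUtil's documented syntax:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HostRangeExpander {
  // One numeric range per expression, e.g. "node[0-3].example.com".
  private static final Pattern RANGE = Pattern.compile("\\[(\\d+)-(\\d+)\\]");

  public static List<String> expand(String expression) {
    String expr = expression.trim();           // host lists often carry stray blanks
    List<String> hosts = new ArrayList<String>();
    Matcher m = RANGE.matcher(expr);
    if (!m.find()) {                           // plain hostname: nothing to expand
      hosts.add(expr);
      return hosts;
    }
    int lo = Integer.parseInt(m.group(1));
    int hi = Integer.parseInt(m.group(2));
    for (int i = lo; i <= hi; i++) {
      hosts.add(expr.substring(0, m.start()) + i + expr.substring(m.end()));
    }
    return hosts;
  }

  public static void main(String[] args) {
    // Prints node0.example.com through node3.example.com, one per line.
    for (String h : expand("node[0-3].example.com")) {
      System.out.println(h);
    }
  }
}
```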
diff --git a/controller/src/main/java/org/apache/hms/controller/rest/SoftwareManager.java b/controller/src/main/java/org/apache/hms/controller/rest/SoftwareManager.java
deleted file mode 100755
index 9e42c35..0000000
--- a/controller/src/main/java/org/apache/hms/controller/rest/SoftwareManager.java
+++ /dev/null
@@ -1,71 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hms.controller.rest;
-
-import java.util.ArrayList;
-import java.util.List;
-
-import javax.ws.rs.GET;
-import javax.ws.rs.Path;
-
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.hms.common.entity.manifest.PackageInfo;
-import org.apache.hms.common.entity.manifest.Role;
-import org.apache.hms.common.entity.manifest.SoftwareManifest;
-
-@Path("software")
-public class SoftwareManager {
-  private static Log LOG = LogFactory.getLog(SoftwareManager.class);
-  
-  @GET
-  @Path("manifest/sample")
-  public SoftwareManifest getSample() {
-    SoftwareManifest sm = new SoftwareManifest();
-    sm.setName("hadoop");
-    sm.setVersion("0.23");
-    PackageInfo[] packages = new PackageInfo[1];
-    packages[0]= new PackageInfo();
-    packages[0].setName("http://hrt8n36.cc1.ygridcore.net/hadoop-0.23.torrent");
-
-    List<Role> roles = new ArrayList<Role>();
-    Role role = new Role();
-    role.setName("namenode");
-    role.setPackages(packages);
-    roles.add(role);
-    
-    role = new Role();
-    role.setName("datanode");
-    role.setPackages(packages);
-    roles.add(role);
-    
-    role = new Role();
-    role.setName("jobtracker");
-    role.setPackages(packages);
-    roles.add(role);
-    
-    role = new Role();
-    role.setName("tasktracker");
-    role.setPackages(packages);
-    roles.add(role);
-
-    sm.setRoles(roles);
-    return sm;
-  }
-}
diff --git a/controller/src/main/resources/WEB-INF/jetty.xml b/controller/src/main/resources/WEB-INF/jetty.xml
index bcd16ca..f54be49 100644
--- a/controller/src/main/resources/WEB-INF/jetty.xml
+++ b/controller/src/main/resources/WEB-INF/jetty.xml
@@ -1,4 +1,22 @@
 <?xml version="1.0"?>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
 <!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN" "http://jetty.mortbay.org/configure.dtd">
 
 <!-- =============================================================== -->
@@ -51,8 +69,8 @@
             <Set name="Acceptors">2</Set>
             <Set name="statsOn">false</Set>
             <Set name="confidentialPort">8443</Set>
-	    <Set name="lowResourcesConnections">5000</Set>
-	    <Set name="lowResourcesMaxIdleTime">5000</Set>
+            <Set name="lowResourcesConnections">5000</Set>
+            <Set name="lowResourcesMaxIdleTime">5000</Set>
           </New>
       </Arg>
     </Call>
@@ -152,9 +170,9 @@
         <New class="org.mortbay.jetty.deployer.WebAppDeployer">
           <Set name="contexts"><Ref id="Contexts"/></Set>
           <Set name="webAppDir"><SystemProperty name="HMS_HOME" default="."/>/webapps</Set>
-	  <Set name="parentLoaderPriority">false</Set>
-	  <Set name="extract">false</Set>
-	  <Set name="allowDuplicates">false</Set>
+          <Set name="parentLoaderPriority">false</Set>
+          <Set name="extract">false</Set>
+          <Set name="allowDuplicates">false</Set>
         </New>
       </Arg>
     </Call>
@@ -188,4 +206,19 @@
     <Set name="sendDateHeader">true</Set>
     <Set name="gracefulShutdown">1000</Set>
 
+    <!-- =========================================================== -->
+    <!-- shared key authentication                                   -->
+    <!-- =========================================================== -->
+    <Set name="UserRealms">
+      <Array type="org.mortbay.jetty.security.UserRealm">
+        <Item>
+          <New class="org.mortbay.jetty.security.HashUserRealm">
+           <Set name="name">Auth</Set>
+           <Set name="config"><SystemProperty name="AMBARI_CONF_DIR" default="."/>/auth.conf</Set>
+           <Set name="refreshInterval">0</Set>
+          </New>
+        </Item>
+      </Array>
+    </Set>
+
 </Configure>
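The new HashUserRealm reads its users from ${AMBARI_CONF_DIR}/auth.conf. Jetty 6's HashUserRealm expects one `username: password[,rolename ...]` entry per line, so a hypothetical file granting the "user" role required by web.xml below could look like:

```
# Hypothetical ${AMBARI_CONF_DIR}/auth.conf for the realm above.
# Jetty 6 HashUserRealm format: username: password[,rolename ...]
admin: admin-secret,user
```

Note that this realm is named Auth while web.xml's <login-config> below names Controller; if Jetty resolves realms strictly by name the two would need to agree, so treat the entry above as illustrative only.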
diff --git a/controller/src/main/resources/WEB-INF/web.xml b/controller/src/main/resources/WEB-INF/web.xml
index 0061233..7d2f70c 100644
--- a/controller/src/main/resources/WEB-INF/web.xml
+++ b/controller/src/main/resources/WEB-INF/web.xml
@@ -1,22 +1,39 @@
 <?xml version="1.0" encoding="ISO-8859-1"?>
 
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
 <web-app xmlns="http://java.sun.com/xml/ns/javaee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
    version="2.5"> 
 
     <description>
-      HMS Controller
+      Ambari Controller
     </description>
-    <display-name>HMS Controller</display-name>
+    <display-name>Ambari Controller</display-name>
 
     <servlet>
       <servlet-name>REST_API</servlet-name>
       <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer
       </servlet-class>
       <init-param>
-	<param-name>com.sun.jersey.config.property.packages</param-name>
-	<param-value>org.apache.hms.controller.rest</param-value>
+        <param-name>com.sun.jersey.config.property.packages</param-name>
+        <param-value>org.apache.ambari.controller.rest</param-value>
       </init-param>
       <load-on-startup>1</load-on-startup>
     </servlet>
@@ -25,4 +42,22 @@
      <url-pattern>/v1/*</url-pattern>
     </servlet-mapping>    
 
+    <security-role>
+    <role-name>user</role-name>
+    </security-role>
+
+    <login-config>
+    <realm-name>Controller</realm-name>
+    </login-config>
+
+    <security-constraint>
+    <web-resource-collection>
+    <web-resource-name>Controller Resource</web-resource-name>
+    <url-pattern>/v1/controller/*</url-pattern>
+    </web-resource-collection>
+
+    <auth-constraint>
+    <role-name>user</role-name>
+    </auth-constraint>
+    </security-constraint>
 </web-app>
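web.xml now constrains /v1/controller/* to the "user" role but declares no <auth-method>, so the servlet default of BASIC is assumed here. A sketch of a client call with Basic credentials; the host, port, and the admin/admin-secret pair are placeholders tied to the hypothetical auth.conf shown earlier:

```java
import java.net.HttpURLConnection;
import java.net.URL;
import javax.xml.bind.DatatypeConverter;

public class AuthenticatedCall {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://localhost:4080/v1/controller/command/status/cmd-0001");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    // Credentials matching the hypothetical auth.conf entry shown earlier.
    String token =
        DatatypeConverter.printBase64Binary("admin:admin-secret".getBytes("UTF-8"));
    conn.setRequestProperty("Authorization", "Basic " + token);
    // Without the header the container answers 401 for /v1/controller/*.
    System.out.println("HTTP " + conn.getResponseCode());
  }
}
```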

diff --git a/controller/src/main/resources/application-agent-doc.xml b/controller/src/main/resources/application-agent-doc.xml
new file mode 100644
index 0000000..e32c835
--- /dev/null
+++ b/controller/src/main/resources/application-agent-doc.xml
@@ -0,0 +1,27 @@
+<!--
+
+ Licensed to the Apache Software Foundation (ASF) under one
+ or more contributor license agreements.  See the NOTICE file
+ distributed with this work for additional information
+ regarding copyright ownership.  The ASF licenses this file
+ to you under the Apache License, Version 2.0 (the
+ "License"); you may not use this file except in compliance
+ with the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+-->
+
+<applicationDocs targetNamespace="http://research.sun.com/wadl/2006/10">
+
+    <doc xml:lang="en" title="Ambari Agent REST API">
+        Ambari Agent REST API.
+    </doc>
+
+</applicationDocs>
diff --git a/controller/src/main/resources/application-doc.xml b/controller/src/main/resources/application-doc.xml
new file mode 100644
index 0000000..bcce4f8
--- /dev/null
+++ b/controller/src/main/resources/application-doc.xml
@@ -0,0 +1,55 @@
+<!--
+
+ Licensed to the Apache Software Foundation (ASF) under one
+ or more contributor license agreements.  See the NOTICE file
+ distributed with this work for additional information
+ regarding copyright ownership.  The ASF licenses this file
+ to you under the Apache License, Version 2.0 (the
+ "License"); you may not use this file except in compliance
+ with the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+-->
+
+<applicationDocs targetNamespace="http://research.sun.com/wadl/2006/10">
+
+<doc xml:lang="en" title="Ambari REST API">
+
+<p>
+   Ambari provides rich REST interfaces that allow the creation, 
+   modification, querying, and deletion of stacks and clusters. The
+   primary resources are: 
+   <ul>
+   <li><a href="index.html#Stacks">Stacks</a> - definition of which 
+       components should be deployed and how they should be 
+       <a href="index.html#Configuration">configured</a>.</li>
+   <li><a href="index.html#Clusters">Clusters</a> - combination of a stack
+       and nodes to run Hadoop</li>
+   <li>Nodes - the machines managed by Ambari</li>
+   </ul>
+   Each is represented by a top level resource, which is a container,
+   and nested resources for each instance.
+</p><br/>
+<p>
+   The resources and the entities that are passed to them are defined
+   using JAXB and are represented in either XML or JSON formats
+   depending on the ContentType and Accept HTTP headers. The definition
+   of the types is given in the <a
+   href="apidocs/org/apache/ambari/common/rest/entities/package-summary.html">
+   JavaDoc</a>.
+</p><br/>
+<p>
+   Typical usage would be to create a new stack derived from a pre-defined
+   one and change the necessary configuration parameters. Then create a
+   cluster based on the stack by assigning nodes and marking it active.
+</p>
+</doc>
+
+</applicationDocs>
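The paragraph above says the entities are JAXB-defined and rendered as XML or JSON depending on the HTTP headers. A minimal sketch of such an entity; the class and field names are illustrative, not one of Ambari's real rest entities:

```java
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlAttribute;
import javax.xml.bind.annotation.XmlRootElement;

// Illustrative entity only; Ambari's real entities live in
// org.apache.ambari.common.rest.entities (see the JavaDoc link above).
@XmlRootElement(name = "node")
@XmlAccessorType(XmlAccessType.FIELD)
public class NodeSummary {
  @XmlAttribute private String name;
  @XmlAttribute private String role;

  public NodeSummary() {}                      // JAXB requires a no-arg constructor

  public NodeSummary(String name, String role) {
    this.name = name;
    this.role = role;
  }
}
```

Served through Jersey's JAXB support, a request with Accept: application/xml yields <node name="node00" role="namenode"/>, while application/json yields the mapped notation with "@"-prefixed attribute keys, the same convention visible in the stack JSON files added later in this commit.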
diff --git a/controller/src/main/resources/application-grammars.xml b/controller/src/main/resources/application-grammars.xml
new file mode 100644
index 0000000..2840a26
--- /dev/null
+++ b/controller/src/main/resources/application-grammars.xml
@@ -0,0 +1,49 @@
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<!--
+
+    DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS HEADER.
+
+    Copyright (c) 2010-2011 Oracle and/or its affiliates. All rights reserved.
+
+    The contents of this file are subject to the terms of either the GNU
+    General Public License Version 2 only ("GPL") or the Common Development
+    and Distribution License("CDDL") (collectively, the "License").  You
+    may not use this file except in compliance with the License.  You can
+    obtain a copy of the License at
+    http://glassfish.java.net/public/CDDL+GPL_1_1.html
+    or packager/legal/LICENSE.txt.  See the License for the specific
+    language governing permissions and limitations under the License.
+
+    When distributing the software, include this License Header Notice in each
+    file and include the License file at packager/legal/LICENSE.txt.
+
+    GPL Classpath Exception:
+    Oracle designates this particular file as subject to the "Classpath"
+    exception as provided by Oracle in the GPL Version 2 section of the License
+    file that accompanied this code.
+
+    Modifications:
+    If applicable, add the following below the License Header, with the fields
+    enclosed by brackets [] replaced by your own identifying information:
+    "Portions Copyright [year] [name of copyright owner]"
+
+    Contributor(s):
+    If you wish your version of this file to be governed by only the CDDL or
+    only the GPL Version 2, indicate your decision by adding "[Contributor]
+    elects to include this software in this distribution under the [CDDL or GPL
+    Version 2] license."  If you don't indicate a single choice of license, a
+    recipient has the option to distribute your version of this file under
+    either the CDDL, the GPL Version 2 or to extend the choice of license to
+    its licensees as provided above.  However, if you add GPL Version 2 code
+    and therefore, elected the GPL Version 2 license, then the option applies
+    only if the new code is made subject to such option by the copyright
+    holder.
+
+-->
+
+<grammars xmlns="http://research.sun.com/wadl/2006/10"
+    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
+    xmlns:xi="http://www.w3.org/1999/XML/xinclude">
+    <include href="schema1.xsd" />
+</grammars>
+
diff --git a/controller/src/main/resources/org/apache/ambari/acd/hadoop-common-0.1.0.acd b/controller/src/main/resources/org/apache/ambari/acd/hadoop-common-0.1.0.acd
new file mode 100644
index 0000000..10718cc
--- /dev/null
+++ b/controller/src/main/resources/org/apache/ambari/acd/hadoop-common-0.1.0.acd
@@ -0,0 +1,35 @@
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<component provides="common" package="hadoop-${version}.tar.gz"
+           user="root">
+  <configure>
+<![CDATA[
+import ambari_component
+
+for file in ['log4j', 'commons-logging', 'hadoop-metrics2']:
+  ambari_component.copyProperties('hadoop/' + file, {})
+
+ambari_component.copySh('hadoop/hadoop-env', {})
+
+ambari_component.copyXml('hadoop/core-site', {})
+]]>
+  </configure>
+  <install>
+<![CDATA[
+import ambari_component
+import os
+
+if not os.path.isdir("stack"):
+  os.mkdir("stack")
+
+ambari_component.installTar("hadoop")
+]]>
+  </install>
+  <uninstall>
+<![CDATA[
+import ambari_component
+import shutil
+
+shutil.rmtree("stack")
+]]>
+  </uninstall>
+</component>
diff --git a/controller/src/main/resources/org/apache/ambari/acd/hdfs-0.1.0.acd b/controller/src/main/resources/org/apache/ambari/acd/hdfs-0.1.0.acd
new file mode 100644
index 0000000..547a382
--- /dev/null
+++ b/controller/src/main/resources/org/apache/ambari/acd/hdfs-0.1.0.acd
@@ -0,0 +1,38 @@
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<component provides="hdfs">
+  <requires name="common"/>
+  <roles name="namenode"/>
+  <roles name="secondarynamenode"/>
+  <roles name="datanode"/>
+  <start>
+<![CDATA[
+import os
+import sys
+
+[pgm, cluster, role] = sys.argv
+os.environ['HADOOP_CONF_DIR']=os.getcwd() + "/etc/hadoop"
+os.execlp("hadoop", "hadoop", role)
+]]>
+  </start>
+  <check runOn="namenode">
+<![CDATA[
+import os
+import sys
+
+os.environ['HADOOP_CONF_DIR']=os.getcwd() + "/etc/hadoop"
+sys.exit(os.system('hadoop dfsadmin -safemode get | grep "Safe mode is OFF"'))
+
+]]>
+  </check>
+
+  <prestart runOn="namenode">
+<![CDATA[
+import subprocess
+import os
+
+os.environ['HADOOP_CONF_DIR']=os.getcwd() + "/etc/hadoop"
+# Answer 'N' to the confirmation prompt so an already formatted namenode is kept.
+proc = subprocess.Popen(['hadoop', 'namenode', '-format'], stdin=subprocess.PIPE)
+proc.communicate('N\n')
+proc.wait()
+]]>
+  </prestart>
+</component>
diff --git a/controller/src/main/resources/org/apache/ambari/acd/mapreduce-0.1.0.acd b/controller/src/main/resources/org/apache/ambari/acd/mapreduce-0.1.0.acd
new file mode 100644
index 0000000..e810e86
--- /dev/null
+++ b/controller/src/main/resources/org/apache/ambari/acd/mapreduce-0.1.0.acd
@@ -0,0 +1,38 @@
+<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
+<component provides="mapred">
+  <requires name="hdfs"/>
+  <roles name="jobtracker"/>
+  <roles name="tasktracker"/>
+  <roles name="historyserver"/>
+  <start>
+<![CDATA[
+import os
+import sys
+
+[pgm, cluster, role] = sys.argv
+os.environ['HADOOP_CONF_DIR']=os.getcwd() + "/etc/hadoop"
+os.execlp("hadoop", "hadoop", role)
+]]>
+  </start>
+  <check runOn="jobtracker">
+<![CDATA[
+import os
+import sys
+
+os.environ['HADOOP_CONF_DIR']=os.getcwd() + "/etc/hadoop"
+sys.exit(os.system('hadoop job -list'))
+
+]]>
+  </check>
+
+  <prestart runOn="namenode">
+<![CDATA[
+import os
+import sys
+
+os.environ['HADOOP_CONF_DIR']=os.getcwd() + "/etc/hadoop"
+sys.exit(os.system('hadoop dfs -mkdir /mapred'))
+
+]]>
+  </prestart>
+</component>
diff --git a/controller/src/main/resources/org/apache/ambari/clusters/cluster123.xml b/controller/src/main/resources/org/apache/ambari/clusters/cluster123.xml
new file mode 100644
index 0000000..0ec21e8
--- /dev/null
+++ b/controller/src/main/resources/org/apache/ambari/clusters/cluster123.xml
@@ -0,0 +1,8 @@
+<cluster description="Owen's cluster"
+         stackName="puppet1" stackRevision="0"
+         goalState="active" nodes="node00,node01,node02,node03">
+  <activeServices>HDFS</activeServices>
+  <activeServices>MapReduce</activeServices>
+  <roleToNodesMap roleName="namenode" nodes="node00"/>
+  <roleToNodesMap roleName="jobtracker" nodes="node01"/>
+</cluster>
diff --git a/controller/src/main/resources/org/apache/ambari/components/impl/jaxb.index b/controller/src/main/resources/org/apache/ambari/components/impl/jaxb.index
new file mode 100644
index 0000000..b3f02a4
--- /dev/null
+++ b/controller/src/main/resources/org/apache/ambari/components/impl/jaxb.index
@@ -0,0 +1 @@
+XmlComponentDefinition$Component
\ No newline at end of file
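The jaxb.index file above lists XmlComponentDefinition$Component so that JAXBContext.newInstance(<package>) can discover it by package name. A sketch of how such a component definition might be loaded; the assumption that the .acd root element unmarshals to that nested class follows from the index entry, not from code shown in this diff:

```java
import java.io.InputStream;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;

public class AcdReader {
  public static void main(String[] args) throws Exception {
    // newInstance(contextPath) looks for jaxb.index in this package and
    // registers every class listed there, here XmlComponentDefinition$Component.
    JAXBContext ctx = JAXBContext.newInstance("org.apache.ambari.components.impl");
    Unmarshaller u = ctx.createUnmarshaller();
    InputStream in = AcdReader.class.getResourceAsStream(
        "/org/apache/ambari/acd/hdfs-0.1.0.acd");
    Object component = u.unmarshal(in);   // assumed to map to the Component type
    System.out.println(component.getClass().getName());
    in.close();
  }
}
```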
diff --git a/controller/src/main/resources/org/apache/ambari/stacks/cluster123-0.xml b/controller/src/main/resources/org/apache/ambari/stacks/cluster123-0.xml
new file mode 100644
index 0000000..f9c9ab8
--- /dev/null
+++ b/controller/src/main/resources/org/apache/ambari/stacks/cluster123-0.xml
@@ -0,0 +1,24 @@
+<stack parentName="hadoop-security" parentRevision="0">
+    <configuration>
+        <category name="ambari">
+            <property name="data.dirs" value="/grid/*" />
+            <property name="hdfs.user" value="hrt_hdfs" />
+            <property name="mapreduce.user" value="hrt_mapred" />
+            <property name="hbase.user" value="hrt_hbase" />
+            <property name="hcat.user" value="hrt_hcat" />
+            <property name="user.realm" value="HORTON.YGRIDCORE.NET" />
+        </category>
+        <category name="hadoop-env">
+            <property name="HADOOP_CLIENT_OPTS" 
+                      value="-Xmx256m ${HADOOP_CLIENT_OPTS}" />
+        </category>
+    </configuration>
+    <components name="hdfs">
+      <configuration>
+        <category name="hadoop-env">
+          <property name="HADOOP_NAMENODE_OPTS" 
+                    value="-Xmx512m -Dsecurity.audit.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT ${HADOOP_NAMENODE_OPTS}" />
+        </category>
+      </configuration>
+    </components>
+</stack>
diff --git a/controller/src/main/resources/org/apache/ambari/stacks/cluster124-0.xml b/controller/src/main/resources/org/apache/ambari/stacks/cluster124-0.xml
new file mode 100644
index 0000000..f9c9ab8
--- /dev/null
+++ b/controller/src/main/resources/org/apache/ambari/stacks/cluster124-0.xml
@@ -0,0 +1,24 @@
+<stack parentName="hadoop-security" parentRevision="0">
+    <configuration>
+        <category name="ambari">
+            <property name="data.dirs" value="/grid/*" />
+            <property name="hdfs.user" value="hrt_hdfs" />
+            <property name="mapreduce.user" value="hrt_mapred" />
+            <property name="hbase.user" value="hrt_hbase" />
+            <property name="hcat.user" value="hrt_hcat" />
+            <property name="user.realm" value="HORTON.YGRIDCORE.NET" />
+        </category>
+        <category name="hadoop-env">
+            <property name="HADOOP_CLIENT_OPTS" 
+                      value="-Xmx256m ${HADOOP_CLIENT_OPTS}" />
+        </category>
+    </configuration>
+    <components name="hdfs">
+      <configuration>
+        <category name="hadoop-env">
+          <property name="HADOOP_NAMENODE_OPTS" 
+                    value="-Xmx512m -Dsecurity.audit.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT ${HADOOP_NAMENODE_OPTS}" />
+        </category>
+      </configuration>
+    </components>
+</stack>
diff --git a/controller/src/main/resources/org/apache/ambari/stacks/hadoop-security-0.xml b/controller/src/main/resources/org/apache/ambari/stacks/hadoop-security-0.xml
new file mode 100644
index 0000000..227d681
--- /dev/null
+++ b/controller/src/main/resources/org/apache/ambari/stacks/hadoop-security-0.xml
@@ -0,0 +1,159 @@
+<stack>
+    <repositories kind="TAR">
+        <urls>http://www.apache.org/dist/hadoop/common/</urls>
+    </repositories>
+    <configuration>
+        <category name="ambari">
+            <property name="data.prefix" value="ambari" />
+            <property name="hdfs.user" value="hdfs" />
+            <property name="namenode.principal" value="nn" />
+            <property name="datanode.principal" value="dn" />
+            <property name="mapreduce.user" value="mapred" />
+            <property name="jobtracker.principal" value="jt" />
+            <property name="tasktracker.principal" value="tt" />
+            <property name="hbase.user" value="hrt_hbase" />
+            <property name="hbasemaster.principal" value="hm" />
+            <property name="regionserver.principal" value="rs" />
+            <property name="hcat.user" value="hcat" />
+            <property name="hcat.principal" value="hcat" />
+            <property name="service.realm" value="${ambari.user.realm}" />
+            <property name="admin.group" value="hadoop" />
+            <property name="webauthfilter"
+                      value="org.apache.hadoop.http.lib.StaticUserWebFilter"/>
+        </category>
+        <category name="core-site">
+            <property name="fs.default.name" 
+                      value="hdfs://${ambari.namenode.host}:8020" />
+            <property name="fs.trash.interval" value="360" />
+            <property name="hadoop.security.authentication" value="kerberos" />
+            <property name="hadoop.security.authorization" value="true" />
+            <property name="hadoop.kerberos.kinit.command" 
+                      value="/usr/kerberos/bin/kinit" />
+            <property name="HADOOP_CONF_DIR" 
+                      value="${ambari.cluster.prefix}/stack/etc/hadoop" />
+        </category>
+        <category name="hdfs-site">
+            <property name="dfs.umaskmode" value="077" />
+            <property name="dfs.block.access.token.enable" value="true" />
+            <property name="dfs.namenode.kerberos.principal" 
+                      value="${ambari.namenode.principal}/_HOST@${ambari.service.realm}" />
+            <property name="dfs.namenode.kerberos.https.principal" 
+                      value="host/_HOST@${ambari.service.realm}" />
+            <property name="dfs.http.port" value="50070" />
+            <property name="dfs.https.port" value="50470" />
+            <property name="dfs.https.address" 
+                      value="${ambari.namenode.host}:${dfs.https.port}" />
+            <property name="dfs.namenode.http-address" 
+                      value="${ambari.namenode.host}:${dfs.http.port}" />
+            <property name="dfs.namenode.https-address" 
+                      value="${ambari.namenode.host}:${dfs.https.port}" />
+        </category>
+        <category name="hadoop-env">
+            <property name="JAVA_HOME" value="${ambari.cluster.prefix}/stack/share/java" />
+            <property name="HADOOP_OPTS" 
+                      value="-Djava.net.preferIPv4Stack=true $HADOOP_OPTS" />
+            <property name="HADOOP_JOBTRACKER_OPTS" 
+                      value="-Dsecurity.audit.logger=INFO,DRFAS -Dmapred.audit.logger=INFO,MRAUDIT -Dmapred.jobsummary.logger=INFO,JSA ${HADOOP_JOBTRACKER_OPTS}" />
+            <property name="HADOOP_TASKTRACKER_OPTS" 
+                      value="-Dsecurity.audit.logger=ERROR,console -Dmapred.audit.logger=ERROR,console ${HADOOP_TASKTRACKER_OPTS}" />
+            <property name="HADOOP_CLIENT_OPTS" 
+                      value="-Xmx128m ${HADOOP_CLIENT_OPTS}" />
+            <property name="HADOOP_IDENT_STRING" 
+                      value="${ambari.cluster.name}" />
+        </category>
+        <category name="hadoop_metrics2">
+            <property name="*.period" value="60" />
+        </category>
+    </configuration>
+    <components architecture="x86_64" name="common" 
+                provider="org.apache.hadoop" version="0.20.205.0">
+        <definition name="hadoop-common" version="0.1.0" 
+                    provider="org.apache.ambari"/>
+    </components>
+    <components architecture="x86_64" name="hdfs" 
+                provider="org.apache.hadoop" version="0.20.205.0">
+        <definition name="hdfs" version="0.1.0" 
+                    provider="org.apache.ambari"/>
+        <configuration>
+           <category name="hadoop-env">
+              <property name="HADOOP_LOG_DIR" 
+                        value="${ambari.cluster.prefix}/logs" />
+              <property name="HADOOP_SECURE_DN_LOG_DIR" 
+                        value="${ambari.cluster.prefix}/logs" />
+              <property name="HADOOP_PID_DIR" 
+                        value="${ambari.cluster.prefix}/pid" />
+              <property name="HADOOP_SECURE_DN_PID_DIR" 
+                        value="${ambari.cluster.prefix}/pid" />
+           </category>
+           <category name="core-site">
+              <property name="hadoop.security.groups.cache.secs" 
+                        value="14400" />
+              <property name="hadoop.http.filter.initializers" 
+                        value="${ambari.webauthfilter}"/>
+           </category>
+           <category name="hdfs-site">
+              <property name="dfs.secondary.namenode.kerberos.principal" 
+                        value="${dfs.namenode.kerberos.principal}" />
+            <property name="dfs.secondary.namenode.kerberos.https.principal" 
+                      value="${dfs.namenode.kerberos.https.principal}" />
+            <property name="dfs.secondary.https.port" value="50490" />
+            <property name="dfs.secondary.http.address" 
+                      value="${ambari.secondarynamenode.host}:${dfs.secondary.https.port}" />
+            <property name="dfs.datanode.kerberos.principal" 
+                      value="${ambari.datanode.principal}/_HOST@${ambari.service.realm}" />
+            <property name="dfs.namenode.keytab.file" 
+                      value="/etc/security/keytabs/nn.service.keytab" />
+            <property name="dfs.secondary.namenode.keytab.file" 
+                      value="/etc/security/keytabs/nn.service.keytab" />
+            <property name="dfs.datanode.keytab.file" 
+                      value="/etc/security/keytabs/dn.service.keytab" />
+           </category>
+        </configuration>
+        <roles name="namenode">
+           <configuration>
+              <category name="hadoop-env">
+                 <property name="HADOOP_NAMENODE_OPTS" 
+                           value="-Dsecurity.audit.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT ${HADOOP_NAMENODE_OPTS}" />
+                 <property name="HADOOP_SECONDARYNAMENODE_OPTS" 
+                           value="-Dsecurity.audit.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT ${HADOOP_SECONDARYNAMENODE_OPTS}"/>
+              </category>
+              <category name="core-site">
+                <property name="hadoop.security.auth_to_local" value="RULE:[1:$1@$0](.*@${ambari.user.realm})s/@.*//
+            RULE:[2:$1@$0](${ambari.jobtracker.principal}@${ambari.service.realm})s/.*/${ambari.mapreduce.user}/
+            RULE:[2:$1@$0](${ambari.tasktracker.principal}@${ambari.service.realm})s/.*/${ambari.mapreduce.user}/
+            RULE:[2:$1@$0](${ambari.namenode.principal}@${ambari.service.realm})s/.*/${ambari.hdfs.user}/
+            RULE:[2:$1@$0](${ambari.datanode.principal}@${ambari.service.realm})s/.*/${ambari.hdfs.user}/
+            RULE:[2:$1@$0](${ambari.hbasemaster.principal}@${ambari.service.realm})s/.*/${ambari.hbase.user}/
+            RULE:[2:$1@$0](${ambari.regionserver.principal}@${ambari.service.realm})s/.*/${ambari.hbase.user}/
+            RULE:[2:$1@$0](${ambari.hcat.principal}@${ambari.service.realm})s/.*/${ambari.hcat.user}/" />
+             </category>
+             <category name="hdfs-site">
+              <property name="dfs.name.dir" 
+                        value="${ambari.cluster.prefix}/data/namenode" />
+               <property name="dfs.safemode.threshold.pct" value="1.0f" />
+               <property name="dfs.hosts" 
+                         value="${HADOOP_CONF_DIR}/dfs.include" />
+               <property name="dfs.hosts.exclude" 
+                         value="${HADOOP_CONF_DIR}/dfs.exclude" />
+               <property name="dfs.cluster.administrators" 
+                         value="${ambari.hdfs.user}" />
+               <property name="dfs.permissions.superusergroup" 
+                         value="${ambari.admin.group}" />
+             </category>
+           </configuration>
+        </roles>
+        <roles name="datanode">
+           <configuration>
+              <category name="hadoop-env">
+                <property name="HADOOP_SECURE_DN_USER" 
+                          value="${ambari.hdfs.user}" />
+                <property name="HADOOP_DATANODE_OPTS" 
+                          value="-Dsecurity.audit.logger=ERROR,DRFAS ${HADOOP_DATANODE_OPTS}" />
+              </category>
+              <category name="hdfs-site">
+                <property name="dfs.datanode.data.dir.perm" value="700" />
+              </category>
+           </configuration>
+        </roles>
+    </components>
+</stack>
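This stack leans heavily on ${...} references that must be resolved against the flattened configuration, including chained ones such as service.realm pointing at ${ambari.user.realm}. A self-contained sketch of such an expander, looped to a fixed point so chained references also resolve; it is not Ambari's actual substitution code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PropertyExpander {
  private static final Pattern REF = Pattern.compile("\\$\\{([^}]+)\\}");

  public static String expand(String value, Map<String, String> props) {
    String prev;
    do {
      prev = value;
      Matcher m = REF.matcher(value);
      StringBuffer sb = new StringBuffer();
      while (m.find()) {
        String replacement = props.get(m.group(1));
        // Leave unknown references untouched so shell-style values survive.
        m.appendReplacement(sb, Matcher.quoteReplacement(
            replacement != null ? replacement : m.group(0)));
      }
      m.appendTail(sb);
      value = sb.toString();
    } while (!value.equals(prev));   // repeat until chained references settle
    return value;
  }

  public static void main(String[] args) {
    Map<String, String> props = new HashMap<String, String>();
    props.put("ambari.namenode.principal", "nn");
    props.put("ambari.service.realm", "EXAMPLE.COM");
    // Prints: nn/_HOST@EXAMPLE.COM
    System.out.println(expand(
        "${ambari.namenode.principal}/_HOST@${ambari.service.realm}", props));
  }
}
```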
diff --git a/controller/src/main/resources/org/apache/ambari/stacks/horton-0.json b/controller/src/main/resources/org/apache/ambari/stacks/horton-0.json
new file mode 100644
index 0000000..1523b02
--- /dev/null
+++ b/controller/src/main/resources/org/apache/ambari/stacks/horton-0.json
@@ -0,0 +1,28 @@
+{
+  "@name": "horton",
+  "@parentName": "puppet1",
+  "@parentRevision": "0",
+  "globals":[
+    {
+      "@name":"ambari_user_realm",
+      "@value":"HORTON.YGRIDCORE.NET"
+    }
+  ],
+  "configuration": {
+    "category": []
+  },
+  "components": [
+    {"@name": "hdfs",
+     "configuration": {
+       "category": [
+         {"@name":"hdfs-site.xml",
+          "property": [
+            {"@name": "dfs.data.dir",
+             "@value": "/grid/0/ambari/<%= ambari_cluster_name %>/hdfs/data,/grid/1/ambari/<%= ambari_cluster_name %>/hdfs/data,/grid/2/ambari/<%= ambari_cluster_name %>/hdfs/data,/grid/3/ambari/<%= ambari_cluster_name %>/hdfs/data"}
+          ]
+         }
+       ]
+      }
+    }
+ ]
+}
diff --git a/controller/src/main/resources/org/apache/ambari/stacks/puppet1-0.json b/controller/src/main/resources/org/apache/ambari/stacks/puppet1-0.json
new file mode 100644
index 0000000..27f70e4
--- /dev/null
+++ b/controller/src/main/resources/org/apache/ambari/stacks/puppet1-0.json
@@ -0,0 +1,665 @@
+{
+  "@name":"puppet1",
+  "repositories":{
+    "@kind":"TAR",
+    "urls":"http://www.apache.org/dist/hadoop/common/"
+  },
+  "default_user_group":{
+    "@user":"hadoop",
+    "@userid":"",
+    "@group":"hadoop",
+    "@groupid":""
+  },
+  "globals":[
+          {
+            "@name":"ambari_namenode_principal",
+            "@value":"nn"
+          },
+          {
+            "@name":"ambari_datanode_principal",
+            "@value":"dn"
+          },
+          {
+            "@name":"ambari_jobtracker_principal",
+            "@value":"jt"
+          },
+          {
+            "@name":"ambari_tasktracker_principal",
+            "@value":"tt"
+          },
+          {
+            "@name":"ambari_hbasemaster_principal",
+            "@value":"hm"
+          },
+          {
+            "@name":"ambari_regionserver_principal",
+            "@value":"rs"
+          },
+          {
+            "@name":"ambari_hcat_principal",
+            "@value":"hcat"
+          },
+          {
+            "@name":"ambari_user_realm",
+            "@value":"KERBEROS.EXAMPLE.COM"
+          },
+          {
+            "@name":"ambari_service_realm",
+            "@value":"<%= ambari_user_realm %>"
+          },
+          {
+            "@name":"ambari_keytab_dir",
+            "@value":"/etc/security/keytabs"
+          },
+          {
+            "@name":"ambari_auth_to_local",
+            "@value":"RULE:[1:$1@$0](.*@<%= ambari_user_realm %>)s/@.*//\nRULE:[2:$1@$0](<%= ambari_jobtracker_principal %>@<%= ambari_service_realm %>)s/.*/<%= ambari_mapreduce_user %>/\nRULE:[2:$1@$0](<%= ambari_tasktracker_principal %>@<%= ambari_service_realm %>)s/.*/<%= ambari_mapreduce_user %>/\nRULE:[2:$1@$0](<%= ambari_namenode_principal %>@<%= ambari_service_realm %>)s/.*/<%= ambari_hdfs_user %>/\nRULE:[2:$1@$0](<%= ambari_datanode_principal %>@<%= ambari_service_realm %>)s/.*/<%= ambari_hdfs_user %>/"
+          }
+        ],
+  "configuration":{
+    "category":[
+      {
+        "@name":"core-site.xml",
+        "property":[
+          {
+            "@name":"fs.default.name",
+            "@value":"hdfs://<%= ambari_namenode_host %>:8020"
+          },
+          {
+            "@name":"fs.trash.interval",
+            "@value":"360"
+          },
+          {
+            "@name":"hadoop.security.authentication",
+            "@value":"simple"
+          },
+          {
+            "@name":"hadoop.security.authorization",
+            "@value":"false"
+          },
+          {
+            "@name":"hadoop.kerberos.kinit.command",
+            "@value":"/usr/kerberos/bin/kinit"
+          },
+          {
+            "@name":"local.realm",
+            "@value":"<%= ambari_user_realm %>"
+          }
+        ]
+      },
+      {
+        "@name":"hdfs-site.xml",
+        "property":[
+          {
+            "@name":"dfs.http.address",
+            "@value":"<%= ambari_namenode_host %>:50070"
+          },
+          {
+            "@name":"dfs.umaskmode",
+            "@value":"077"
+          },
+          {
+            "@name":"dfs.block.access.token.enable",
+            "@value":"false"
+          },
+          {
+            "@name":"dfs.namenode.kerberos.principal",
+            "@value":"<%= ambari_namenode_principal %>/_HOST@<%= ambari_service_realm %>"
+          },
+          {
+            "@name":"dfs.namenode.kerberos.https.principal",
+            "@value":"host/_HOST@<%= ambari_service_realm %>"
+          },
+          {
+            "@name":"dfs.secondary.namenode.kerberos.principal",
+            "@value":"<%= ambari_namenode_principal %>/_HOST@<%= ambari_service_realm %>"
+          },
+          {
+            "@name":"dfs.secondary.namenode.kerberos.https.principal",
+            "@value":"host/_HOST@<%= ambari_service_realm %>"
+          },
+          {
+            "@name":"dfs.datanode.kerberos.principal",
+            "@value":"<%= ambari_datanode_principal %>/_HOST@<%= ambari_service_realm %>"
+          },
+          {
+            "@name":"dfs.web.authentication.kerberos.principal",
+            "@value":"HTTP/_HOST@<%= ambari_service_realm %>"
+          },
+          {
+            "@name":"dfs.https.address",
+            "@value":"<%= ambari_namenode_host %>:50470"
+          },
+          {
+            "@name":"dfs.secondary.http.address",
+            "@value":"<%= ambari_namenode_host %>:50090"
+          },
+          {
+            "@name":"dfs.support.append",
+            "@value":"true"
+          }
+        ]
+      },
+      {
+        "@name":"hadoop-env.sh",
+        "property":[
+          {
+            "@name":"JAVA_HOME",
+            "@value":"/usr/java/latest"
+          },
+          {
+            "@name":"HADOOP_CONF_DIR",
+            "@value":"<%= ambari_role_prefix %>/etc/conf"
+          },
+          {
+            "@name":"HADOOP_OPTS",
+            "@value":"-Djava.net.preferIPv4Stack=true $HADOOP_OPTS"
+          },
+          {
+            "@name":"HADOOP_CLIENT_OPTS",
+            "@value":"-Xmx128m $HADOOP_CLIENT_OPTS"
+          }
+        ]
+      },
+      {
+        "@name":"mapred-site.xml",
+        "property":[
+          {
+            "@name":"mapred.tasktracker.tasks.sleeptime-before-sigkill",
+            "@value":"250"
+          },
+          {
+            "@name":"mapred.system.dir",
+            "@value":"/mapred/mapredsystem"
+          },
+          {
+            "@name":"mapred.job.tracker",
+            "@value":"<%= ambari_jobtracker_host %>:9000"
+          },
+          {
+            "@name":"mapred.job.tracker.http.address",
+            "@value":"<%= ambari_jobtracker_host %>:50030"
+          },
+          {
+            "@name":"mapred.local.dir",
+            "@value":"<%= ambari_role_prefix %>/mapred/local"
+          },
+          {
+            "@name":"mapreduce.cluster.administrators",
+            "@value":"<%= ambari_mapreduce_user %>"
+          },
+          {
+            "@name":"mapred.map.tasks.speculative.execution",
+            "@value":"false"
+          },
+          {
+            "@name":"mapred.reduce.tasks.speculative.execution",
+            "@value":"false"
+          },
+          {
+            "@name":"mapred.output.compression.type",
+            "@value":"BLOCK"
+          },
+          {
+            "@name":"jetty.connector",
+            "@value":"org.mortbay.jetty.nio.SelectChannelConnector"
+          },
+          {
+            "@name":"mapred.task.tracker.task-controller",
+            "@value":"org.apache.hadoop.mapred.DefaultTaskController"
+          },
+          {
+            "@name":"mapred.child.root.logger",
+            "@value":"INFO,TLA"
+          },
+          {
+            "@name":"mapred.child.java.opts",
+            "@value":"-server -Xmx640m -Djava.net.preferIPv4Stack=true"
+          },
+          {
+            "@name":"mapred.child.ulimit",
+            "@value":"8388608"
+          },
+          {
+            "@name":"mapred.job.tracker.persist.jobstatus.active",
+            "@value":"true"
+          },
+          {
+            "@name":"mapred.job.tracker.persist.jobstatus.dir",
+            "@value":"file:///<%= ambari_role_prefix %>/jobstatus/<%= ambari_jobtracker_principal %>"
+          },
+          {
+            "@name":"mapred.job.tracker.history.completed.location",
+            "@value":"/mapred/history/done"
+          },
+          {
+            "@name":"mapred.heartbeats.in.second",
+            "@value":"200"
+          },
+          {
+            "@name":"mapreduce.tasktracker.outofband.heartbeat",
+            "@value":"true"
+          },
+          {
+            "@name":"mapred.jobtracker.maxtasks.per.job",
+            "@value":"200000"
+          },
+          {
+            "@name":"mapreduce.jobtracker.kerberos.principal",
+            "@value":"<%= ambari_jobtracker_principal %>/_HOST@<%= ambari_service_realm %>"
+          },
+          {
+            "@name":"mapreduce.tasktracker.kerberos.principal",
+            "@value":"<%= ambari_tasktracker_principal %>/_HOST@<%= ambari_service_realm %>"
+          },
+          {
+            "@name":"hadoop.job.history.user.location",
+            "@value":"none"
+          },
+          {
+            "@name":"mapreduce.jobtracker.keytab.file",
+            "@value":"/etc/security/keytabs/jt.service.keytab"
+          },
+          {
+            "@name":"mapreduce.tasktracker.keytab.file",
+            "@value":"/etc/security/keytabs/tt.service.keytab"
+          },
+          {
+            "@name":"mapreduce.jobtracker.staging.root.dir",
+            "@value":"/user"
+          },
+          {
+            "@name":"mapreduce.job.acl-modify-job",
+            "@value":""
+          },
+          {
+            "@name":"mapreduce.job.acl-view-job",
+            "@value":"Dr.Who"
+          },
+          {
+            "@name":"mapreduce.tasktracker.group",
+            "@value":"<%= ambari_mapreduce_group %>"
+          },
+          {
+            "@name":"mapred.acls.enabled",
+            "@value":"true"
+          },
+          {
+            "@name":"mapred.jobtracker.taskScheduler",
+            "@value":"org.apache.hadoop.mapred.CapacityTaskScheduler"
+          },
+          {
+            "@name":"mapred.queue.names",
+            "@value":"default"
+          },
+          {
+            "@name":"mapreduce.history.server.embedded",
+            "@value":"false"
+          },
+          {
+            "@name":"mapreduce.history.server.http.address",
+            "@value":"<%= ambari_jobtracker_host %>:51111"
+          },
+          {
+            "@name":"mapreduce.jobhistory.kerberos.principal",
+            "@value":"<%= ambari_jobtracker_principal %>/_HOST@<%= ambari_service_realm %>"
+          },
+          {
+            "@name":"mapreduce.jobhistory.keytab.file",
+            "@value":"/etc/security/keytabs/jt.service.keytab"
+          },
+          {
+            "@name":"mapred.hosts",
+            "@value":"<%= ambari_role_prefix %>/etc/hadoop/mapred.include"
+          },
+          {
+            "@name":"mapred.hosts.exclude",
+            "@value":"<%= ambari_role_prefix %>/etc/hadoop/mapred.exclude"
+          },
+          {
+            "@name":"mapred.jobtracker.retirejob.check",
+            "@value":"10000"
+          },
+          {
+            "@name":"mapred.jobtracker.retirejob.interval",
+            "@value":"0"
+          }
+        ]
+      }
+    ]
+  },
+  "components":[
+    {
+      "@name":"common",
+      "@architecture":"x86_64",
+      "@version":"0.20.205.0",
+      "@provider":"org.apache.hadoop",
+      "definition":{
+        "@provider":"org.apache.ambari",
+        "@name":"hadoop-common",
+        "@version":"0.1.0"
+      }
+    },
+    {
+      "@name":"mapreduce",
+      "@architecture":"x86_64",
+      "@version":"0.20.205.0",
+      "@provider":"org.apache.hadoop",
+      "definition":{
+        "@provider":"org.apache.ambari",
+        "@name":"mapreduce",
+        "@version":"0.1.0"
+      },
+      "configuration":{
+        "category":[
+          {
+            "@name":"core-site.xml",
+            "property":[
+              {
+                "@name":"hadoop.security.auth_to_local",
+                "@value":"<%= ambari_auth_to_local %>"
+              },
+              {
+                "@name":"hadoop.security.groups.cache.secs",
+                "@value":"14400"
+              },
+              {
+                "@name":"hadoop.http.filter.initializers",
+                "@value":"org.apache.hadoop.http.lib.StaticUserWebFilter"
+              }
+            ]
+          },
+          {
+            "@name":"hadoop-metrics2.properties",
+            "property":{
+              "@name":"*.period",
+              "@value":"60"
+            }
+          },
+          {
+            "@name":"capacity-scheduler.xml",
+            "property":[
+                {
+                  "@name":"mapred.capacity-scheduler.maximum-system-jobs",
+                  "@value":"3000"
+                },
+                {
+                  "@name":"mapred.capacity-scheduler.queue.default.capacity",
+                  "@value":"100"
+                },
+                {
+                  "@name":"mapred.capacity-scheduler.queue.default.maximum-capacity",
+                  "@value":"-1"
+                },
+                {
+                  "@name":"mapred.capacity-scheduler.queue.default.supports-priority",
+                  "@value":"false"
+                },
+                {
+                  "@name":"mapred.capacity-scheduler.queue.default.minimum-user-limit-percent",
+                  "@value":"100"
+                },
+                {
+                  "@name":"mapred.capacity-scheduler.queue.default.user-limit-factor",
+                  "@value":"1"
+                },
+                {
+                  "@name":"mapred.capacity-scheduler.queue.default.maximum-initialized-active-tasks",
+                  "@value":"200000"
+                },
+                {
+                  "@name":"mapred.capacity-scheduler.queue.default.maximum-initialized-active-tasks-per-user",
+                  "@value":"100000"
+                },
+                {
+                  "@name":"mapred.capacity-scheduler.queue.default.init-accept-jobs-factor",
+                  "@value":"10"
+                },
+                {
+                  "@name":"mapred.capacity-scheduler.default-supports-priority",
+                  "@value":"false"
+                },
+                {
+                  "@name":"mapred.capacity-scheduler.default-minimum-user-limit-percent",
+                  "@value":"100"
+                },
+                {
+                  "@name":"mapred.capacity-scheduler.default-user-limit-factor",
+                  "@value":"1"
+                },
+                {
+                  "@name":"mapred.capacity-scheduler.default-maximum-active-tasks-per-queue",
+                  "@value":"200000"
+                },
+                {
+                  "@name":"mapred.capacity-scheduler.default-maximum-active-tasks-per-user",
+                  "@value":"100000"
+                },
+                {
+                  "@name":"mapred.capacity-scheduler.default-init-accept-jobs-factor",
+                  "@value":"10"
+                },
+                {
+                  "@name":"mapred.capacity-scheduler.init-poll-interval",
+                  "@value":"5000"
+                },
+                {
+                  "@name":"mapred.capacity-scheduler.init-worker-threads",
+                  "@value":"5"
+                }
+              ]
+          },
+          {
+            "@name":"taskcontroller.cfg",
+            "property":[
+                {
+                  "@name":"mapreduce.cluster.local.dir",
+                  "@value":"<%= ambari_role_prefix %>/mapred/local"
+                },
+                {
+                  "@name":"mapreduce.tasktracker.group",
+                  "@value":"<%= ambari_mapreduce_group %>"
+                },
+                {
+                  "@name":"hadoop.log.dir",
+                  "@value":"<%= ambari_role_prefix %>/logs"
+                }
+              ]
+          },
+          {
+            "@name":"hadoop-env.sh",
+            "property":[
+                {
+                  "@name":"HADOOP_CONF_DIR",
+                  "@value":"<%= ambari_role_prefix %>/etc/hadoop"
+                },
+                {
+                  "@name":"HADOOP_JOBTRACKER_OPTS",
+                  "@value":"-Dsecurity.audit.logger=INFO,DRFAS -Dmapred.audit.logger=INFO,MRAUDIT -Dhadoop.mapreduce.jobsummary.logger=INFO,JSA $HADOOP_JOBTRACKER_OPTS"
+                },
+                {
+                  "@name":"HADOOP_TASKTRACKER_OPTS",
+                  "@value":"-Dsecurity.audit.logger=ERROR,console -Dmapred.audit.logger=ERROR,console $HADOOP_TASKTRACKER_OPTS"
+                },
+                {
+                  "@name":"HADOOP_LOG_DIR",
+                  "@value":"<%= ambari_role_prefix %>/logs"
+                },
+                {
+                  "@name":"HADOOP_PID_DIR",
+                  "@value":"<%= ambari_role_prefix %>/pids"
+                },
+                {
+                  "@name":"HADOOP_IDENT_STRING",
+                  "@value":"<%= ambari_cluster_name %>"
+                }
+            ]
+          }
+        ]
+      }
+    },
+    {
+      "@name":"hdfs",
+      "@provider":"org.apache.hadoop",
+      "definition":{
+        "@provider":"org.apache.ambari",
+        "@name":"hdfs",
+        "@version":"0.1.0"
+      },
+      "configuration":{
+        "category":[
+          {
+            "@name":"core-site.xml",
+            "property":[
+              {
+                "@name":"hadoop.security.auth_to_local",
+                "@value":"<%= ambari_auth_to_local %>"
+              },
+              {
+                "@name":"hadoop.security.groups.cache.secs",
+                "@value":"14400"
+              },
+              {
+                "@name":"hadoop.http.filter.initializers",
+                "@value":"org.apache.hadoop.http.lib.StaticUserWebFilter"
+              }
+            ]
+          },
+          {
+            "@name":"hdfs-site.xml",
+            "property":[
+              {
+                "@name":"dfs.name.dir",
+                "@value":"<%= ambari_role_prefix %>/hdfs/name"
+              },
+              {
+                "@name":"dfs.data.dir",
+                "@value":"<%= ambari_role_prefix %>/hdfs/data"
+              },
+              {
+                "@name":"dfs.safemode.threshold.pct",
+                "@value":"1.0f"
+              },
+              {
+                "@name":"dfs.datanode.address",
+                "@value":"0.0.0.0:50010"
+              },
+              {
+                "@name":"dfs.datanode.http.address",
+                "@value":"0.0.0.0:50075"
+              },
+              {
+                "@name":"dfs.web.authentication.kerberos.keytab",
+                "@value":"<%= ambari_keytab_dir %>/<%= ambari_namenode_principal %>.service.keytab"
+              },
+              {
+                "@name":"dfs.namenode.keytab.file",
+                "@value":"<%= ambari_keytab_dir %>/<%= ambari_namenode_principal %>.service.keytab"
+              },
+              {
+                "@name":"dfs.secondary.namenode.keytab.file",
+                "@value":"<%= ambari_keytab_dir %>/<%= ambari_namenode_principal %>.service.keytab"
+              },
+              {
+                "@name":"dfs.datanode.keytab.file",
+                "@value":"<%= ambari_keytab_dir %>/<%= ambari_datanode_principal %>.service.keytab"
+              },
+              {
+                "@name":"dfs.secondary.https.port",
+                "@value":"50490"
+              },
+              {
+                "@name":"dfs.https.port",
+                "@value":"50470"
+              },
+              {
+                "@name":"dfs.https.address",
+                "@value":"<%= ambari_namenode_host %>:50470"
+              },
+              {
+                "@name":"dfs.datanode.data.dir.perm",
+                "@value":"700"
+              },
+              {
+                "@name":"dfs.cluster.administrators",
+                "@value":"<%= ambari_hdfs_user %>"
+              },
+              {
+                "@name":"dfs.permissions.superusergroup",
+                "@value":"<%= ambari_hdfs_group %>"
+              },
+              {
+                "@name":"dfs.secondary.http.address",
+                "@value":"<%= ambari_namenode_host %>:50090"
+              },
+              {
+                "@name":"dfs.hosts",
+                "@value":"<%= ambari_role_prefix %>/etc/hadoop/dfs.include"
+              },
+              {
+                "@name":"dfs.hosts.exclude",
+                "@value":"<%= ambari_role_prefix %>/etc/hadoop/dfs.exclude"
+              },
+              {
+                "@name":"dfs.webhdfs.enabled",
+                "@value":"true"
+              }
+            ]
+          },
+          {
+            "@name":"hadoop-env.sh",
+            "property":[
+              {
+                "@name":"HADOOP_CONF_DIR",
+                "@value":"<%= ambari_role_prefix %>/etc/hadoop"
+              },
+              {
+                "@name":"HADOOP_NAMENODE_OPTS",
+                "@value":"-Dsecurity.audit.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT"
+              },
+              {
+                "@name":"HADOOP_SECONDARYNAMENODE_OPTS",
+                "@value":"-Dsecurity.audit.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT"
+              },
+              {
+                "@name":"HADOOP_DATANODE_OPTS",
+                "@value":"-Dsecurity.audit.logger=ERROR,DRFAS"
+              },
+              {
+                "@name":"HADOOP_SECURE_DN_USER",
+                "@value":""
+              },
+              {
+                "@name":"HADOOP_LOG_DIR",
+                "@value":"<%= ambari_role_prefix %>/logs"
+              },
+              {
+                "@name":"HADOOP_SECURE_DN_LOG_DIR",
+                "@value":"<%= ambari_role_prefix %>/logs/secure"
+              },
+              {
+                "@name":"HADOOP_PID_DIR",
+                "@value":"<%= ambari_role_prefix %>/pids"
+              },
+              {
+                "@name":"HADOOP_SECURE_DN_PID_DIR",
+                "@value":"<%= ambari_role_prefix %>/pids/secure"
+              },
+              {
+                "@name":"HADOOP_IDENT_STRING",
+                "@value":"<%= ambari_cluster_name %>"
+              }
+            ]
+          },
+          {
+            "@name":"hadoop-metrics2.properties",
+            "property":{
+              "@name":"*.period",
+              "@value":"60"
+            }
+          }
+        ]
+      }
+    }
+  ]
+}
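
Note on the ACD added above: the <%= ... %> markers are ERB-style placeholders that get filled in per cluster when the agent renders the stack definition into concrete config files (core-site.xml, hdfs-site.xml, mapred-site.xml, hadoop-env.sh, and so on). A minimal Python sketch of that substitution step follows; the binding values are invented for illustration, and the render() helper is a stand-in, not Ambari's actual templating API:

    import re

    # Cluster-specific bindings; the keys mirror placeholders that appear
    # in the ACD above. The values here are made-up examples.
    bindings = {
        "ambari_namenode_host": "nn01.example.com",
        "ambari_jobtracker_host": "jt01.example.com",
        "ambari_service_realm": "EXAMPLE.COM",
        "ambari_role_prefix": "/grid/0",
    }

    def render(template, env):
        """Replace each <%= name %> marker with its bound value."""
        return re.sub(r"<%=\s*(\w+)\s*%>",
                      lambda m: str(env[m.group(1)]), template)

    print(render("hdfs://<%= ambari_namenode_host %>:8020", bindings))
    # -> hdfs://nn01.example.com:8020

Real ERB can evaluate arbitrary Ruby inside the markers; this sketch only covers the plain variable lookups the file above actually uses.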
diff --git a/controller/src/main/webapps.tar.gz b/controller/src/main/webapps.tar.gz
deleted file mode 100644
index 74c4074..0000000
--- a/controller/src/main/webapps.tar.gz
+++ /dev/null
Binary files differ
diff --git a/controller/src/main/webapps/configure-cluster.html b/controller/src/main/webapps/configure-cluster.html
index b1450b4..9b071c2 100644
--- a/controller/src/main/webapps/configure-cluster.html
+++ b/controller/src/main/webapps/configure-cluster.html
@@ -1,4 +1,20 @@
 <!DOCTYPE HTML>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
 <html>
 <body>
 <h2>Configure a cluster</h2>
diff --git a/controller/src/main/webapps/create-cluster.html b/controller/src/main/webapps/create-cluster.html
index 179a6ae..46360ac 100644
--- a/controller/src/main/webapps/create-cluster.html
+++ b/controller/src/main/webapps/create-cluster.html
@@ -1,4 +1,20 @@
 <!DOCTYPE HTML>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
 <html>
 <body>
 <h2>Create A New Cluster</h2>
diff --git a/controller/src/main/webapps/create-nodes.html b/controller/src/main/webapps/create-nodes.html
index cbc7a23..950fcf6 100644
--- a/controller/src/main/webapps/create-nodes.html
+++ b/controller/src/main/webapps/create-nodes.html
@@ -1,4 +1,20 @@
 <!DOCTYPE HTML>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
 <html>
   <body>
     <section>
diff --git a/controller/src/main/webapps/create-nodes.html.orig b/controller/src/main/webapps/create-nodes.html.orig
deleted file mode 100644
index cbc7a23..0000000
--- a/controller/src/main/webapps/create-nodes.html.orig
+++ /dev/null
@@ -1,100 +0,0 @@
-<!DOCTYPE HTML>
-<html>
-  <body>
-    <section>
-      <h3>Define A Node List</h3>
-      <form class="form">
-        <table id="form" class="display">
-          <thead>
-            <th width="200px">Role</th><th>Host(s)</th>
-          </thead>
-        <tbody></tbody>
-        </table>
-        <p>
-        <button type="submit" id="save" onclick="return verifyNodesList()" value="save">Save</button>
-        <button id="cancel" onclick="javascript:window.location='/';">Cancel</button>
-        </p>
-      </form>
-    </section>
-    <script type="text/javascript">
-      var roles = [ "namenode", "secondary-namenode", "datanode", "jobtracker", "tasktracker", "gateway", "jobhistory-server" ];
-      var labelError = "What is the node list name?";
-      var hostError = "Role can not be empty.";
-      function addRole(type) {
-        var buffer = [];
-        var i = 0;
-        var role = "<input type='hidden' id='role."+type+".name' value='"+type+"'>"+type;
-        buffer[i++]=role;
-        var host = "<input type='text' id='role."+type+".host' value='' class='formInput'>";
-        buffer[i++]=host;
-        return buffer;
-      }
-
-      function verifyNodesList() {
-        var labelControl = document.getElementById('label');
-        var invalid = false;
-        if((labelControl.value == "") || (labelControl.value == labelError)) {
-          labelControl.value = labelError;
-          labelControl.select();
-          labelControl.focus();
-          invalid = true;
-        }
-        var label = labelControl.value;
-        for(type in roles) {
-          var control = document.getElementById('role.'+roles[type]+'.host');
-          if((control.value == "") || (control.value == hostError)) {
-            control.value = hostError;
-            control.select();
-            control.focus();
-            invalid = true;
-          }
-        }
-        if(!invalid) {
-          var data = [];
-          var i = 0;
-          var url = window.location.protocol+"//"+window.location.host+"/v1/nodes/manifest/"+labelControl.value;
-          data[i++] = '{"@url":"'+url+'", "roles":[';
-          var list = [];
-          var j = 0;
-          for(type in roles) {
-            var hosts = document.getElementById('role.'+roles[type]+'.host').value;
-            var role = '{"@name":"'+roles[type]+'","host":"'+hosts+'"}';
-            list[j] = role;
-            j++;
-          }
-          data[i++] = list.join(',');
-          data[i++] = ']}';
-          var buffer = data.join("");
-          $.ajax({
-            type: 'POST',
-            url: '/v1/nodes/manifest',
-            contentType: "application/json; charset=utf-8",
-            data: buffer,
-            success: function(data) {
-              window.location.href = '/?func=list-nodes';
-            },
-            dataType:'json'
-          });
-        }
-        return false;
-      }
-
-      var label = "Label <input type='text' id='label' value='' />";
-      $(document).ready(function() {
-        $("#navigation").load("/nav.html");
-        $('#form').dataTable({
-          "bJQueryUI": true, 
-          "sPaginationType": "full_numbers",
-          "oLanguage": {
-            "sInfo" : label
-          },
-          "sDom": '<"H"if>rt<"F"p>'
-        });
-        for(type in roles) {
-          $('#form').dataTable().fnAddData(addRole(roles[type]));
-        }
-        $("#label").focus();
-      });
-    </script>
-  </body>
-</html>
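
For context on the page removed above: its verifyNodesList() handler (kept intact in the surviving create-nodes.html) serializes the role-to-host table into a node manifest and POSTs it as JSON to /v1/nodes/manifest. A sketch of the equivalent request in Python, with the label ("rack1") and host names invented for illustration:

    import json
    import urllib.request

    # Payload shaped like the one verifyNodesList() builds above;
    # the controller host, label, and node hosts are example values.
    manifest = {
        "@url": "http://controller.example.com/v1/nodes/manifest/rack1",
        "roles": [
            {"@name": "namenode", "host": "nn01.example.com"},
            {"@name": "datanode", "host": "dn01.example.com,dn02.example.com"},
        ],
    }

    req = urllib.request.Request(
        "http://controller.example.com/v1/nodes/manifest",
        data=json.dumps(manifest).encode("utf-8"),
        headers={"Content-Type": "application/json; charset=utf-8"},
        method="POST",
    )
    # urllib.request.urlopen(req)  # uncomment against a live controller

On success the web UI redirects to /?func=list-nodes; the sketch leaves the actual send commented out since it assumes a reachable controller.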
diff --git a/controller/src/main/webapps/css/default.css b/controller/src/main/webapps/css/default.css
index b35a7e4..4384696 100644
--- a/controller/src/main/webapps/css/default.css
+++ b/controller/src/main/webapps/css/default.css
@@ -1,3 +1,20 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 body {
   color: #757575;
   font-family: Arial,sans-serif;
diff --git a/controller/src/main/webapps/css/demo_page.css b/controller/src/main/webapps/css/demo_page.css
deleted file mode 100644
index 088d8b9..0000000
--- a/controller/src/main/webapps/css/demo_page.css
+++ /dev/null
@@ -1,99 +0,0 @@
-
-/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
- * General page setup
- */
-#dt_example {
-	font: 80%/1.45em "Lucida Grande", Verdana, Arial, Helvetica, sans-serif;
-	margin: 0;
-	padding: 0;
-	color: #333;
-	background-color: #fff;
-}
-
-
-#dt_example #container {
-	width: 800px;
-	margin: 30px auto;
-	padding: 0;
-}
-
-
-#dt_example #footer {
-	margin: 50px auto 0 auto;
-	padding: 0;
-}
-
-#dt_example #demo {
-	margin: 30px auto 0 auto;
-}
-
-#dt_example .demo_jui {
-	margin: 30px auto 0 auto;
-}
-
-#dt_example .big {
-	font-size: 1.3em;
-	font-weight: bold;
-	line-height: 1.6em;
-	color: #4E6CA3;
-}
-
-#dt_example .spacer {
-	height: 20px;
-	clear: both;
-}
-
-#dt_example .clear {
-	clear: both;
-}
-
-#dt_example pre {
-	padding: 15px;
-	background-color: #F5F5F5;
-	border: 1px solid #CCCCCC;
-}
-
-#dt_example h1 {
-	margin-top: 2em;
-	font-size: 1.3em;
-	font-weight: normal;
-	line-height: 1.6em;
-	color: #4E6CA3;
-	border-bottom: 1px solid #B0BED9;
-	clear: both;
-}
-
-#dt_example h2 {
-	font-size: 1.2em;
-	font-weight: normal;
-	line-height: 1.6em;
-	color: #4E6CA3;
-	clear: both;
-}
-
-#dt_example a {
-	color: #0063DC;
-	text-decoration: none;
-}
-
-#dt_example a:hover {
-	text-decoration: underline;
-}
-
-#dt_example ul {
-	color: #4E6CA3;
-}
-
-.css_right {
-	float: right;
-}
-
-.css_left {
-	float: left;
-}
-
-.demo_links {
-	float: left;
-	width: 50%;
-	margin-bottom: 1em;
-}
\ No newline at end of file
diff --git a/controller/src/main/webapps/css/demo_table.css b/controller/src/main/webapps/css/demo_table.css
deleted file mode 100644
index 3bc0433..0000000
--- a/controller/src/main/webapps/css/demo_table.css
+++ /dev/null
@@ -1,538 +0,0 @@
-/*
- *  File:         demo_table.css
- *  CVS:          $Id$
- *  Description:  CSS descriptions for DataTables demo pages
- *  Author:       Allan Jardine
- *  Created:      Tue May 12 06:47:22 BST 2009
- *  Modified:     $Date$ by $Author$
- *  Language:     CSS
- *  Project:      DataTables
- *
- *  Copyright 2009 Allan Jardine. All Rights Reserved.
- *
- * ***************************************************************************
- * DESCRIPTION
- *
- * The styles given here are suitable for the demos that are used with the standard DataTables
- * distribution (see www.datatables.net). You will most likely wish to modify these styles to
- * meet the layout requirements of your site.
- *
- * Common issues:
- *   'full_numbers' pagination - I use an extra selector on the body tag to ensure that there is
- *     no conflict between the two pagination types. If you want to use full_numbers pagination
- *     ensure that you either have "example_alt_pagination" as a body class name, or better yet,
- *     modify that selector.
- *   Note that the path used for Images is relative. All images are by default located in
- *     ../images/ - relative to this CSS file.
- */
-
-/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
- * DataTables features
- */
-
-.dataTables_wrapper {
-	position: relative;
-	min-height: 302px;
-	clear: both;
-	_height: 302px;
-	zoom: 1; /* Feeling sorry for IE */
-}
-
-.dataTables_processing {
-	position: absolute;
-	top: 50%;
-	left: 50%;
-	width: 250px;
-	height: 30px;
-	margin-left: -125px;
-	margin-top: -15px;
-	padding: 14px 0 2px 0;
-	border: 1px solid #ddd;
-	text-align: center;
-	color: #999;
-	font-size: 14px;
-	background-color: white;
-}
-
-.dataTables_length {
-	width: 40%;
-	float: left;
-}
-
-.dataTables_filter {
-	width: 50%;
-	float: right;
-	text-align: right;
-}
-
-.dataTables_info {
-	width: 60%;
-	float: left;
-}
-
-.dataTables_paginate {
-	width: 44px;
-	* width: 50px;
-	float: right;
-	text-align: right;
-}
-
-/* Pagination nested */
-.paginate_disabled_previous, .paginate_enabled_previous, .paginate_disabled_next, .paginate_enabled_next {
-	height: 19px;
-	width: 19px;
-	margin-left: 3px;
-	float: left;
-}
-
-.paginate_disabled_previous {
-	background-image: url('../images/back_disabled.jpg');
-}
-
-.paginate_enabled_previous {
-	background-image: url('../images/back_enabled.jpg');
-}
-
-.paginate_disabled_next {
-	background-image: url('../images/forward_disabled.jpg');
-}
-
-.paginate_enabled_next {
-	background-image: url('../images/forward_enabled.jpg');
-}
-
-
-
-/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
- * DataTables display
- */
-table.display {
-	margin: 0 auto;
-	clear: both;
-	width: 100%;
-	
-	/* Note Firefox 3.5 and before have a bug with border-collapse
-	 * ( https://bugzilla.mozilla.org/show%5Fbug.cgi?id=155955 ) 
-	 * border-spacing: 0; is one possible option. Conditional-css.com is
-	 * useful for this kind of thing
-	 *
-	 * Further note IE 6/7 has problems when calculating widths with border width.
-	 * It subtracts one px relative to the other browsers from the first column, and
-	 * adds one to the end...
-	 *
-	 * If you want that effect I'd suggest setting a border-top/left on th/td's and 
-	 * then filling in the gaps with other borders.
-	 */
-}
-
-table.display thead th {
-	padding: 3px 18px 3px 10px;
-	border-bottom: 1px solid black;
-	font-weight: bold;
-	cursor: pointer;
-	* cursor: hand;
-}
-
-table.display tfoot th {
-	padding: 3px 18px 3px 10px;
-	border-top: 1px solid black;
-	font-weight: bold;
-}
-
-table.display tr.heading2 td {
-	border-bottom: 1px solid #aaa;
-}
-
-table.display td {
-	padding: 3px 10px;
-}
-
-table.display td.center {
-	text-align: center;
-}
-
-
-
-/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
- * DataTables sorting
- */
-
-.sorting_asc {
-	background: url('../images/sort_asc.png') no-repeat center right;
-}
-
-.sorting_desc {
-	background: url('../images/sort_desc.png') no-repeat center right;
-}
-
-.sorting {
-	background: url('../images/sort_both.png') no-repeat center right;
-}
-
-.sorting_asc_disabled {
-	background: url('../images/sort_asc_disabled.png') no-repeat center right;
-}
-
-.sorting_desc_disabled {
-	background: url('../images/sort_desc_disabled.png') no-repeat center right;
-}
-
-
-
-
-
-/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
- * DataTables row classes
- */
-table.display tr.odd.gradeA {
-	background-color: #ddffdd;
-}
-
-table.display tr.even.gradeA {
-	background-color: #eeffee;
-}
-
-table.display tr.odd.gradeC {
-	background-color: #ddddff;
-}
-
-table.display tr.even.gradeC {
-	background-color: #eeeeff;
-}
-
-table.display tr.odd.gradeX {
-	background-color: #ffdddd;
-}
-
-table.display tr.even.gradeX {
-	background-color: #ffeeee;
-}
-
-table.display tr.odd.gradeU {
-	background-color: #ddd;
-}
-
-table.display tr.even.gradeU {
-	background-color: #eee;
-}
-
-
-tr.odd {
-	background-color: #E2E4FF;
-}
-
-tr.even {
-	background-color: white;
-}
-
-
-
-
-
-/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
- * Misc
- */
-.dataTables_scroll {
-	clear: both;
-}
-
-.dataTables_scrollBody {
-	*margin-top: -1px;
-}
-
-.top, .bottom {
-	padding: 15px;
-	background-color: #F5F5F5;
-	border: 1px solid #CCCCCC;
-}
-
-.top .dataTables_info {
-	float: none;
-}
-
-.clear {
-	clear: both;
-}
-
-.dataTables_empty {
-	text-align: center;
-}
-
-tfoot input {
-	margin: 0.5em 0;
-	width: 100%;
-	color: #444;
-}
-
-tfoot input.search_init {
-	color: #999;
-}
-
-td.group {
-	background-color: #d1cfd0;
-	border-bottom: 2px solid #A19B9E;
-	border-top: 2px solid #A19B9E;
-}
-
-td.details {
-	background-color: #d1cfd0;
-	border: 2px solid #A19B9E;
-}
-
-
-.example_alt_pagination div.dataTables_info {
-	width: 40%;
-}
-
-.paging_full_numbers {
-	width: 400px;
-	height: 22px;
-	line-height: 22px;
-}
-
-.paging_full_numbers span.paginate_button,
- 	.paging_full_numbers span.paginate_active {
-	border: 1px solid #aaa;
-	-webkit-border-radius: 5px;
-	-moz-border-radius: 5px;
-	padding: 2px 5px;
-	margin: 0 3px;
-	cursor: pointer;
-	*cursor: hand;
-}
-
-.paging_full_numbers span.paginate_button {
-	background-color: #ddd;
-}
-
-.paging_full_numbers span.paginate_button:hover {
-	background-color: #ccc;
-}
-
-.paging_full_numbers span.paginate_active {
-	background-color: #99B3FF;
-}
-
-table.display tr.even.row_selected td {
-	background-color: #B0BED9;
-}
-
-table.display tr.odd.row_selected td {
-	background-color: #9FAFD1;
-}
-
-
-/*
- * Sorting classes for columns
- */
-/* For the standard odd/even */
-tr.odd td.sorting_1 {
-	background-color: #D3D6FF;
-}
-
-tr.odd td.sorting_2 {
-	background-color: #DADCFF;
-}
-
-tr.odd td.sorting_3 {
-	background-color: #E0E2FF;
-}
-
-tr.even td.sorting_1 {
-	background-color: #EAEBFF;
-}
-
-tr.even td.sorting_2 {
-	background-color: #F2F3FF;
-}
-
-tr.even td.sorting_3 {
-	background-color: #F9F9FF;
-}
-
-
-/* For the Conditional-CSS grading rows */
-/*
- 	Colour calculations (based off the main row colours)
-  Level 1:
-		dd > c4
-		ee > d5
-	Level 2:
-	  dd > d1
-	  ee > e2
- */
-tr.odd.gradeA td.sorting_1 {
-	background-color: #c4ffc4;
-}
-
-tr.odd.gradeA td.sorting_2 {
-	background-color: #d1ffd1;
-}
-
-tr.odd.gradeA td.sorting_3 {
-	background-color: #d1ffd1;
-}
-
-tr.even.gradeA td.sorting_1 {
-	background-color: #d5ffd5;
-}
-
-tr.even.gradeA td.sorting_2 {
-	background-color: #e2ffe2;
-}
-
-tr.even.gradeA td.sorting_3 {
-	background-color: #e2ffe2;
-}
-
-tr.odd.gradeC td.sorting_1 {
-	background-color: #c4c4ff;
-}
-
-tr.odd.gradeC td.sorting_2 {
-	background-color: #d1d1ff;
-}
-
-tr.odd.gradeC td.sorting_3 {
-	background-color: #d1d1ff;
-}
-
-tr.even.gradeC td.sorting_1 {
-	background-color: #d5d5ff;
-}
-
-tr.even.gradeC td.sorting_2 {
-	background-color: #e2e2ff;
-}
-
-tr.even.gradeC td.sorting_3 {
-	background-color: #e2e2ff;
-}
-
-tr.odd.gradeX td.sorting_1 {
-	background-color: #ffc4c4;
-}
-
-tr.odd.gradeX td.sorting_2 {
-	background-color: #ffd1d1;
-}
-
-tr.odd.gradeX td.sorting_3 {
-	background-color: #ffd1d1;
-}
-
-tr.even.gradeX td.sorting_1 {
-	background-color: #ffd5d5;
-}
-
-tr.even.gradeX td.sorting_2 {
-	background-color: #ffe2e2;
-}
-
-tr.even.gradeX td.sorting_3 {
-	background-color: #ffe2e2;
-}
-
-tr.odd.gradeU td.sorting_1 {
-	background-color: #c4c4c4;
-}
-
-tr.odd.gradeU td.sorting_2 {
-	background-color: #d1d1d1;
-}
-
-tr.odd.gradeU td.sorting_3 {
-	background-color: #d1d1d1;
-}
-
-tr.even.gradeU td.sorting_1 {
-	background-color: #d5d5d5;
-}
-
-tr.even.gradeU td.sorting_2 {
-	background-color: #e2e2e2;
-}
-
-tr.even.gradeU td.sorting_3 {
-	background-color: #e2e2e2;
-}
-
-
-/*
- * Row highlighting example
- */
-.ex_highlight #example tbody tr.even:hover, #example tbody tr.even td.highlighted {
-	background-color: #ECFFB3;
-}
-
-.ex_highlight #example tbody tr.odd:hover, #example tbody tr.odd td.highlighted {
-	background-color: #E6FF99;
-}
-
-.ex_highlight_row #example tr.even:hover {
-	background-color: #ECFFB3;
-}
-
-.ex_highlight_row #example tr.even:hover td.sorting_1 {
-	background-color: #DDFF75;
-}
-
-.ex_highlight_row #example tr.even:hover td.sorting_2 {
-	background-color: #E7FF9E;
-}
-
-.ex_highlight_row #example tr.even:hover td.sorting_3 {
-	background-color: #E2FF89;
-}
-
-.ex_highlight_row #example tr.odd:hover {
-	background-color: #E6FF99;
-}
-
-.ex_highlight_row #example tr.odd:hover td.sorting_1 {
-	background-color: #D6FF5C;
-}
-
-.ex_highlight_row #example tr.odd:hover td.sorting_2 {
-	background-color: #E0FF84;
-}
-
-.ex_highlight_row #example tr.odd:hover td.sorting_3 {
-	background-color: #DBFF70;
-}
-
-
-/*
- * KeyTable
- */
-table.KeyTable td {
-	border: 3px solid transparent;
-}
-
-table.KeyTable td.focus {
-	border: 3px solid #3366FF;
-}
-
-table.display tr.gradeA {
-	background-color: #eeffee;
-}
-
-table.display tr.gradeC {
-	background-color: #ddddff;
-}
-
-table.display tr.gradeX {
-	background-color: #ffdddd;
-}
-
-table.display tr.gradeU {
-	background-color: #ddd;
-}
-
-div.box {
-	height: 100px;
-	padding: 10px;
-	overflow: auto;
-	border: 1px solid #8080FF;
-	background-color: #E5E5FF;
-}
diff --git a/controller/src/main/webapps/css/demo_table_jui.css b/controller/src/main/webapps/css/demo_table_jui.css
deleted file mode 100644
index 493a8e4..0000000
--- a/controller/src/main/webapps/css/demo_table_jui.css
+++ /dev/null
@@ -1,520 +0,0 @@
-/*
- *  File:         demo_table_jui.css
- *  CVS:          $Id$
- *  Description:  CSS descriptions for DataTables demo pages
- *  Author:       Allan Jardine
- *  Created:      Tue May 12 06:47:22 BST 2009
- *  Modified:     $Date$ by $Author$
- *  Language:     CSS
- *  Project:      DataTables
- *
- *  Copyright 2009 Allan Jardine. All Rights Reserved.
- *
- * ***************************************************************************
- * DESCRIPTION
- *
- * The styles given here are suitable for the demos that are used with the standard DataTables
- * distribution (see www.datatables.net). You will most likely wish to modify these styles to
- * meet the layout requirements of your site.
- *
- * Common issues:
- *   'full_numbers' pagination - I use an extra selector on the body tag to ensure that there is
- *     no conflict between the two pagination types. If you want to use full_numbers pagination
- *     ensure that you either have "example_alt_pagination" as a body class name, or better yet,
- *     modify that selector.
- *   Note that the path used for Images is relative. All images are by default located in
- *     ../images/ - relative to this CSS file.
- */
-
-
-/*
- * jQuery UI specific styling
- */
-
-.paging_two_button .ui-button {
-	float: left;
-	cursor: pointer;
-	* cursor: hand;
-}
-
-.paging_full_numbers .ui-button {
-	padding: 2px 6px;
-	margin: 0;
-	cursor: pointer;
-	* cursor: hand;
-}
-
-.dataTables_paginate .ui-button {
-	margin-right: -0.1em !important;
-}
-
-.paging_full_numbers {
-	width: 350px !important;
-}
-
-.dataTables_wrapper .ui-toolbar {
-	padding: 5px;
-}
-
-.dataTables_paginate {
-	width: auto;
-}
-
-.dataTables_info {
-	padding-top: 3px;
-}
-
-table.display thead th {
-	padding: 3px 0px 3px 10px;
-	cursor: pointer;
-	* cursor: hand;
-}
-
-div.dataTables_wrapper .ui-widget-header {
-	font-weight: normal;
-}
-
-
-/*
- * Sort arrow icon positioning
- */
-table.display thead th div.DataTables_sort_wrapper {
-	position: relative;
-	padding-right: 20px;
-	padding-right: 20px;
-}
-
-table.display thead th div.DataTables_sort_wrapper span {
-	position: absolute;
-	top: 50%;
-	margin-top: -8px;
-	right: 0;
-}
-
-
-
-
-/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
- *
- * Everything below this line is the same as demo_table.css. This file is
- * required for 'cleanliness' of the markup
- *
- * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
-
-
-
-/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
- * DataTables features
- */
-
-.dataTables_wrapper {
-	position: relative;
-	_height: 302px;
-	clear: both;
-}
-
-.dataTables_processing {
-	position: absolute;
-	top: 0px;
-	left: 50%;
-	width: 250px;
-	margin-left: -125px;
-	border: 1px solid #ddd;
-	text-align: center;
-	color: #999;
-	font-size: 11px;
-	padding: 2px 0;
-}
-
-.dataTables_length {
-	width: 40%;
-	float: left;
-}
-
-.dataTables_filter {
-	width: 50%;
-	float: right;
-	text-align: right;
-}
-
-.dataTables_info {
-	width: 50%;
-	float: left;
-}
-
-.dataTables_paginate {
-	float: right;
-	text-align: right;
-}
-
-/* Pagination nested */
-.paginate_disabled_previous, .paginate_enabled_previous, .paginate_disabled_next, .paginate_enabled_next {
-	height: 19px;
-	width: 19px;
-	margin-left: 3px;
-	float: left;
-}
-
-.paginate_disabled_previous {
-	background-image: url('../images/back_disabled.jpg');
-}
-
-.paginate_enabled_previous {
-	background-image: url('../images/back_enabled.jpg');
-}
-
-.paginate_disabled_next {
-	background-image: url('../images/forward_disabled.jpg');
-}
-
-.paginate_enabled_next {
-	background-image: url('../images/forward_enabled.jpg');
-}
-
-
-
-/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
- * DataTables display
- */
-table.display {
-	margin: 0 auto;
-	width: 100%;
-	clear: both;
-	border-collapse: collapse;
-}
-
-table.display tfoot th {
-	padding: 3px 0px 3px 10px;
-	font-weight: bold;
-	font-weight: normal;
-}
-
-table.display tr.heading2 td {
-	border-bottom: 1px solid #aaa;
-}
-
-table.display td {
-	padding: 3px 10px;
-}
-
-table.display td.center {
-	text-align: center;
-}
-
-
-
-/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
- * DataTables sorting
- */
-
-.sorting_asc {
-	background: url('../images/sort_asc.png') no-repeat center right;
-}
-
-.sorting_desc {
-	background: url('../images/sort_desc.png') no-repeat center right;
-}
-
-.sorting {
-	background: url('../images/sort_both.png') no-repeat center right;
-}
-
-.sorting_asc_disabled {
-	background: url('../images/sort_asc_disabled.png') no-repeat center right;
-}
-
-.sorting_desc_disabled {
-	background: url('../images/sort_desc_disabled.png') no-repeat center right;
-}
-
-
-
-
-/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
- * DataTables row classes
- */
-table.display tr.odd.gradeA {
-	background-color: #ddffdd;
-}
-
-table.display tr.even.gradeA {
-	background-color: #eeffee;
-}
-
-
-
-
-table.display tr.odd.gradeA {
-	background-color: #ddffdd;
-}
-
-table.display tr.even.gradeA {
-	background-color: #eeffee;
-}
-
-table.display tr.odd.gradeC {
-	background-color: #ddddff;
-}
-
-table.display tr.even.gradeC {
-	background-color: #eeeeff;
-}
-
-table.display tr.odd.gradeX {
-	background-color: #ffdddd;
-}
-
-table.display tr.even.gradeX {
-	background-color: #ffeeee;
-}
-
-table.display tr.odd.gradeU {
-	background-color: #ddd;
-}
-
-table.display tr.even.gradeU {
-	background-color: #eee;
-}
-
-
-tr.odd {
-	background-color: #E2E4FF;
-}
-
-tr.even {
-	background-color: white;
-}
-
-
-
-
-
-/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
- * Misc
- */
-.dataTables_scroll {
-	clear: both;
-}
-
-.top, .bottom {
-	padding: 15px;
-	background-color: #F5F5F5;
-	border: 1px solid #CCCCCC;
-}
-
-.top .dataTables_info {
-	float: none;
-}
-
-.clear {
-	clear: both;
-}
-
-.dataTables_empty {
-	text-align: center;
-}
-
-tfoot input {
-	margin: 0.5em 0;
-	width: 100%;
-	color: #444;
-}
-
-tfoot input.search_init {
-	color: #999;
-}
-
-td.group {
-	background-color: #d1cfd0;
-	border-bottom: 2px solid #A19B9E;
-	border-top: 2px solid #A19B9E;
-}
-
-td.details {
-	background-color: #d1cfd0;
-	border: 2px solid #A19B9E;
-}
-
-
-.example_alt_pagination div.dataTables_info {
-	width: 40%;
-}
-
-.paging_full_numbers span.paginate_button,
- 	.paging_full_numbers span.paginate_active {
-	border: 1px solid #aaa;
-	-webkit-border-radius: 5px;
-	-moz-border-radius: 5px;
-	padding: 2px 5px;
-	margin: 0 3px;
-	cursor: pointer;
-	*cursor: hand;
-}
-
-.paging_full_numbers span.paginate_button {
-	background-color: #ddd;
-}
-
-.paging_full_numbers span.paginate_button:hover {
-	background-color: #ccc;
-}
-
-.paging_full_numbers span.paginate_active {
-	background-color: #99B3FF;
-}
-
-table.display tr.even.row_selected td {
-	background-color: #B0BED9;
-}
-
-table.display tr.odd.row_selected td {
-	background-color: #9FAFD1;
-}
-
-
-/*
- * Sorting classes for columns
- */
-/* For the standard odd/even */
-tr.odd td.sorting_1 {
-	background-color: #D3D6FF;
-}
-
-tr.odd td.sorting_2 {
-	background-color: #DADCFF;
-}
-
-tr.odd td.sorting_3 {
-	background-color: #E0E2FF;
-}
-
-tr.even td.sorting_1 {
-	background-color: #EAEBFF;
-}
-
-tr.even td.sorting_2 {
-	background-color: #F2F3FF;
-}
-
-tr.even td.sorting_3 {
-	background-color: #F9F9FF;
-}
-
-
-/* For the Conditional-CSS grading rows */
-/*
- 	Colour calculations (based off the main row colours)
-  Level 1:
-		dd > c4
-		ee > d5
-	Level 2:
-	  dd > d1
-	  ee > e2
- */
-tr.odd.gradeA td.sorting_1 {
-	background-color: #c4ffc4;
-}
-
-tr.odd.gradeA td.sorting_2 {
-	background-color: #d1ffd1;
-}
-
-tr.odd.gradeA td.sorting_3 {
-	background-color: #d1ffd1;
-}
-
-tr.even.gradeA td.sorting_1 {
-	background-color: #d5ffd5;
-}
-
-tr.even.gradeA td.sorting_2 {
-	background-color: #e2ffe2;
-}
-
-tr.even.gradeA td.sorting_3 {
-	background-color: #e2ffe2;
-}
-
-tr.odd.gradeC td.sorting_1 {
-	background-color: #c4c4ff;
-}
-
-tr.odd.gradeC td.sorting_2 {
-	background-color: #d1d1ff;
-}
-
-tr.odd.gradeC td.sorting_3 {
-	background-color: #d1d1ff;
-}
-
-tr.even.gradeC td.sorting_1 {
-	background-color: #d5d5ff;
-}
-
-tr.even.gradeC td.sorting_2 {
-	background-color: #e2e2ff;
-}
-
-tr.even.gradeC td.sorting_3 {
-	background-color: #e2e2ff;
-}
-
-tr.odd.gradeX td.sorting_1 {
-	background-color: #ffc4c4;
-}
-
-tr.odd.gradeX td.sorting_2 {
-	background-color: #ffd1d1;
-}
-
-tr.odd.gradeX td.sorting_3 {
-	background-color: #ffd1d1;
-}
-
-tr.even.gradeX td.sorting_1 {
-	background-color: #ffd5d5;
-}
-
-tr.even.gradeX td.sorting_2 {
-	background-color: #ffe2e2;
-}
-
-tr.even.gradeX td.sorting_3 {
-	background-color: #ffe2e2;
-}
-
-tr.odd.gradeU td.sorting_1 {
-	background-color: #c4c4c4;
-}
-
-tr.odd.gradeU td.sorting_2 {
-	background-color: #d1d1d1;
-}
-
-tr.odd.gradeU td.sorting_3 {
-	background-color: #d1d1d1;
-}
-
-tr.even.gradeU td.sorting_1 {
-	background-color: #d5d5d5;
-}
-
-tr.even.gradeU td.sorting_2 {
-	background-color: #e2e2e2;
-}
-
-tr.even.gradeU td.sorting_3 {
-	background-color: #e2e2e2;
-}
-
-
-/*
- * Row highlighting example
- */
-.ex_highlight #example tbody tr.even:hover, #example tbody tr.even td.highlighted {
-	background-color: #ECFFB3;
-}
-
-.ex_highlight #example tbody tr.odd:hover, #example tbody tr.odd td.highlighted {
-	background-color: #E6FF99;
-}
diff --git a/controller/src/main/webapps/css/smoothness/jquery-ui-1.8.13.custom.css b/controller/src/main/webapps/css/smoothness/jquery-ui-1.8.13.custom.css
index d3fb1c7..9375e77 100644
--- a/controller/src/main/webapps/css/smoothness/jquery-ui-1.8.13.custom.css
+++ b/controller/src/main/webapps/css/smoothness/jquery-ui-1.8.13.custom.css
@@ -303,10 +303,10 @@
  */
 .ui-resizable { position: relative;}
 .ui-resizable-handle { position: absolute;font-size: 0.1px;z-index: 99999; display: block;
-	/* http://bugs.jqueryui.com/ticket/7233
-	 - Resizable: resizable handles fail to work in IE if transparent and content overlaps
-	*/
-	background-image:url(data:image/gif;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs=);
+        /* http://bugs.jqueryui.com/ticket/7233
+         - Resizable: resizable handles fail to work in IE if transparent and content overlaps
+        */
+        background-image:url(data:image/gif;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs=);
 }
 .ui-resizable-disabled .ui-resizable-handle, .ui-resizable-autohide .ui-resizable-handle { display: none; }
 .ui-resizable-n { cursor: n-resize; height: 7px; width: 100%; top: -5px; left: 0; }
@@ -354,7 +354,7 @@
  *
  * http://docs.jquery.com/UI/Autocomplete#theming
  */
-.ui-autocomplete { position: absolute; cursor: default; }	
+.ui-autocomplete { position: absolute; cursor: default; }       
 
 /* workarounds */
 * html .ui-autocomplete { width:1px; } /* without this, the menu expands to 100% in IE6 */
@@ -369,34 +369,34 @@
  * http://docs.jquery.com/UI/Menu#theming
  */
 .ui-menu {
-	list-style:none;
-	padding: 2px;
-	margin: 0;
-	display:block;
-	float: left;
+        list-style:none;
+        padding: 2px;
+        margin: 0;
+        display:block;
+        float: left;
 }
 .ui-menu .ui-menu {
-	margin-top: -3px;
+        margin-top: -3px;
 }
 .ui-menu .ui-menu-item {
-	margin:0;
-	padding: 0;
-	zoom: 1;
-	float: left;
-	clear: left;
-	width: 100%;
+        margin:0;
+        padding: 0;
+        zoom: 1;
+        float: left;
+        clear: left;
+        width: 100%;
 }
 .ui-menu .ui-menu-item a {
-	text-decoration:none;
-	display:block;
-	padding:.2em .4em;
-	line-height:1.5;
-	zoom:1;
+        text-decoration:none;
+        display:block;
+        padding:.2em .4em;
+        line-height:1.5;
+        zoom:1;
 }
 .ui-menu .ui-menu-item a.ui-state-hover,
 .ui-menu .ui-menu-item a.ui-state-active {
-	font-weight: normal;
-	margin: -1px;
+        font-weight: normal;
+        margin: -1px;
 }
 /*
  * jQuery UI Button 1.8.13
diff --git a/controller/src/main/webapps/define-cluster.html b/controller/src/main/webapps/define-cluster.html
deleted file mode 100644
index e342694..0000000
--- a/controller/src/main/webapps/define-cluster.html
+++ /dev/null
@@ -1,41 +0,0 @@
-<!DOCTYPE HTML>
-<html>
-  <head>
-    <title>Manage Nodes</title>
-  <head>
-<body>
-  <form>
-    <table>
-      <tr>
-        <td>Cluster Name</td>
-        <td><input type="text" name="name" value=""/></td>
-      </tr>
-      <tr>
-        <td>Name Node</td>
-        <td><input type="text" name="role" value=""/></td>
-      </tr>
-      <tr>
-        <td>Data Nodes</td>
-        <td><input type="text" name="role" value=""/></td>
-      </tr>
-      <tr>
-        <td>JobTracker</td>
-        <td><input type="text" name="host" value=""/></td>
-      </tr>
-      <tr>
-        <td>Task Trackers</td>
-        <td><input type="text" name="host" value=""/></td>
-      </tr>
-      <tr>
-        <td>Package URL</td>
-        <td><input type="text" name="host" value=""/></td>
-      </tr>
-      <tr>
-        <td>
-          <input type="button" name="command" value="Make"/>
-        </td>
-      </tr>
-    </table>
-  </form>
-</body>
-</html>
diff --git a/controller/src/main/webapps/delete-cluster.html b/controller/src/main/webapps/delete-cluster.html
index 39f580b..bd84065 100644
--- a/controller/src/main/webapps/delete-cluster.html
+++ b/controller/src/main/webapps/delete-cluster.html
@@ -1,4 +1,20 @@
 <!DOCTYPE HTML>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
 <html>
 <body>
 <section>
diff --git a/controller/src/main/webapps/index.html b/controller/src/main/webapps/index.html
index 53e839d..2e51b82 100644
--- a/controller/src/main/webapps/index.html
+++ b/controller/src/main/webapps/index.html
@@ -1,9 +1,25 @@
 <!DOCTYPE HTML>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
 <html>
   <head>
     <title>Hadoop Management System</title>
     <style type="text/css" media="screen">
-			@import "/css/demo_table_jui.css";
+                        @import "/css/demo_table_jui.css";
                         @import "/css/smoothness/jquery-ui-1.8.13.custom.css";
                         @import "/css/default.css";
     </style>
diff --git a/controller/src/main/webapps/js/default.js b/controller/src/main/webapps/js/default.js
index ba18f7c..963cd3e 100644
--- a/controller/src/main/webapps/js/default.js
+++ b/controller/src/main/webapps/js/default.js
@@ -1,3 +1,20 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 function renderCluster(cluster) {
   var buffer = [];
   var i=0;
diff --git a/controller/src/main/webapps/list-nodes.html b/controller/src/main/webapps/list-nodes.html
index d7176ea..8b03287 100644
--- a/controller/src/main/webapps/list-nodes.html
+++ b/controller/src/main/webapps/list-nodes.html
@@ -1,4 +1,20 @@
 <!DOCTYPE HTML>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
 <html>
   <body>
     <section>
diff --git a/controller/src/main/webapps/manage-clusters.html b/controller/src/main/webapps/manage-clusters.html
index c233312..cd3f563 100644
--- a/controller/src/main/webapps/manage-clusters.html
+++ b/controller/src/main/webapps/manage-clusters.html
@@ -1,4 +1,20 @@
 <!DOCTYPE HTML>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
 <html>
   <body>
     <section>
diff --git a/controller/src/main/webapps/manage-nodes.html b/controller/src/main/webapps/manage-nodes.html
index 6314bd9..f509c2c 100644
--- a/controller/src/main/webapps/manage-nodes.html
+++ b/controller/src/main/webapps/manage-nodes.html
@@ -1,4 +1,20 @@
 <!DOCTYPE HTML>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
 <html>
 <body>
   <form>
diff --git a/controller/src/main/webapps/nav.html b/controller/src/main/webapps/nav.html
index c1e57da..08377a9 100644
--- a/controller/src/main/webapps/nav.html
+++ b/controller/src/main/webapps/nav.html
@@ -1,3 +1,19 @@
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
 <ul id="drawers">
   <li class="drawer">
     <h2 class="drawer-handle open">Cluster</h2>
diff --git a/controller/src/main/webapps/status-cluster.html b/controller/src/main/webapps/status-cluster.html
index d71cdc2..0b9e0d6 100644
--- a/controller/src/main/webapps/status-cluster.html
+++ b/controller/src/main/webapps/status-cluster.html
@@ -1,4 +1,20 @@
 <!DOCTYPE HTML>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
 <html>
   <body>
     <section>
diff --git a/controller/src/main/webapps/status-command.html b/controller/src/main/webapps/status-command.html
index e3adfe2..754739b 100644
--- a/controller/src/main/webapps/status-command.html
+++ b/controller/src/main/webapps/status-command.html
@@ -1,4 +1,20 @@
 <!DOCTYPE HTML>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
 <html>
 <body>
 <h2>Deployment</h2>
diff --git a/controller/src/main/webapps/status-nodes.html b/controller/src/main/webapps/status-nodes.html
index ef00f16..ad11250 100644
--- a/controller/src/main/webapps/status-nodes.html
+++ b/controller/src/main/webapps/status-nodes.html
@@ -1,4 +1,20 @@
 <!DOCTYPE HTML>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
 <html>
   <body>
     <section>
diff --git a/controller/src/main/webapps/test-config.xml b/controller/src/main/webapps/test-config.xml
deleted file mode 100644
index 3a596a6..0000000
--- a/controller/src/main/webapps/test-config.xml
+++ /dev/null
@@ -1,13 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
-  <configManifest>
-    <actions xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="packageAction">
-      <actionId>0</actionId>
-      <actionType>install</actionType>
-      <expectedResults>
-        <type>PACKAGE</type>
-        <name>hadoop</name>
-        <status>INSTALLED</status>
-      </expectedResults>
-      <dry-run>false</dry-run>
-    </actions>
-  </configManifest>
diff --git a/controller/src/main/webapps/test.xml b/controller/src/main/webapps/test.xml
deleted file mode 100644
index 60b7686..0000000
--- a/controller/src/main/webapps/test.xml
+++ /dev/null
@@ -1,8 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
-<softwareManifest name="hadoop" version="0.20.204">
-  <roles name="namenode">
-    <package>
-      <name>http://people.apache.org/~omalley/hadoop-0.20.204.0-rc3/hadoop-0.20.204.0.tar.gz</name>
-    </package>
-  </roles>
-</softwareManifest>
diff --git a/controller/src/main/webapps/wadl.xsl b/controller/src/main/webapps/wadl.xsl
new file mode 100644
index 0000000..e9d0f41
--- /dev/null
+++ b/controller/src/main/webapps/wadl.xsl
@@ -0,0 +1,669 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+    wadl.xsl (06-May-2011)
+
+    Transforms Web Application Description Language (WADL) XML documents into HTML.
+
+    Mark Sawers <mark.sawers@ipc.com>
+
+    Limitations:
+        * Ignores globally defined methods, referred to from a resource using a method reference element.
+          Methods must be embedded in a resource element.
+        * Ditto for globally defined representations. Representations must be embedded within request
+          and response elements.
+        * Ignores type and queryType attributes of resource element.
+        * Ignores resource_type element.
+        * Ignores profile attribute of representation element.
+        * Ignores path attribute and child link elements of param element.
+
+    Copyright (c) 2011 IPC Systems, Inc.
+
+    Parts of this work are adapted from Mark Nottingham's wadl_documentation.xsl, at
+        https://github.com/mnot/wadl_stylesheets.
+
+    This work is licensed under the Creative Commons Attribution-ShareAlike 3.0 License.
+    To view a copy of this license, visit
+        http://creativecommons.org/licenses/by-sa/3.0/
+    or send a letter to
+        Creative Commons
+        543 Howard Street, 5th Floor
+        San Francisco, California, 94105, USA
+ -->
+<xsl:stylesheet
+ xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="2.0"
+ xmlns:wadl="http://research.sun.com/wadl/2006/10"
+ xmlns:xs="http://www.w3.org/2001/XMLSchema"
+ xmlns:html="http://www.w3.org/1999/xhtml"
+ xmlns="http://www.w3.org/1999/xhtml"
+>
+
+<!-- Global variables -->
+<xsl:variable name="g_resourcesBase" select="wadl:application/wadl:resources/@base"/>
+
+<!-- Template for top-level doc element -->
+<xsl:template match="wadl:application">
+    <html>
+    <head>
+        <xsl:call-template name="getStyle"/>
+        <title><xsl:call-template name="getTitle"/></title>
+    </head>
+    <body>
+    <h1><xsl:call-template name="getTitle"/></h1>
+    <xsl:call-template name="getDoc">
+        <xsl:with-param name="base" select="$g_resourcesBase"/>
+    </xsl:call-template>
+
+    <!-- Summary -->
+    <h2>Summary</h2>
+    <table>
+        <tr>
+            <th>Resource</th>
+            <th>Method</th>
+            <th>Description</th>
+        </tr>
+        <xsl:for-each select="wadl:resources/wadl:resource">
+            <xsl:call-template name="processResourceSummary">
+                <xsl:with-param name="resourceBase" select="$g_resourcesBase"/>
+                <xsl:with-param name="resourcePath" select="@path"/>
+                <xsl:with-param name="lastResource" select="position() = last()"/>
+            </xsl:call-template>
+        </xsl:for-each>
+    </table>
+    <p></p>
+
+    <!-- Grammars -->
+    <xsl:if test="wadl:grammars/wadl:include">
+        <h2>Grammars</h2>
+        <p>
+            <xsl:for-each select="wadl:grammars/wadl:include">
+                <xsl:variable name="href" select="@href"/>
+                <a href="{$href}"><xsl:value-of select="$href"/></a>
+                <xsl:if test="position() != last()"><br/></xsl:if>  <!-- Add a spacer -->
+            </xsl:for-each>
+        </p>
+    </xsl:if>
+
+    <!-- Detail -->
+    <h2>Resources</h2>
+    <xsl:for-each select="wadl:resources">
+        <xsl:call-template name="getDoc">
+            <xsl:with-param name="base" select="$g_resourcesBase"/>
+        </xsl:call-template>
+        <br/>
+    </xsl:for-each>
+
+    <xsl:for-each select="wadl:resources/wadl:resource">
+        <xsl:call-template name="processResourceDetail">
+            <xsl:with-param name="resourceBase" select="$g_resourcesBase"/>
+            <xsl:with-param name="resourcePath" select="@path"/>
+        </xsl:call-template>
+    </xsl:for-each>
+
+    </body>
+    </html>
+</xsl:template>
+
+<!-- Supporting templates (functions) -->
+
+<xsl:template name="processResourceSummary">
+    <xsl:param name="resourceBase"/>
+    <xsl:param name="resourcePath"/>
+    <xsl:param name="lastResource"/>
+
+    <xsl:if test="wadl:method">
+        <tr>
+            <!-- Resource -->
+            <td class="summary">
+                <xsl:variable name="id"><xsl:call-template name="getId"/></xsl:variable>
+                <a href="#{$id}">
+                    <xsl:call-template name="getFullResourcePath">
+                        <xsl:with-param name="base" select="$resourceBase"/>
+                        <xsl:with-param name="path" select="$resourcePath"/>
+                    </xsl:call-template>
+                </a>
+            </td>
+            <!-- Method -->
+            <td class="summary">
+                <xsl:for-each select="wadl:method">
+                    <xsl:variable name="name" select="@name"/>
+                    <xsl:variable name="id2"><xsl:call-template name="getId"/></xsl:variable>
+                    <a href="#{$id2}"><xsl:value-of select="$name"/></a>
+                    <xsl:for-each select="wadl:doc"><br/></xsl:for-each>
+                    <xsl:if test="position() != last()"><br/></xsl:if>  <!-- Add a spacer -->
+                </xsl:for-each>
+                <br/>
+            </td>
+            <!-- Description -->
+            <td class="summary">
+                <xsl:for-each select="wadl:method">
+                    <xsl:call-template name="getDoc">
+                        <xsl:with-param name="base" select="$resourceBase"/>
+                    </xsl:call-template>
+                    <br/>
+                    <xsl:if test="position() != last()"><br/></xsl:if>  <!-- Add a spacer -->
+                </xsl:for-each>
+            </td>
+        </tr>
+        <!-- Add separator if not the last resource -->
+        <xsl:if test="wadl:method and not($lastResource)">
+            <tr><td class="summarySeparator"></td><td class="summarySeparator"/><td class="summarySeparator"/></tr>
+        </xsl:if>
+    </xsl:if>   <!-- wadl:method -->
+
+    <!-- Call recursively for child resources -->
+    <xsl:for-each select="wadl:resource">
+        <xsl:variable name="base">
+            <xsl:call-template name="getFullResourcePath">
+                <xsl:with-param name="base" select="$resourceBase"/>
+                <xsl:with-param name="path" select="$resourcePath"/>
+            </xsl:call-template>
+        </xsl:variable>
+        <xsl:call-template name="processResourceSummary">
+            <xsl:with-param name="resourceBase" select="$base"/>
+            <xsl:with-param name="resourcePath" select="@path"/>
+            <xsl:with-param name="lastResource" select="$lastResource and position() = last()"/>
+        </xsl:call-template>
+    </xsl:for-each>
+
+</xsl:template>
+
+<xsl:template name="processResourceDetail">
+    <xsl:param name="resourceBase"/>
+    <xsl:param name="resourcePath"/>
+
+    <xsl:if test="wadl:method">
+        <h3>
+            <xsl:variable name="id"><xsl:call-template name="getId"/></xsl:variable>
+            <a name="{$id}">
+                <xsl:call-template name="getFullResourcePath">
+                    <xsl:with-param name="base" select="$resourceBase"/>
+                    <xsl:with-param name="path" select="$resourcePath"/>
+                </xsl:call-template>
+            </a>
+        </h3>
+        <p>
+            <xsl:call-template name="getDoc">
+                <xsl:with-param name="base" select="$resourceBase"/>
+            </xsl:call-template>
+        </p>
+
+        <h5>Methods</h5>
+
+        <div class="methods">
+            <xsl:for-each select="wadl:method">
+            <div class="method">
+                <table class="methodNameTable">
+                    <tr>
+                        <td class="methodNameTd" style="font-weight: bold">
+                            <xsl:variable name="name" select="@name"/>
+                            <xsl:variable name="id2"><xsl:call-template name="getId"/></xsl:variable>
+                            <a name="{$id2}"><xsl:value-of select="$name"/></a>
+                        </td>
+                        <td class="methodNameTd" style="text-align: right">
+                            <xsl:if test="@id">
+                                <xsl:value-of select="@id"/>()
+                            </xsl:if>
+                        </td>
+                    </tr>
+                </table>
+                <p>
+                    <xsl:call-template name="getDoc">
+                        <xsl:with-param name="base" select="$resourceBase"/>
+                    </xsl:call-template>
+                </p>
+
+                <!-- Request -->
+                <h6>request</h6>
+                <div style="margin-left: 2em">  <!-- left indent -->
+                <xsl:choose>
+                    <xsl:when test="wadl:request">
+                        <xsl:for-each select="wadl:request">
+                            <xsl:call-template name="getParamBlock">
+                                <xsl:with-param name="style" select="'template'"/>
+                            </xsl:call-template>
+
+                            <xsl:call-template name="getParamBlock">
+                                <xsl:with-param name="style" select="'matrix'"/>
+                            </xsl:call-template>
+
+                            <xsl:call-template name="getParamBlock">
+                                <xsl:with-param name="style" select="'header'"/>
+                            </xsl:call-template>
+
+                            <xsl:call-template name="getParamBlock">
+                                <xsl:with-param name="style" select="'query'"/>
+                            </xsl:call-template>
+
+                            <xsl:call-template name="getRepresentations"/>
+                        </xsl:for-each> <!-- wadl:request -->
+                    </xsl:when>
+
+                    <xsl:when test="not(wadl:request) and (ancestor::wadl:*/wadl:param)">
+                        <xsl:call-template name="getParamBlock">
+                            <xsl:with-param name="style" select="'template'"/>
+                        </xsl:call-template>
+
+                        <xsl:call-template name="getParamBlock">
+                            <xsl:with-param name="style" select="'matrix'"/>
+                        </xsl:call-template>
+
+                        <xsl:call-template name="getParamBlock">
+                            <xsl:with-param name="style" select="'header'"/>
+                        </xsl:call-template>
+
+                        <xsl:call-template name="getParamBlock">
+                            <xsl:with-param name="style" select="'query'"/>
+                        </xsl:call-template>
+
+                        <xsl:call-template name="getRepresentations"/>
+                    </xsl:when>
+
+                    <xsl:otherwise>
+                        unspecified
+                    </xsl:otherwise>
+                </xsl:choose>
+                </div>  <!-- left indent for request -->
+
+                <!-- Response -->
+                <h6>responses</h6>
+                <div style="margin-left: 2em">  <!-- left indent -->
+                <xsl:choose>
+                    <xsl:when test="wadl:response">
+                        <xsl:for-each select="wadl:response">
+<!--                            <div class="h8">status: </div>
+                            <xsl:choose>
+                                <xsl:when test="@status">
+                                    <xsl:value-of select="@status"/>
+                                </xsl:when>
+                                <xsl:otherwise>
+                                    200 - OK
+                                </xsl:otherwise>
+                            </xsl:choose>
+                            <xsl:for-each select="wadl:doc">
+                                <xsl:if test="@title">
+                                    - <xsl:value-of select="@title"/>
+                                </xsl:if>
+                                <xsl:if test="text()">
+                                    - <xsl:value-of select="text()"/>
+                                </xsl:if>
+                            </xsl:for-each> -->
+
+                            <!-- Get response headers/representations -->
+                            <xsl:if test="wadl:param or wadl:representation">
+                                <div style="margin-left: 2em"> <!-- left indent -->
+                                <xsl:if test="wadl:param">
+                                    <div class="h7">headers</div>
+                                    <table>
+                                        <xsl:for-each select="wadl:param[@style='header']">
+                                            <xsl:call-template name="getParams"/>
+                                        </xsl:for-each>
+                                    </table>
+                                </xsl:if>
+
+                                <xsl:call-template name="getRepresentations"/>
+                                </div>  <!-- left indent for response headers/representations -->
+                            </xsl:if>
+                        </xsl:for-each> <!-- wadl:response -->
+                    </xsl:when>
+                    <xsl:otherwise>
+                        unspecified
+                    </xsl:otherwise>
+                </xsl:choose>
+                </div>  <!-- left indent for responses -->
+
+            </div>  <!-- class=method -->
+            </xsl:for-each> <!-- wadl:method  -->
+        </div> <!-- class=methods -->
+
+    </xsl:if>   <!-- wadl:method -->
+
+    <!-- Call recursively for child resources -->
+    <xsl:for-each select="wadl:resource">
+        <xsl:variable name="base">
+            <xsl:call-template name="getFullResourcePath">
+                <xsl:with-param name="base" select="$resourceBase"/>
+                <xsl:with-param name="path" select="$resourcePath"/>
+            </xsl:call-template>
+        </xsl:variable>
+        <xsl:call-template name="processResourceDetail">
+            <xsl:with-param name="resourceBase" select="$base"/>
+            <xsl:with-param name="resourcePath" select="@path"/>
+        </xsl:call-template>
+    </xsl:for-each> <!-- wadl:resource -->
+</xsl:template>
+
+<xsl:template name="getFullResourcePath">
+    <xsl:param name="base"/>
+    <xsl:param name="path"/>
+    <xsl:choose>
+        <xsl:when test="substring($base, string-length($base)) = '/'">
+            <xsl:value-of select="$base"/>
+        </xsl:when>
+        <xsl:otherwise>
+            <xsl:value-of select="concat($base, '/')"/>
+        </xsl:otherwise>
+    </xsl:choose>
+    <xsl:choose>
+        <xsl:when test="starts-with($path, '/')">
+            <xsl:value-of select="substring($path, 2)"/>
+        </xsl:when>
+        <xsl:otherwise>
+            <xsl:value-of select="$path"/>
+        </xsl:otherwise>
+    </xsl:choose>
+</xsl:template>
+
+<xsl:template name="getDoc">
+    <xsl:param name="base"/>
+    <xsl:for-each select="wadl:doc">
+        <xsl:if test="position() > 1"><br/></xsl:if>
+        <xsl:if test="@title and local-name(..) != 'application'">
+            <xsl:value-of select="@title"/>:
+        </xsl:if>
+        <xsl:choose>
+            <xsl:when test="@title = 'Example'">
+                <xsl:variable name="url">
+                    <xsl:choose>
+                        <xsl:when test="string-length($base) > 0">
+                            <xsl:call-template name="getFullResourcePath">
+                                <xsl:with-param name="base" select="$base"/>
+                                <xsl:with-param name="path" select="text()"/>
+                            </xsl:call-template>
+                        </xsl:when>
+                        <xsl:otherwise><xsl:value-of select="text()"/></xsl:otherwise>
+                    </xsl:choose>
+                </xsl:variable>
+                <a href="{$url}"><xsl:value-of select="$url"/></a>
+            </xsl:when>
+            <xsl:otherwise>
+               <xsl:apply-templates select="node()" mode="copy"/>
+            </xsl:otherwise>
+        </xsl:choose>
+    </xsl:for-each>
+</xsl:template>
+
+<xsl:template match="a" mode="copy">
+    <xsl:variable name="href" select="@href"/>
+    <a href="{$href}"><xsl:apply-templates select="node()" mode="copy"/></a>
+</xsl:template>
+
+<xsl:template match="b" mode="copy">
+  <b><xsl:apply-templates select="node()" mode="copy"/></b>
+</xsl:template>
+
+<xsl:template match="br" mode="copy">
+  <br><xsl:apply-templates select="node()" mode="copy"/></br>
+</xsl:template>
+
+<xsl:template match="p" mode="copy">
+  <p><xsl:apply-templates select="node()" mode="copy"/></p>
+</xsl:template>
+
+<xsl:template match="ul" mode="copy">
+  <ul><xsl:apply-templates select="node()" mode="copy"/></ul>
+</xsl:template>
+
+<xsl:template match="li" mode="copy">
+  <li><xsl:apply-templates select="node()" mode="copy"/></li>
+</xsl:template>
+
+<xsl:template match="html:*" mode="copy">
+    <!-- remove the prefix on HTML elements -->
+    <xsl:element name="{local-name()}">
+        <xsl:for-each select="@*">
+            <xsl:attribute name="{local-name()}"><xsl:value-of select="."/></xsl:attribute>
+        </xsl:for-each>
+        <xsl:apply-templates select="node()" mode="copy"/>
+    </xsl:element>
+</xsl:template>
+
+<xsl:template name="getId">
+    <xsl:choose>
+        <xsl:when test="@id"><xsl:value-of select="@id"/></xsl:when>
+        <xsl:otherwise><xsl:value-of select="generate-id()"/></xsl:otherwise>
+    </xsl:choose>
+</xsl:template>
+
+<xsl:template name="getParamBlock">
+    <xsl:param name="style"/>
+    <xsl:if test="ancestor-or-self::wadl:*/wadl:param[@style=$style]">
+        <div class="h7"><xsl:value-of select="$style"/> params</div>
+        <table>
+            <xsl:for-each select="ancestor-or-self::wadl:*/wadl:param[@style=$style]">
+                <xsl:call-template name="getParams"/>
+            </xsl:for-each>
+        </table>
+        <p/>
+    </xsl:if>
+</xsl:template>
+
+<xsl:template name="getParams">
+    <tr>
+        <td><strong><xsl:value-of select="@name"/></strong></td>
+            <td>
+                <xsl:if test="not(@type)">
+                    unspecified type
+                </xsl:if>
+                <xsl:call-template name="getParamType">
+                    <xsl:with-param name="qname" select="@type"/>
+                </xsl:call-template>
+                <xsl:if test="@required = 'true'"><br/>(required)</xsl:if>
+                <xsl:if test="@repeating = 'true'"><br/>(repeating)</xsl:if>
+                <xsl:if test="@default"><br/>default: <tt><xsl:value-of select="@default"/></tt></xsl:if>
+                <xsl:if test="@fixed"><br/>fixed: <tt><xsl:value-of select="@fixed"/></tt></xsl:if>
+                <xsl:if test="wadl:option">
+                    <br/>options:
+                    <xsl:for-each select="wadl:option">
+                        <xsl:choose>
+                            <xsl:when test="@mediaType">
+                                <br/><tt><xsl:value-of select="@value"/> (<xsl:value-of select="@mediaType"/>)</tt>
+                            </xsl:when>
+                            <xsl:otherwise>
+                                <tt><xsl:value-of select="@value"/></tt>
+                                <xsl:if test="position() != last()">, </xsl:if>
+                            </xsl:otherwise>
+                        </xsl:choose>
+                    </xsl:for-each>
+                </xsl:if>
+            </td>
+        <xsl:if test="wadl:doc">
+            <td><xsl:value-of select="wadl:doc"/></td>
+        </xsl:if>
+    </tr>
+</xsl:template>
+
+<xsl:template name="getParamType">
+    <xsl:param name="qname"/>
+    <xsl:variable name="prefix" select="substring-before($qname,':')"/>
+    <xsl:variable name="ns-uri" select="./namespace::*[name()=$prefix]"/>
+    <xsl:variable name="localname" select="substring-after($qname, ':')"/>
+    <xsl:choose>
+        <xsl:when test="$ns-uri='http://www.w3.org/2001/XMLSchema' or $ns-uri='http://www.w3.org/2001/XMLSchema-instance'">
+            <a href="http://www.w3.org/TR/xmlschema-2/#{$localname}"><xsl:value-of select="$localname"/></a>
+        </xsl:when>
+        <xsl:otherwise>
+            <xsl:value-of select="$qname"/>
+        </xsl:otherwise>
+    </xsl:choose>
+</xsl:template>
+
+<xsl:template name="getRepresentations">
+    <xsl:if test="wadl:representation">
+        <div class="h7">representations</div>
+        <table>
+            <xsl:for-each select="wadl:representation">
+                <tr>
+                    <td><xsl:value-of select="@status"/></td>
+                    <td><xsl:value-of select="@mediaType"/></td>
+                    <xsl:if test="wadl:doc">
+                        <td>
+                            <xsl:call-template name="getDoc">
+                                <xsl:with-param name="base" select="''"/>
+                            </xsl:call-template>
+                        </td>
+                    </xsl:if>
+                    <xsl:if test="@href or @element">
+                        <td>
+                            <xsl:variable name="href" select="@href"/>
+                            <xsl:choose>
+                                <xsl:when test="@href">
+                                    <a href="{$href}"><xsl:value-of select="@element"/></a>
+                                </xsl:when>
+                                <xsl:otherwise>
+                                    <xsl:value-of select="@element"/>
+                                </xsl:otherwise>
+                            </xsl:choose>
+                        </td>
+                    </xsl:if>
+                </tr>
+                <xsl:call-template name="getRepresentationParamBlock">
+                    <xsl:with-param name="style" select="'template'"/>
+                </xsl:call-template>
+
+                <xsl:call-template name="getRepresentationParamBlock">
+                    <xsl:with-param name="style" select="'matrix'"/>
+                </xsl:call-template>
+
+                <xsl:call-template name="getRepresentationParamBlock">
+                    <xsl:with-param name="style" select="'header'"/>
+                </xsl:call-template>
+
+                <xsl:call-template name="getRepresentationParamBlock">
+                    <xsl:with-param name="style" select="'query'"/>
+                </xsl:call-template>
+            </xsl:for-each>
+        </table>
+    </xsl:if>
+</xsl:template>
+
+<xsl:template name="getRepresentationParamBlock">
+    <xsl:param name="style"/>
+    <xsl:if test="wadl:param[@style=$style]">
+        <tr>
+            <td style="padding: 0em 0em 0em 2em">
+                <div class="h7"><xsl:value-of select="$style"/> params</div>
+                <table>
+                    <xsl:for-each select="wadl:param[@style=$style]">
+                        <xsl:call-template name="getParams"/>
+                    </xsl:for-each>
+                </table>
+                <p/>
+            </td>
+        </tr>
+    </xsl:if>
+</xsl:template>
+
+<xsl:template name="getStyle">
+     <style type="text/css">
+        body {
+            font-family: sans-serif;
+            font-size: 0.85em;
+            margin: 2em 2em;
+        }
+        .methods {
+            margin-left: 2em;
+            margin-bottom: 2em;
+        }
+        .method {
+            background-color: #eef;
+            border: 1px solid #DDDDE6;
+            padding: .5em;
+            margin-bottom: 1em;
+            width: 95%
+        }
+        .methodNameTable {
+            width: 100%;
+            border: 0px;
+            border-bottom: 2px solid white;
+            font-size: 1.4em;
+        }
+        .methodNameTd {
+            background-color: #eef;
+        }
+        h1 {
+            font-size: 2em;
+            margin-bottom: 0em;
+        }
+        h2 {
+            border-bottom: 1px solid black;
+            margin-top: 1.5em;
+            margin-bottom: 0.5em;
+            font-size: 1.5em;
+           }
+        h3 {
+            color: #FF6633;
+            font-size: 1.35em;
+            margin-top: .5em;
+            margin-bottom: 0em;
+        }
+        h5 {
+            font-size: 1.2em;
+            color: #99a;
+            margin: 0.5em 0em 0.25em 0em;
+        }
+        h6 {
+            color: #700000;
+            font-size: 1em;
+            margin: 1em 0em 0em 0em;
+        }
+        .h7 {
+            margin-top: .75em;
+            font-size: 1em;
+            font-weight: bold;
+            font-style: italic;
+            color: blue;
+        }
+        .h8 {
+            margin-top: .75em;
+            font-size: 1em;
+            font-weight: bold;
+            font-style: italic;
+            color: black;
+        }
+        tt {
+            font-size: 1em;
+        }
+        table {
+            margin-bottom: 0.5em;
+            border: 1px solid #E0E0E0;
+        }
+        th {
+            text-align: left;
+            font-weight: normal;
+            font-size: 1em;
+            color: black;
+            background-color: #DDDDE6;
+            padding: 3px 6px;
+            border: 1px solid #B1B1B8;
+        }
+        td {
+            padding: 3px 6px;
+            vertical-align: top;
+            background-color: #F6F6FF;
+            font-size: 0.85em;
+        }
+        p {
+            margin-top: 0em;
+            margin-bottom: 0em;
+        }
+        td.summary {
+            background-color: white;
+        }
+        td.summarySeparator {
+            padding: 1px;
+        }
+    </style>
+</xsl:template>
+
+<xsl:template name="getTitle">
+    <xsl:choose>
+        <xsl:when test="wadl:doc/@title">
+            <xsl:value-of select="wadl:doc/@title"/>
+        </xsl:when>
+        <xsl:otherwise>
+            Web Application
+        </xsl:otherwise>
+    </xsl:choose>
+</xsl:template>
+
+</xsl:stylesheet>
diff --git a/controller/src/main/webapps/wizards/index.html b/controller/src/main/webapps/wizards/index.html
deleted file mode 100644
index bc539c3..0000000
--- a/controller/src/main/webapps/wizards/index.html
+++ /dev/null
@@ -1,19 +0,0 @@
-<!DOCTYPE HTML>
-<html>
-  <body>
-    <section>
-      <h3>Cluster Setup Wizard</h3>
-    </section>
-    <section>
-      <div class="wizard">
-        Step <span id="step"></span> of <span id="total"></span>
-      </div>
-    </section>
-    <script type='text/javascript'>
-      $(document).ready(function() {
-        $("#step").html("1");
-        $("#total").html("5");
-      });
-    </script>
-  </body>
-</html>
diff --git a/controller/src/packages/tarball/all.xml b/controller/src/packages/tarball/all.xml
index 24c0cd7..df0d65e 100755
--- a/controller/src/packages/tarball/all.xml
+++ b/controller/src/packages/tarball/all.xml
@@ -1,4 +1,22 @@
 <?xml version="1.0"?>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
 <assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.1"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.1 http://maven.apache.org/xsd/assembly-1.1.1.xsd">
@@ -38,7 +56,7 @@
       <directory>conf</directory>
     </fileSet>
     <fileSet>
-      <directory>../bin</directory>
+      <directory>../client/bin</directory>
       <outputDirectory>bin</outputDirectory>
       <fileMode>755</fileMode>
     </fileSet>
diff --git a/beacon/src/packages/tarball/all.xml b/controller/src/packages/tarball/binary.xml
similarity index 63%
copy from beacon/src/packages/tarball/all.xml
copy to controller/src/packages/tarball/binary.xml
index 24c0cd7..e4033e8 100755
--- a/beacon/src/packages/tarball/all.xml
+++ b/controller/src/packages/tarball/binary.xml
@@ -1,4 +1,22 @@
 <?xml version="1.0"?>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
 <assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.1"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.1 http://maven.apache.org/xsd/assembly-1.1.1.xsd">
@@ -6,23 +24,16 @@
     http://maven.apache.org/plugins/maven-assembly-plugin/faq.html#required-classifiers
   -->
   <formats>
-    <format>tar.gz</format>
+    <format>${package.type}</format>
   </formats>
   <fileSets>
     <fileSet>
+      <outputDirectory>share/ambari</outputDirectory>
       <includes>
         <include>${basedir}/*.txt</include>
       </includes>
     </fileSet>
     <fileSet>
-      <includes>
-        <include>pom.xml</include>
-      </includes>
-    </fileSet>
-    <fileSet>
-      <directory>src</directory>
-    </fileSet>
-    <fileSet>
       <directory>src/main/webapps</directory>
       <outputDirectory>webapps</outputDirectory>
     </fileSet>
@@ -36,15 +47,16 @@
     </fileSet>
     <fileSet>
       <directory>conf</directory>
+      <outputDirectory>etc/ambari</outputDirectory>
     </fileSet>
     <fileSet>
-      <directory>../bin</directory>
+      <directory>../client/bin</directory>
       <outputDirectory>bin</outputDirectory>
       <fileMode>755</fileMode>
     </fileSet>
     <fileSet>
       <directory>target</directory>
-      <outputDirectory>/</outputDirectory>
+      <outputDirectory>share/ambari</outputDirectory>
       <includes>
           <include>${artifactId}-${project.version}.jar</include>
           <include>${artifactId}-${project.version}-tests.jar</include>
@@ -66,7 +78,7 @@
   </fileSets>
   <dependencySets>
     <dependencySet>
-      <outputDirectory>/lib</outputDirectory>
+      <outputDirectory>share/ambari</outputDirectory>
       <unpack>false</unpack>
       <scope>runtime</scope>
     </dependencySet>
diff --git a/beacon/src/packages/tarball/all.xml b/controller/src/packages/tarball/source.xml
similarity index 72%
rename from beacon/src/packages/tarball/all.xml
rename to controller/src/packages/tarball/source.xml
index 24c0cd7..df0d65e 100755
--- a/beacon/src/packages/tarball/all.xml
+++ b/controller/src/packages/tarball/source.xml
@@ -1,4 +1,22 @@
 <?xml version="1.0"?>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
 <assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.1"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.1 http://maven.apache.org/xsd/assembly-1.1.1.xsd">
@@ -38,7 +56,7 @@
       <directory>conf</directory>
     </fileSet>
     <fileSet>
-      <directory>../bin</directory>
+      <directory>../client/bin</directory>
       <outputDirectory>bin</outputDirectory>
       <fileMode>755</fileMode>
     </fileSet>
diff --git a/controller/src/test/java/org/apache/ambari/controller/TestFSMDriverImpl.java b/controller/src/test/java/org/apache/ambari/controller/TestFSMDriverImpl.java
new file mode 100644
index 0000000..d8a86bd
--- /dev/null
+++ b/controller/src/test/java/org/apache/ambari/controller/TestFSMDriverImpl.java
@@ -0,0 +1,63 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.ambari.controller;
+
+import java.io.IOException;
+
+import org.apache.ambari.resource.statemachine.ClusterFSM;
+import org.apache.ambari.resource.statemachine.FSMDriverInterface;
+
+class TestFSMDriverImpl implements FSMDriverInterface {
+
+  ClusterFSM clusterFsm;
+  public void setClusterFsm(ClusterFSM clusterFsm) {
+    this.clusterFsm = clusterFsm;
+  }
+  
+  @Override
+  public ClusterFSM createCluster(Cluster cluster, int revision)
+      throws IOException {
+    // TODO Auto-generated method stub
+    return null;
+  }
+
+  @Override
+  public void startCluster(String clusterId) {
+    // TODO Auto-generated method stub
+    
+  }
+
+  @Override
+  public void stopCluster(String clusterId) {
+    // TODO Auto-generated method stub
+    
+  }
+
+  @Override
+  public ClusterFSM getFSMClusterInstance(String clusterId) {
+    return clusterFsm;
+  }
+
+  @Override
+  public String getClusterState(String clusterId,
+      long clusterDefinitionRev) {
+    // TODO Auto-generated method stub
+    return null;
+  }
+}
\ No newline at end of file
diff --git a/controller/src/test/java/org/apache/ambari/controller/TestHeartbeat.java b/controller/src/test/java/org/apache/ambari/controller/TestHeartbeat.java
new file mode 100644
index 0000000..82913bd
--- /dev/null
+++ b/controller/src/test/java/org/apache/ambari/controller/TestHeartbeat.java
@@ -0,0 +1,696 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.controller;
+
+import static org.mockito.Matchers.anyInt;
+import static org.mockito.Matchers.anyString;
+import static org.mockito.Matchers.any;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+import static org.mockito.Mockito.doAnswer;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Random;
+
+import org.apache.ambari.common.rest.agent.Action;
+import org.apache.ambari.common.rest.agent.Action.Kind;
+import org.apache.ambari.common.rest.agent.ActionResult;
+import org.apache.ambari.common.rest.agent.AgentRoleState;
+import org.apache.ambari.common.rest.agent.CommandResult;
+import org.apache.ambari.common.rest.agent.ControllerResponse;
+import org.apache.ambari.common.rest.agent.HeartBeat;
+import org.apache.ambari.common.rest.entities.ClusterDefinition;
+import org.apache.ambari.common.rest.entities.ClusterState;
+import org.apache.ambari.common.rest.entities.Node;
+import org.apache.ambari.common.rest.entities.NodeState;
+import org.apache.ambari.common.rest.entities.Stack;
+import org.apache.ambari.common.rest.entities.Component;
+import org.apache.ambari.common.rest.entities.UserGroup;
+import org.apache.ambari.components.ComponentPlugin;
+import org.apache.ambari.configuration.Configuration;
+import org.apache.ambari.controller.HeartbeatHandler.ClusterNameAndRev;
+import org.apache.ambari.controller.HeartbeatHandler.SpecialServiceIDs;
+import org.apache.ambari.event.EventHandler;
+import org.apache.ambari.resource.statemachine.ClusterFSM;
+import org.apache.ambari.resource.statemachine.FSMDriverInterface;
+import org.apache.ambari.resource.statemachine.RoleEvent;
+import org.apache.ambari.resource.statemachine.RoleEventType;
+import org.apache.ambari.resource.statemachine.RoleFSM;
+import org.apache.ambari.resource.statemachine.RoleState;
+import org.apache.ambari.resource.statemachine.ServiceEvent;
+import org.apache.ambari.resource.statemachine.ServiceEventType;
+import org.apache.ambari.resource.statemachine.ServiceFSM;
+import org.apache.ambari.resource.statemachine.ServiceState;
+import org.apache.ambari.resource.statemachine.StateMachineInvokerInterface;
+import org.mockito.invocation.InvocationOnMock;
+import org.mockito.stubbing.Answer;
+import org.testng.annotations.BeforeMethod;
+import org.testng.annotations.Test;
+
+import com.google.inject.Guice;
+import com.google.inject.Injector;
+
+public class TestHeartbeat {
+  
+  ComponentPlugin plugin;
+  String[] roles = {"abc"};
+  String[] services = {"comp1"};
+  ClusterDefinition cdef;
+  Cluster cluster;
+  Nodes nodes;
+  Clusters clusters;
+  Stack stack;
+  Component component;
+  UserGroup usergroup;
+  StateMachineInvokerInterface invoker;
+  FSMDriverInterface driver;
+  HeartBeat heartbeat;
+  Node node;
+  Injector injector;
+  final String script = "script-content";
+  final int scriptHash = script.hashCode();
+  
+  private static class TestConfiguration extends Configuration {
+    TestConfiguration() {
+      super(getProperties());
+    }
+    private static Properties getProperties() {
+      Properties props = new Properties();
+      props.setProperty("data.store", "test:/");
+      return props;
+    }
+  }
+  private static class TestModule extends ControllerModule {
+    @Override
+    protected void configure() {
+      super.configure();
+      bind(Configuration.class).to(TestConfiguration.class);
+      bind(FSMDriverInterface.class).to(TestFSMDriverImpl.class);
+    }
+  }
+  
+  @BeforeMethod
+  public void setup() throws Exception {
+    injector = Guice.createInjector(new TestModule());
+    driver = injector.getInstance(FSMDriverInterface.class);
+    invoker = injector.getInstance(StateMachineInvokerInterface.class);
+    plugin = mock(ComponentPlugin.class);
+    when(plugin.getActiveRoles()).thenReturn(roles);
+    when(plugin.getRequiredComponents()).thenReturn(null);
+    cdef = mock(ClusterDefinition.class);
+    when(cdef.getEnabledServices()).thenReturn(Arrays.asList("comp1"));
+    cluster = mock(Cluster.class);
+    when(cluster.getClusterDefinition(anyInt())).thenReturn(cdef);
+    when(cluster.getName()).thenReturn("cluster1");
+    when(cluster.getComponentDefinition("comp1")).thenReturn(plugin);
+    when(cluster.getLatestRevisionNumber()).thenReturn(-1);
+    Action startAction = new Action();
+    startAction.setKind(Kind.START_ACTION);
+    when(plugin.startServer("cluster1", "abc")).thenReturn(startAction);
+    when(plugin.runCheckRole()).thenReturn("abc");
+    when(plugin.runPreStartRole()).thenReturn("abc");
+    Action preStartAction = new Action();
+    preStartAction.setKind(Kind.RUN_ACTION);
+    when(plugin.preStartAction("cluster1", "abc")).thenReturn(preStartAction);
+    Action checkServiceAction = new Action();
+    checkServiceAction.setKind(Kind.RUN_ACTION);
+    when(plugin.checkService("cluster1","abc")).thenReturn(checkServiceAction);
+    nodes = mock(Nodes.class);
+    clusters = mock(Clusters.class);
+    node = new Node();
+    node.setName("localhost");
+    NodeState nodeState = new NodeState();
+    nodeState.setClusterName("cluster1");
+    node.setNodeState(nodeState);
+    when(nodes.getNode("localhost")).thenReturn(node);
+    when(nodes.getNodeRoles("localhost")).thenReturn(Arrays.asList(roles));
+    when(nodes.getHeathOfNode("localhost")).thenReturn(NodeState.HEALTHY);
+    when(clusters.getClusterByName("cluster1")).thenReturn(cluster);
+    when(clusters.getInstallAndConfigureScript(anyString(), anyInt()))
+        .thenReturn(script);
+    
+    stack = mock(Stack.class);
+    usergroup = mock(UserGroup.class);
+    component = mock (Component.class);
+    when (clusters.getClusterStack("cluster1", true)).thenReturn(stack);
+    when (stack.getComponentByName(anyString())).thenReturn(component);
+    when (component.getUser_group()).thenReturn(usergroup);
+    when (usergroup.getUser()).thenReturn("hadoop");
+
+    heartbeat = new HeartBeat();
+    heartbeat.setIdle(true);
+    heartbeat.setInstallScriptHash(-1);
+    heartbeat.setHostname("localhost");
+    heartbeat.setInstalledRoleStates(new ArrayList<AgentRoleState>());
+  }
+  
+  @Test
+  public void testHeartbeatWithNoClusterDefined() throws Exception {
+    //if a node sends a heartbeat when the node doesn't belong to
+    //any cluster, the response should have an empty list of actions
+    Clusters clusters = mock(Clusters.class);
+    Nodes nodes = mock(Nodes.class);
+    //the cluster lookup returns null: the node belongs to no cluster
+    when(clusters.getClusterByName("cluster1")).thenReturn(null);
+    Node node = new Node();
+    node.setName("localhost");
+    NodeState nodeState = new NodeState();
+    nodeState.setClusterName(null);
+    node.setNodeState(nodeState);
+    when(nodes.getNode("localhost")).thenReturn(node);
+    when(nodes.getNodeRoles("localhost"))
+         .thenReturn(Arrays.asList(roles));
+    HeartbeatHandler handler = new HeartbeatHandler(clusters, nodes, 
+        driver, invoker);
+    ControllerResponse response = handler.processHeartBeat(heartbeat);
+    assert (response.getActions().size() == 0);
+  }
+  
+  @Test
+  public void testInstall() throws Exception {
+    //send a heartbeat and get a response with install/config action
+    HeartbeatHandler handler = new HeartbeatHandler(clusters, nodes, 
+        driver, invoker);
+    ControllerResponse response = handler.processHeartBeat(heartbeat);
+    List<Action> actions = response.getActions();
+    assert(actions.size() == 2);
+    assert(actions.get(0).getKind() == Action.Kind.INSTALL_AND_CONFIG_ACTION);
+  }
+  
+  
+  @Test
+  public void testStartServer() throws Exception {
+    //send a heartbeat when some server needs to be started, 
+    //and the heartbeat response should have the start action
+    TestClusterImpl clusterImpl = new TestClusterImpl(services,roles);
+    ((TestRoleImpl)clusterImpl.getServices()
+        .get(0).getRoles().get(0)).setShouldStart(true);
+    updateTestFSMDriverImpl(clusterImpl);
+    processHeartbeatAndGetResponse(true);
+  }
+  
+  @Test
+  public void testStopServer() throws Exception {
+    //send a heartbeat when some server needs to be stopped, 
+    //and the heartbeat response shouldn't have a start action
+    //for the server
+    TestClusterImpl clusterImpl = new TestClusterImpl(services,roles);
+    ((TestRoleImpl)clusterImpl.getServices()
+        .get(0).getRoles().get(0)).setShouldStart(false);
+    updateTestFSMDriverImpl(clusterImpl);
+    processHeartbeatAndGetResponse(false);
+  }
+  
+  @Test
+  public void testIsRoleActive() throws Exception {
+    //send a heartbeat with some role server start success, 
+    //and then the role should be considered active
+    TestClusterImpl clusterImpl = new TestClusterImpl(services,roles);
+    updateTestFSMDriverImpl(clusterImpl);
+    RoleFSM roleFsm = clusterImpl.getServices()
+        .get(0).getRoles().get(0);
+    heartbeat.setInstallScriptHash(scriptHash);
+    List<AgentRoleState> installedRoleStates = new ArrayList<AgentRoleState>();
+    AgentRoleState roleState = new AgentRoleState();
+    roleState.setRoleName(roles[0]);
+    roleState.setClusterDefinitionRevision(-1);
+    roleState.setClusterId("cluster1");
+    roleState.setComponentName("comp1");
+    installedRoleStates.add(roleState);
+    heartbeat.setInstalledRoleStates(installedRoleStates);
+    HeartbeatHandler handler = new HeartbeatHandler(clusters, nodes, 
+        driver, invoker);
+    ControllerResponse response = handler.processHeartBeat(heartbeat);
+    checkActions(response, true);
+    int i = 0;
+    while (i++ < 10) {
+      if (roleFsm.getRoleState() == RoleState.ACTIVE) {
+        break;
+      }
+      Thread.sleep(1000);
+    }
+    assert(roleFsm.getRoleState() == RoleState.ACTIVE);
+  }
+  
+  @Test
+  public void testCreationOfPreStartAction() throws Exception {
+    TestClusterImpl clusterImpl = new TestClusterImpl(services,roles);
+    ServiceFSM serviceImpl = clusterImpl.getServices().get(0);
+    ((TestRoleImpl)clusterImpl.getServices().get(0).getRoles().get(0)).setShouldStart(false);
+    ((TestServiceImpl)serviceImpl).setServiceState(ServiceState.PRESTART);
+    updateTestFSMDriverImpl(clusterImpl);
+    checkSpecialAction(ServiceState.PRESTART, ServiceEventType.START, 
+        SpecialServiceIDs.SERVICE_PRESTART_CHECK_ID);
+  }
+  @Test
+  public void testCreationOfCheckRoleAction() throws Exception {
+    
+    TestClusterImpl clusterImpl = new TestClusterImpl(services,roles);
+    ServiceFSM serviceImpl = clusterImpl.getServices().get(0);
+    ((TestServiceImpl)serviceImpl).setServiceState(ServiceState.STARTED);
+    updateTestFSMDriverImpl(clusterImpl);
+    checkSpecialAction(ServiceState.STARTED, ServiceEventType.ROLE_START_SUCCESS, 
+        SpecialServiceIDs.SERVICE_AVAILABILITY_CHECK_ID);
+  }
+  
+  @Test
+  public void testServiceAvailableEvent() throws Exception {
+    TestClusterImpl clusterImpl = new TestClusterImpl(services,roles);
+    updateTestFSMDriverImpl(clusterImpl);
+    heartbeat.setInstallScriptHash(scriptHash);
+    ServiceFSM serviceImpl = clusterImpl.getServices().get(0);
+    ((TestServiceImpl)serviceImpl).setServiceState(ServiceState.STARTED);
+    ActionResult actionResult = new ActionResult();
+    actionResult.setKind(Kind.RUN_ACTION);
+    ClusterNameAndRev clusterNameAndRev = new ClusterNameAndRev("cluster1",-1);
+    String checkActionId = HeartbeatHandler.getSpecialActionID(
+        clusterNameAndRev, "comp1", "abc", 
+        SpecialServiceIDs.SERVICE_AVAILABILITY_CHECK_ID);
+    actionResult.setId(checkActionId);
+    actionResult.setClusterId("cluster1");
+    actionResult.setClusterDefinitionRevision(-1);
+    CommandResult commandResult = new CommandResult(0,"","");
+    actionResult.setCommandResult(commandResult);
+    List<ActionResult> actionResults = new ArrayList<ActionResult>();
+    actionResults.add(actionResult);
+    heartbeat.setActionResults(actionResults);
+    HeartbeatHandler handler = new HeartbeatHandler(clusters, nodes, 
+        driver, invoker);
+    handler.processHeartBeat(heartbeat);
+    int i = 0;
+    while (i++ < 10) {
+      if (serviceImpl.getServiceState() == ServiceState.ACTIVE) {
+        break;
+      }
+      Thread.sleep(1000);
+    }
+    assert(serviceImpl.getServiceState() == ServiceState.ACTIVE);
+  }
+  
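+  // A successful prestart-check result in the heartbeat should drive the
+  // service FSM to STARTING.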
+  @Test
+  public void testServiceReadyToStartEvent() throws Exception {
+    TestClusterImpl clusterImpl = new TestClusterImpl(services,roles);
+    updateTestFSMDriverImpl(clusterImpl);
+    heartbeat.setInstallScriptHash(scriptHash);
+    ServiceFSM serviceImpl = clusterImpl.getServices().get(0);
+    ((TestServiceImpl)serviceImpl).setServiceState(ServiceState.PRESTART);
+    ActionResult actionResult = new ActionResult();
+    actionResult.setKind(Kind.RUN_ACTION);
+    ClusterNameAndRev clusterNameAndRev = new ClusterNameAndRev("cluster1", -1);
+    String checkActionId = HeartbeatHandler.getSpecialActionID(
+        clusterNameAndRev, "comp1", "abc", 
+        SpecialServiceIDs.SERVICE_PRESTART_CHECK_ID);
+    actionResult.setId(checkActionId);
+    actionResult.setClusterId("cluster1");
+    actionResult.setClusterDefinitionRevision(-1);
+    CommandResult commandResult = new CommandResult(0,"","");
+    actionResult.setCommandResult(commandResult);
+    List<ActionResult> actionResults = new ArrayList<ActionResult>();
+    actionResults.add(actionResult);
+    heartbeat.setActionResults(actionResults);
+    HeartbeatHandler handler = new HeartbeatHandler(clusters, nodes, 
+        driver, invoker);
+    handler.processHeartBeat(heartbeat);
+    int i = 0;
+    while (i++ < 10) {
+      if (serviceImpl.getServiceState() == ServiceState.STARTING) {
+        break;
+      }
+      Thread.sleep(1000);
+    }
+    assert(serviceImpl.getServiceState() == ServiceState.STARTING);
+  }
+  
+  @Test
+  public void testAgentMarked() throws Exception {
+    //tests whether Nodes.markNodeUnhealthy and Nodes.markNodeHealthy
+    //are called at expected times
+    CommandResult failedCommandResult = new CommandResult();
+    final String stdout = "FAILED_COMMAND_STDOUT";
+    failedCommandResult.setExitCode(1);
+    failedCommandResult.setOutput(stdout);
+    CommandResult successCommandResult = new CommandResult();
+    successCommandResult.setExitCode(0);
+    
+    final MarkCallTracker mUnhealthy = new MarkCallTracker();
+    doAnswer(new Answer<Void>() {
+      public Void answer(InvocationOnMock invocation) {
+        mUnhealthy.methodCalled = true;
+        for (Object obj : invocation.getArguments()) {
+          if (String.class.isAssignableFrom(obj.getClass())) {
+            if (((String)obj).equals("localhost")) {
+              mUnhealthy.hostnameMatched = true;
+            }
+          }
+          if (ArrayList.class.isAssignableFrom(obj.getClass())) {
+            List<CommandResult> results = (List<CommandResult>)obj;
+            for (CommandResult result : results) {
+              if (result.getExitCode() == 1) {
+                if (result.getOutput().equals(stdout)) {
+                  //found the match!
+                  mUnhealthy.stdoutMatched = true;
+                }
+              }
+            }
+          }
+        }
+        return null;
+      }
+    }).when(nodes).markNodeUnhealthy(anyString(), any(List.class));
+    
+    final MarkCallTracker mHealthy = new MarkCallTracker();
+    
+    doAnswer(new Answer<Void>() {
+      public Void answer(InvocationOnMock invocation) {
+        mHealthy.methodCalled = true;
+        for (Object obj : invocation.getArguments()) {
+          if (String.class.isAssignableFrom(obj.getClass())) {
+            if (((String)obj).equals("localhost")) {
+              mHealthy.hostnameMatched = true;
+            }
+          }
+        }
+        return null;
+      }
+    }).when(nodes).markNodeHealthy(anyString());
+    
+    List<ActionResult> actionResults = new ArrayList<ActionResult>();
+    ActionResult failedAction = new ActionResult();
+    failedAction.setCommandResult(failedCommandResult);
+    actionResults.add(failedAction);
+    heartbeat.setActionResults(actionResults);
+    HeartbeatHandler handler = new HeartbeatHandler(clusters, nodes, 
+        driver, invoker);
+    
+    mUnhealthy.stdoutMatched = false;
+    mUnhealthy.hostnameMatched = false;
+    mHealthy.methodCalled = false;
+    //now the call to markNodeUnhealthy should happen
+    handler.processHeartBeat(heartbeat);
+    assert(mUnhealthy.stdoutMatched && mUnhealthy.hostnameMatched);
+    
+    
+    actionResults = new ArrayList<ActionResult>();
+    ActionResult successAction = new ActionResult();
+    successAction.setCommandResult(successCommandResult);
+    actionResults.add(successAction);
+    heartbeat.setActionResults(actionResults);
+    
+    mUnhealthy.methodCalled = false;
+    mHealthy.methodCalled = false;
+    //now neither markNodeUnhealthy nor markNodeHealthy should be called
+    handler.processHeartBeat(heartbeat);
+    assert(!mUnhealthy.methodCalled && !mHealthy.methodCalled);
+    
+    
+    heartbeat.setFirstContact(true);
+    mHealthy.methodCalled = false;
+    mHealthy.hostnameMatched = false;
+    mUnhealthy.methodCalled = false;
+    //now the call to markNodeHealthy should happen
+    //the call to markNodeUnhealthy should not happen
+    handler.processHeartBeat(heartbeat);
+    assert(mHealthy.methodCalled && mHealthy.hostnameMatched 
+        && !mUnhealthy.methodCalled);
+  }
+  
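+  // Actions are assigned to healthy nodes only; unhealthy nodes get none.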
+  @Test
+  public void testActionAssignment() throws Exception {
+    when(nodes.getHeathOfNode("localhost")).thenReturn(NodeState.HEALTHY);
+    HeartbeatHandler handler = new HeartbeatHandler(clusters, nodes, 
+        driver, invoker);
+    ControllerResponse resp = handler.processHeartBeat(heartbeat);
+    List<Action> actions = resp.getActions();
+    assert(actions.size() > 0);
+    
+    when(nodes.getHeathOfNode("localhost")).thenReturn(NodeState.UNHEALTHY);
+    handler = new HeartbeatHandler(clusters, nodes, driver, invoker);
+    resp = handler.processHeartBeat(heartbeat);
+    actions = resp.getActions();
+    assert(actions.size() == 0);
+  }
+  
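+  // The controller must answer a heartbeat with the responseId incremented by one.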
+  @Test
+  public void testResponseIdIncreasing() throws Exception {
+    short responseId = (short)(new Random().nextInt());
+    HeartBeat heartbeat = new HeartBeat();
+    heartbeat.setResponseId(responseId);
+    HeartbeatHandler handler = new HeartbeatHandler(clusters, nodes, 
+        driver, invoker);
+    ControllerResponse resp = handler.processHeartBeat(heartbeat);
+    assert(resp.getResponseId() == (responseId + 1));
+  }
+  
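+  // Records what the mocked Nodes.markNode* callbacks observed.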
+  static class MarkCallTracker {
+    boolean methodCalled;
+    boolean hostnameMatched;
+    boolean stdoutMatched;
+  }
+
+  private void checkSpecialAction(ServiceState serviceState, 
+      ServiceEventType serviceEventType, 
+      SpecialServiceIDs serviceId) throws Exception {
+    heartbeat.setInstallScriptHash(scriptHash);
+    HeartbeatHandler handler = new HeartbeatHandler(clusters, nodes, 
+        driver, invoker);
+    ControllerResponse response = handler.processHeartBeat(heartbeat);
+    checkActions(response, ServiceState.STARTED == serviceState);
+    ClusterNameAndRev clusterNameAndRev = new ClusterNameAndRev("cluster1", -1);
+    boolean found = false;
+    String checkActionId = HeartbeatHandler.getSpecialActionID(
+        clusterNameAndRev, "comp1", "abc", 
+        serviceId);
+    for (Action action : response.getActions()) {
+      if (action.getKind() == Kind.RUN_ACTION && 
+          action.getId().equals(checkActionId)) {
+        found = true;
+        break;
+      }
+    }
+    assert(found);
+  }
+  
+  private void updateTestFSMDriverImpl(TestClusterImpl clusterImpl) {
+    ((TestFSMDriverImpl)driver).setClusterFsm(clusterImpl);
+  }
+  
+  private void processHeartbeatAndGetResponse(boolean shouldFindStart)
+      throws Exception {
+    heartbeat.setInstallScriptHash(scriptHash);
+    HeartbeatHandler handler = new HeartbeatHandler(clusters, nodes, 
+        driver, invoker);
+    ControllerResponse response = handler.processHeartBeat(heartbeat);
+    checkActions(response, shouldFindStart);
+  }
+  
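+  // Every response must carry an install/config action; a start action only
+  // when expected.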
+  private void checkActions(ControllerResponse response, boolean shouldFindStart) {
+    List<Action> actions = response.getActions();
+    boolean foundStart = false;
+    boolean foundInstall = false;
+    for (Action a : actions) {
+      if (a.getKind() == Action.Kind.START_ACTION) {
+        foundStart = true;
+      }
+      if (a.getKind() == Action.Kind.INSTALL_AND_CONFIG_ACTION) {
+        foundInstall = true;
+      }
+    }
+    assert (foundInstall && foundStart == shouldFindStart);
+  }
+
+  
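+  // Minimal ClusterFSM stub backed by a fixed list of TestServiceImpl instances.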
+  class TestClusterImpl implements ClusterFSM {
+    ClusterState clusterState;
+    List<ServiceFSM> serviceFsms;
+    public void setClusterState(ClusterState state) {
+      this.clusterState = state;
+    }
+    public TestClusterImpl(String[] services, String roles[]) {
+      serviceFsms = new ArrayList<ServiceFSM>();
+      for (String service : services) {
+        ServiceFSM srv = new TestServiceImpl(service,roles);
+        serviceFsms.add(srv);
+      }
+    }
+    @Override
+    public List<ServiceFSM> getServices() {
+      return serviceFsms;
+    }
+
+    @Override
+    public Map<String, String> getServiceStates() {
+      // TODO Auto-generated method stub
+      return null;
+    }
+
+    @Override
+    public String getClusterState() {
+      // TODO Auto-generated method stub
+      return null;
+    }
+
+    @Override
+    public void activate() {
+      // TODO Auto-generated method stub
+      
+    }
+
+    @Override
+    public void deactivate() {
+      // TODO Auto-generated method stub
+      
+    }
+    
+  }
+  
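+  // ServiceFSM stub: tests set its state directly; success events advance it.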
+  class TestServiceImpl implements ServiceFSM, EventHandler<ServiceEvent> {
+
+    ServiceState serviceState;
+    String serviceName;
+    List<RoleFSM> roleFsms;
+    public void setServiceState(ServiceState state) {
+      this.serviceState = state;
+    }
+
+    public TestServiceImpl(String service, String[] roles) {
+      roleFsms = new ArrayList<RoleFSM>();
+      for (String role : roles) {
+        TestRoleImpl r = new TestRoleImpl(role);
+        roleFsms.add(r);
+      }
+      serviceName = service;
+    }
+    
+    @Override
+    public ServiceState getServiceState() {
+      return serviceState;
+    }
+
+    @Override
+    public String getServiceName() {
+      return serviceName;
+    }
+
+    @Override
+    public ClusterFSM getAssociatedCluster() {
+      // TODO Auto-generated method stub
+      return null;
+    }
+
+    @Override
+    public boolean isActive() {
+      // TODO Auto-generated method stub
+      return false;
+    }
+
+    @Override
+    public List<RoleFSM> getRoles() {
+      return roleFsms;
+    }
+
+    @Override
+    public void activate() {
+      // TODO Auto-generated method stub
+      
+    }
+
+    @Override
+    public void deactivate() {
+      // TODO Auto-generated method stub
+      
+    }
+
+    @Override
+    public void handle(ServiceEvent event) {
+      if (event.getType() == ServiceEventType.AVAILABLE_CHECK_SUCCESS) {
+        serviceState = ServiceState.ACTIVE;
+      }
+      if (event.getType() == ServiceEventType.PRESTART_SUCCESS) {
+        serviceState = ServiceState.STARTING;
+      }
+    }
+    
+  }
+  
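+  // RoleFSM stub that becomes ACTIVE on a START_SUCCESS event.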
+  class TestRoleImpl implements RoleFSM, EventHandler<RoleEvent>  {
+ 
+    RoleState roleState;
+    String roleName;
+    boolean shouldStart = true;
+    public void setShouldStart(boolean shouldStart) {
+      this.shouldStart = shouldStart;
+    }
+    public void setRoleState(RoleState roleState) {
+      this.roleState = roleState;
+    }
+    
+    public TestRoleImpl(String role) {
+      this.roleName = role;
+    }
+    @Override
+    public RoleState getRoleState() {
+      return roleState;
+    }
+
+    @Override
+    public String getRoleName() {
+      return roleName;
+    }
+
+    @Override
+    public ServiceFSM getAssociatedService() {
+      // TODO Auto-generated method stub
+      return null;
+    }
+
+    @Override
+    public boolean shouldStop() {
+      return false;
+    }
+
+    @Override
+    public boolean shouldStart() {
+      return shouldStart;
+    }
+
+    @Override
+    public void activate() {
+      // TODO Auto-generated method stub
+      
+    }
+
+    @Override
+    public void deactivate() {
+      // TODO Auto-generated method stub
+      
+    }
+
+    @Override
+    public void handle(RoleEvent event) {
+      if (event.getType() == RoleEventType.START_SUCCESS) {
+        roleState = RoleState.ACTIVE;
+      }
+    }
+  }
+}
diff --git a/controller/src/test/java/org/apache/ambari/controller/TestStackFlattener.java b/controller/src/test/java/org/apache/ambari/controller/TestStackFlattener.java
new file mode 100644
index 0000000..f207ec9
--- /dev/null
+++ b/controller/src/test/java/org/apache/ambari/controller/TestStackFlattener.java
@@ -0,0 +1,224 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.ambari.controller;
+
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+import org.apache.ambari.common.rest.entities.Component;
+import org.apache.ambari.common.rest.entities.ComponentDefinition;
+import org.apache.ambari.common.rest.entities.Configuration;
+import org.apache.ambari.common.rest.entities.ConfigurationCategory;
+import org.apache.ambari.common.rest.entities.Property;
+import org.apache.ambari.common.rest.entities.RepositoryKind;
+import org.apache.ambari.common.rest.entities.Role;
+import org.apache.ambari.common.rest.entities.Stack;
+import org.apache.ambari.common.rest.entities.UserGroup;
+import org.apache.ambari.components.ComponentPlugin;
+import org.apache.ambari.components.ComponentPluginFactory;
+
+import org.testng.annotations.BeforeMethod;
+import org.testng.annotations.Test;
+import static org.testng.AssertJUnit.assertEquals;
+
+public class TestStackFlattener {
+  
+  Stacks stacks;
+  Stack parentStack;
+  Stack childStack;
+  Stack grandchildStack;
+  ComponentPluginFactory plugins;
+  ComponentPlugin hdfs;
+  ComponentPlugin mapreduce;
+  ComponentDefinition hdfsDefn;
+  ComponentDefinition mapreduceDefn;
+  Component parentHdfs;
+  Component parentMapreduce;
+  StackFlattener flattener;
+  
+  @BeforeMethod
+  public void setup() throws Exception {
+    stacks =  mock(Stacks.class);
+    parentStack = new Stack();
+    parentStack.setName("parent");
+    childStack = new Stack();
+    childStack.setName("child");
+    grandchildStack = new Stack();
+    grandchildStack.setName("grandchild");
+    childStack.setParentName(parentStack.getName());
+    childStack.setParentRevision(0);
+    grandchildStack.setParentName(childStack.getName());
+    grandchildStack.setParentRevision(0);
+    when(stacks.getStack(parentStack.getName(), 0)).thenReturn(parentStack);
+    when(stacks.getStack(childStack.getName(), 0)).thenReturn(childStack);    
+    when(stacks.getStack(grandchildStack.getName(), 0)).
+      thenReturn(grandchildStack);
+    plugins = mock(ComponentPluginFactory.class);
+    hdfs = mock(ComponentPlugin.class);
+    when(hdfs.getActiveRoles()).
+      thenReturn(new String[]{"namenode", "datanode"});
+    mapreduce = mock(ComponentPlugin.class);
+    when(mapreduce.getActiveRoles()).
+      thenReturn(new String[]{"jobtracker","tasktracker"});
+    hdfsDefn = new ComponentDefinition("hdfs", "org.apache.ambari", "0");
+    mapreduceDefn = new ComponentDefinition("mapreduce", "org.apache.ambari", 
+                                            "0");
+    when(plugins.getPlugin(hdfsDefn)).thenReturn(hdfs);
+    when(plugins.getPlugin(mapreduceDefn)).thenReturn(mapreduce);
+    parentHdfs = new Component("hdfs", "0.20.205.0", "i386", 
+                               "org.apache.ambari",
+                               new ComponentDefinition("hdfs", 
+                                   "org.apache.ambari", "0"), 
+                               new Configuration(), new ArrayList<Role>(), new UserGroup());
+    parentMapreduce = new Component("mapreduce", "0.20.205.0", "i386", 
+                                    "org.apache.ambari",
+                                    new ComponentDefinition("mapreduce", 
+                                                       "org.apache.ambari","0"), 
+                                    new Configuration(), new ArrayList<Role>(), new UserGroup());
+    List<Component> compList = new ArrayList<Component>();
+    parentStack.setComponents(compList);
+    compList.add(parentHdfs);
+    compList.add(parentMapreduce);
+    flattener = new StackFlattener(stacks, plugins);
+  }
+
+  @Test
+  public void testRepositoryFlattening() throws Exception {
+    parentStack.setPackageRepositories(Arrays.asList
+        (new RepositoryKind("kind1", "url1", "url2"),
+         new RepositoryKind("kind2", "url3", "url4")));
+    childStack.setPackageRepositories(Arrays.asList
+        (new RepositoryKind("kind3", "url5")));
+    grandchildStack.setPackageRepositories(Arrays.asList
+        (new RepositoryKind("kind1", "url7", "url8"),
+         new RepositoryKind("kind3", "url9", "url10")));
+    grandchildStack.setRevision("123");
+    Stack flat = flattener.flattenStack("grandchild", 0);
+    List<RepositoryKind> answer = flat.getPackageRepositories();
+    assertEquals(new RepositoryKind("kind1", "url7", "url8", "url1", "url2"), 
+                 answer.get(0));
+    assertEquals(new RepositoryKind("kind2", "url3", "url4"), answer.get(1));
+    assertEquals(new RepositoryKind("kind3", "url9", "url10", "url5"), 
+                 answer.get(2));
+    
+    // ensure the name and parent name are what we expect
+    assertEquals("grandchild", flat.getName());
+    assertEquals("123", flat.getRevision());
+    assertEquals(null, flat.getParentName());
+    assertEquals(0, flat.getParentRevision());
+  }
+ 
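+  // Sets (or overwrites) a property in the named category, creating the
+  // category when it does not exist yet.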
+  static void setConfigParam(Configuration conf, String category, String key,
+                             String value) {
+    for(ConfigurationCategory cat: conf.getCategory()) {
+      if (cat.getName().equals(category)) {
+        for(Property prop: cat.getProperty()) {
+          if (prop.getName().equals(key)) {
+            // found the right property: update it and stop
+            prop.setValue(value);
+            return;
+          }
+        }
+        // the category exists but the property does not: add it and stop
+        cat.getProperty().add(new Property(key, value));
+        return;
+      }
+    }
+    // otherwise, it is a new category
+    List<Property> propList = new ArrayList<Property>();
+    propList.add(new Property(key,value));
+    conf.getCategory().add(new ConfigurationCategory(category, propList));
+  }
+  
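+  // Returns the value of a property in the named category, or null if absent.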
+  static String getConfigParam(Configuration conf, String category, 
+                               String key) {
+    for(ConfigurationCategory cat: conf.getCategory()) {
+      if (cat.getName().equals(category)) {
+        for(Property prop: cat.getProperty()) {
+          if (prop.getName().equals(key)) {
+            return prop.getValue();
+          }
+        }
+        return null;
+      }
+    }
+    return null;
+  }
+
+  @Test
+  public void testConfigFlattening() throws Exception {
+    Configuration parentConfiguration = new Configuration();
+    parentStack.setConfiguration(parentConfiguration);
+    Configuration childConfiguration = new Configuration();
+    childStack.setConfiguration(childConfiguration);
+    Configuration grandchildConfiguration = new Configuration();
+    grandchildStack.setConfiguration(grandchildConfiguration);
+    Configuration parentHdfsConfig = parentHdfs.getConfiguration();
+    Configuration parentMapredConfig = parentMapreduce.getConfiguration();
+    List<Role> hdfsRoles = new ArrayList<Role>();
+    Configuration childHdfsConfig = new Configuration();
+    childStack.getComponents().add(
+        new Component("hdfs", null, null, null, null, 
+                      childHdfsConfig, hdfsRoles, new UserGroup()));
+    Configuration nnConf = new Configuration();
+    hdfsRoles.add(new Role("namenode", nnConf));
+    setConfigParam(parentConfiguration, "ambari", "global", "global-value");
+    setConfigParam(parentConfiguration, "cat1", "b", "parent");
+    setConfigParam(parentConfiguration, "cat1", "a", "a-value");
+    setConfigParam(parentConfiguration, "cat2", "b", "cat2-value");
+    setConfigParam(childConfiguration, "cat1", "b", "child");
+    setConfigParam(parentHdfsConfig, "cat1", "b", "parent-hdfs");
+    setConfigParam(parentHdfsConfig, "cat1", "d", "d-value");
+    setConfigParam(parentMapredConfig, "cat1", "b", "parent-mapred");
+    setConfigParam(grandchildConfiguration, "cat1", "b", "grandchild");
+    setConfigParam(childHdfsConfig, "cat1", "b", "child-hdfs");
+    setConfigParam(nnConf, "cat1", "b", "nn");
+    setConfigParam(nnConf, "cat1", "c", "nn-c");
+    Stack flat = flattener.flattenStack("grandchild", 0);
+    Configuration conf = flat.getConfiguration();
+    assertEquals("a-value", getConfigParam(conf, "cat1", "a"));
+    assertEquals("cat2-value", getConfigParam(conf, "cat2", "b"));
+    assertEquals("grandchild", getConfigParam(conf, "cat1", "b"));
+    assertEquals("global-value", getConfigParam(conf, "ambari", "global"));
+    assertEquals(null, getConfigParam(conf, "cat1", "c"));
+    assertEquals(null, getConfigParam(conf, "cat1", "d"));
+    Component comp = flat.getComponents().get(0);
+    assertEquals("hdfs", comp.getName());
+    assertEquals(null, comp.getConfiguration());
+    Role role = comp.getRoles().get(0);
+    assertEquals("namenode", role.getName());
+    conf = role.getConfiguration();
+    assertEquals("a-value", getConfigParam(conf, "cat1", "a"));
+    assertEquals("cat2-value", getConfigParam(conf, "cat2", "b"));
+    assertEquals("grandchild", getConfigParam(conf, "cat1", "b"));
+    assertEquals(null, getConfigParam(conf, "ambari", "global"));
+    assertEquals("nn-c", getConfigParam(conf, "cat1", "c"));
+    assertEquals("d-value", getConfigParam(conf, "cat1", "d"));
+    role = comp.getRoles().get(1);
+    assertEquals("datanode", role.getName());
+    conf = role.getConfiguration();
+    assertEquals("a-value", getConfigParam(conf, "cat1", "a"));
+    assertEquals("cat2-value", getConfigParam(conf, "cat2", "b"));
+    assertEquals("grandchild", getConfigParam(conf, "cat1", "b"));
+    assertEquals(null, getConfigParam(conf, "ambari", "global"));
+    assertEquals(null, getConfigParam(conf, "cat1", "c"));
+    assertEquals("d-value", getConfigParam(conf, "cat1", "d"));
+  }
+}
diff --git a/common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java b/controller/src/test/java/org/apache/ambari/datastore/TestStaticDataStore.java
old mode 100755
new mode 100644
similarity index 60%
copy from common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java
copy to controller/src/test/java/org/apache/ambari/datastore/TestStaticDataStore.java
index 5f23e2b..72f475c
--- a/common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java
+++ b/controller/src/test/java/org/apache/ambari/datastore/TestStaticDataStore.java
@@ -15,18 +15,20 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
+package org.apache.ambari.datastore;
 
-package org.apache.hms.common.util;
+import org.apache.ambari.common.rest.entities.Stack;
+import org.apache.ambari.datastore.DataStore;
+import org.apache.ambari.datastore.StaticDataStore;
+import org.testng.annotations.Test;
+import static org.testng.AssertJUnit.assertEquals;
 
-import java.io.PrintWriter;
-import java.io.StringWriter;
+public class TestStaticDataStore {
 
-public class ExceptionUtil {
-  public static String getStackTrace(Throwable t) {
-    StringWriter sw = new StringWriter();
-    PrintWriter pw = new PrintWriter(sw);
-    t.printStackTrace(pw);
-    pw.flush();
-    return sw.toString();
+  @Test
+  public void testGetStack() throws Exception {
+    DataStore ds = new StaticDataStore();
+    Stack stack = ds.retrieveStack("puppet1", -1);
+    assertEquals("can fetch revision -1", "0", stack.getRevision());
   }
 }
diff --git a/controller/src/test/java/org/apache/ambari/resource/statemachine/NoOpDispatcher.java b/controller/src/test/java/org/apache/ambari/resource/statemachine/NoOpDispatcher.java
new file mode 100644
index 0000000..e5a1d65
--- /dev/null
+++ b/controller/src/test/java/org/apache/ambari/resource/statemachine/NoOpDispatcher.java
@@ -0,0 +1,36 @@
+package org.apache.ambari.resource.statemachine;
+
+import org.apache.ambari.event.Dispatcher;
+import org.apache.ambari.event.Event;
+import org.apache.ambari.event.EventHandler;
+
+/**
+ * No-op Dispatcher for testing stateful objects in isolation.
+ */
+class NoOPDispatcher implements Dispatcher {
+  class NoOPEventHandler implements EventHandler<Event>{
+
+    @Override
+    public void handle(Event event) {
+     //no-op
+    }
+    
+  }
+  EventHandler<?> ehandler = new NoOPEventHandler();
+  
+  @Override
+  public EventHandler<?> getEventHandler() {
+    return ehandler;
+  }
+
+  @Override
+  public void register(Class<? extends Enum> eventType, EventHandler handler) {
+    //no-op
+  }
+
+  @Override
+  public void start() {
+    //no-op
+  }
+  
+}
\ No newline at end of file
diff --git a/common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java b/controller/src/test/java/org/apache/ambari/resource/statemachine/StateMachineInvokerImplNoOp.java
old mode 100755
new mode 100644
similarity index 70%
copy from common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java
copy to controller/src/test/java/org/apache/ambari/resource/statemachine/StateMachineInvokerImplNoOp.java
index 5f23e2b..6367349
--- a/common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java
+++ b/controller/src/test/java/org/apache/ambari/resource/statemachine/StateMachineInvokerImplNoOp.java
@@ -15,18 +15,14 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
+package org.apache.ambari.resource.statemachine;
 
-package org.apache.hms.common.util;
+import org.apache.ambari.event.EventHandler;
 
-import java.io.PrintWriter;
-import java.io.StringWriter;
+class StateMachineInvokerImplNoOp implements StateMachineInvokerInterface {
 
-public class ExceptionUtil {
-  public static String getStackTrace(Throwable t) {
-    StringWriter sw = new StringWriter();
-    PrintWriter pw = new PrintWriter(sw);
-    t.printStackTrace(pw);
-    pw.flush();
-    return sw.toString();
-  }
-}
+  @Override
+  public EventHandler getAMBARIEventHandler() {
+    return new NoOPDispatcher().getEventHandler();
+  }  
+}
\ No newline at end of file
diff --git a/common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java b/controller/src/test/java/org/apache/ambari/resource/statemachine/StateMachineInvokerSync.java
old mode 100755
new mode 100644
similarity index 70%
copy from common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java
copy to controller/src/test/java/org/apache/ambari/resource/statemachine/StateMachineInvokerSync.java
index 5f23e2b..7ecf20f
--- a/common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java
+++ b/controller/src/test/java/org/apache/ambari/resource/statemachine/StateMachineInvokerSync.java
@@ -15,18 +15,14 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
+package org.apache.ambari.resource.statemachine;
 
-package org.apache.hms.common.util;
+import org.apache.ambari.event.EventHandler;
 
-import java.io.PrintWriter;
-import java.io.StringWriter;
+class StateMachineInvokerSync implements StateMachineInvokerInterface {
 
-public class ExceptionUtil {
-  public static String getStackTrace(Throwable t) {
-    StringWriter sw = new StringWriter();
-    PrintWriter pw = new PrintWriter(sw);
-    t.printStackTrace(pw);
-    pw.flush();
-    return sw.toString();
-  }
-}
+  @Override
+  public EventHandler getAMBARIEventHandler() {
+    return new SyncDispatcher().getEventHandler();
+  }  
+}
\ No newline at end of file
diff --git a/controller/src/test/java/org/apache/ambari/resource/statemachine/SyncDispatcher.java b/controller/src/test/java/org/apache/ambari/resource/statemachine/SyncDispatcher.java
new file mode 100644
index 0000000..c5160d6
--- /dev/null
+++ b/controller/src/test/java/org/apache/ambari/resource/statemachine/SyncDispatcher.java
@@ -0,0 +1,51 @@
+package org.apache.ambari.resource.statemachine;
+
+import org.apache.ambari.event.Dispatcher;
+import org.apache.ambari.event.Event;
+import org.apache.ambari.event.EventHandler;
+
+/**
+ * Dispatcher that delivers each event synchronously to the target FSM,
+ * letting tests drive state transitions deterministically.
+ */
+class SyncDispatcher implements Dispatcher {
+  class SyncEventHandler implements EventHandler<Event>{
+
+    @Override
+    public void handle(Event event) {
+      Class<?> eventClass = event.getType().getDeclaringClass();
+      if(eventClass.equals(ClusterEventType.class)){
+        ClusterEvent cevent = (ClusterEvent)event;
+        ((EventHandler<ClusterEvent>)cevent.getCluster()).handle(cevent);
+      }
+      else if(eventClass.equals(ServiceEventType.class)){
+        ServiceEvent sevent = (ServiceEvent)event;
+        ((EventHandler<ServiceEvent>)sevent.getService()).handle(sevent);
+      }
+      else if(eventClass.equals(RoleEventType.class)){
+        RoleEvent revent = (RoleEvent)event;
+        ((EventHandler<RoleEvent>)revent.getRole()).handle(revent);
+      }
+      else {
+        throw new UnsupportedOperationException("invalid event class: " + eventClass);
+      }
+    }
+    
+  }
+  EventHandler<?> ehandler = new SyncEventHandler();
+  
+  @Override
+  public EventHandler<?> getEventHandler() {
+    return ehandler;
+  }
+
+  @Override
+  public void register(Class<? extends Enum> eventType, EventHandler handler) {
+    //no-op
+  }
+
+  @Override
+  public void start() {
+    //no-op
+  }
+  
+}
\ No newline at end of file
diff --git a/controller/src/test/java/org/apache/ambari/resource/statemachine/TestClusterImpl.java b/controller/src/test/java/org/apache/ambari/resource/statemachine/TestClusterImpl.java
new file mode 100644
index 0000000..0e00d6f
--- /dev/null
+++ b/controller/src/test/java/org/apache/ambari/resource/statemachine/TestClusterImpl.java
@@ -0,0 +1,157 @@
+package org.apache.ambari.resource.statemachine;
+
+import static org.mockito.Matchers.anyInt;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+import static org.mockito.Mockito.doAnswer;
+import static org.testng.Assert.assertEquals;
+import static org.testng.Assert.assertTrue;
+
+import java.io.IOException;
+import java.util.ArrayList;
+
+import org.apache.ambari.common.rest.entities.ClusterDefinition;
+import org.apache.ambari.common.rest.entities.ClusterState;
+import org.apache.ambari.controller.Cluster;
+import org.mockito.Mockito;
+import org.mockito.invocation.InvocationOnMock;
+import org.mockito.stubbing.Answer;
+import org.testng.annotations.BeforeMethod;
+import org.testng.annotations.Test;
+
+import com.google.inject.Guice;
+/**
+ * Tests state transitions within ClusterImpl. Does not test the interaction
+ * between ClusterFSM and ServiceFSM.
+ */
+public class TestClusterImpl {
+  
+
+  ClusterImpl clusterImpl;
+  ClusterEventType [] startEvents = {
+      ClusterEventType.START, 
+      ClusterEventType.START_SUCCESS,
+  };
+  
+  ClusterStateFSM [] startStates = {
+      ClusterStateFSM.STARTING,
+      ClusterStateFSM.ACTIVE
+  };
+  
+  ClusterState clsState;
+  Cluster cluster;
+  int numUpdateClusterMethodCalls = 0;
+  
+  @BeforeMethod
+  public void setup() throws IOException{
+    Guice.createInjector(new TestModule());
+    ClusterDefinition clusterDef = mock(ClusterDefinition.class);
+    when(clusterDef.getEnabledServices()).thenReturn(new ArrayList<String>());
+    cluster = mock(Cluster.class);
+    clsState = new ClusterState();
+    when(cluster.getClusterDefinition(anyInt())).thenReturn(clusterDef);
+    when(cluster.getClusterState()).thenReturn(clsState);
+    clusterImpl = new ClusterImpl(cluster, 1);
+    numUpdateClusterMethodCalls = 0;
+  }
+  
+  /**
+   * Test INACTIVE to ACTIVE transition 
+   * @throws Exception
+   */
+  @Test
+  public void testInactiveToActive() throws Exception{
+    doAnswer(new Answer<Void>(){
+        public Void answer(InvocationOnMock invocation) throws Throwable {
+            ClusterState cs = (ClusterState)invocation.getArguments()[0];
+            assertTrue(cs.getState().equals(ClusterState.CLUSTER_STATE_ACTIVE));
+            numUpdateClusterMethodCalls++;
+            return null;
+        }     
+    }).when(cluster).updateClusterState(clsState);
+    verifyTransitions(ClusterStateFSM.INACTIVE, startEvents, startStates);
+    assertTrue(numUpdateClusterMethodCalls == 1);
+  }
+
+ 
+  /**
+   * Test FAIL to ACTIVE transition
+   * @throws Exception
+   */
+  @Test
+  public void testFailToActive() throws Exception{
+    doAnswer(new Answer<Void>(){
+        public Void answer(InvocationOnMock invocation) throws Throwable {
+            ClusterState cs = (ClusterState)invocation.getArguments()[0];
+            assertTrue(cs.getState().equals(ClusterState.CLUSTER_STATE_ACTIVE));
+            numUpdateClusterMethodCalls++;
+            return null;
+        }     
+    }).when(cluster).updateClusterState(clsState);
+    verifyTransitions(ClusterStateFSM.FAIL, startEvents, startStates);
+    assertTrue(numUpdateClusterMethodCalls == 1);
+  }
+  
+  ClusterEventType [] stopEvents = {
+      ClusterEventType.STOP, 
+      ClusterEventType.STOP_SUCCESS,
+  };
+  
+  ClusterStateFSM [] stopStates = {
+      ClusterStateFSM.STOPPING,
+      ClusterStateFSM.INACTIVE
+  };
+  
+  
+  /**
+   * Test ACTIVE to INACTIVE transition
+   * @throws Exception
+   */
+  @Test
+  public void testActivetoInactive() throws Exception{
+    doAnswer(new Answer<Void>(){
+        public Void answer(InvocationOnMock invocation) throws Throwable {
+            ClusterState cs = (ClusterState)invocation.getArguments()[0];
+            assertTrue(cs.getState().equals(ClusterState.CLUSTER_STATE_INACTIVE));
+            numUpdateClusterMethodCalls++;
+            return null;
+        }     
+    }).when(cluster).updateClusterState(clsState);
+    verifyTransitions(ClusterStateFSM.ACTIVE, stopEvents, stopStates);
+    assertTrue(numUpdateClusterMethodCalls == 1);
+  }
+  
+  
+  /**
+   * Test FAIL to INACTIVE transition
+   * @throws Exception
+   */
+  @Test
+  public void testFailtoInactive() throws Exception{
+    doAnswer(new Answer<Void>(){
+        public Void answer(InvocationOnMock invocation) throws Throwable {
+            ClusterState cs = (ClusterState)invocation.getArguments()[0];
+            assertTrue(cs.getState().equals(ClusterState.CLUSTER_STATE_INACTIVE));
+            numUpdateClusterMethodCalls++;
+            return null;
+        }     
+    }).when(cluster).updateClusterState(clsState);
+    verifyTransitions(ClusterStateFSM.FAIL, stopEvents, stopStates);
+    assertTrue(numUpdateClusterMethodCalls == 1);
+  }
+  
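+  // Drives the cluster FSM through the given events, checking each expected state.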
+  private void verifyTransitions(ClusterStateFSM startState,
+      ClusterEventType[] clusterEvents, ClusterStateFSM[] expectedStates) {
+    clusterImpl.getStateMachine().setCurrentState(startState);
+    for(int i=0; i < clusterEvents.length; i++){
+      ClusterEventType event = clusterEvents[i];
+      clusterImpl.handle(new ClusterEvent(event, clusterImpl));
+      ClusterStateFSM expectedState = expectedStates[i];
+      assertEquals(clusterImpl.getStateMachine().getCurrentState(), expectedState);
+    }
+  }
+  
+}
diff --git a/controller/src/test/java/org/apache/ambari/resource/statemachine/TestClusterImplFailure.java b/controller/src/test/java/org/apache/ambari/resource/statemachine/TestClusterImplFailure.java
new file mode 100644
index 0000000..4b1310b
--- /dev/null
+++ b/controller/src/test/java/org/apache/ambari/resource/statemachine/TestClusterImplFailure.java
@@ -0,0 +1,153 @@
+package org.apache.ambari.resource.statemachine;
+
+import static org.mockito.Matchers.anyInt;
+import static org.mockito.Matchers.anyString;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+import static org.testng.Assert.assertEquals;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Properties;
+
+import org.apache.ambari.common.rest.entities.ClusterDefinition;
+import org.apache.ambari.components.ComponentPlugin;
+import org.apache.ambari.configuration.Configuration;
+import org.apache.ambari.controller.Cluster;
+import org.apache.ambari.controller.ControllerModule;
+import org.testng.annotations.BeforeMethod;
+import org.testng.annotations.Test;
+
+import com.google.inject.Guice;
+
+/**
+ * Tests state machine handling of failure scenarios.
+ */
+public class TestClusterImplFailure {
+  
+ 
+  
+  private ClusterImpl clusterImpl;
+  private ServiceImpl service;
+  private RoleImpl role;
+
+  @BeforeMethod
+  public void setup() throws IOException{
+    Guice.createInjector(new TestModule());
+    ClusterDefinition clusterDef = mock(ClusterDefinition.class);
+    List<String> services = new ArrayList<String>();
+    services.add("service1");
+    when(clusterDef.getEnabledServices()).thenReturn(services);
+    Cluster cluster = mock(Cluster.class);
+    when(cluster.getClusterDefinition(anyInt())).thenReturn(clusterDef);
+
+    String [] roles = {"role1"};
+
+    ComponentPlugin compDef = mock(ComponentPlugin.class);
+    when(compDef.getActiveRoles()).thenReturn(roles);
+    when(cluster.getComponentDefinition(anyString())).thenReturn(compDef);
+    clusterImpl = new ClusterImpl(cluster, 1);
+    service = (ServiceImpl)clusterImpl.getServices().get(0);
+    role = (RoleImpl)service.getRoles().get(0);
+  
+  }
+  
+  private static class TestConfiguration extends Configuration {
+    TestConfiguration() {
+      super(getProperties());
+    }
+    private static Properties getProperties() {
+      Properties props = new Properties();
+      props.setProperty("data.store", "test:/");
+      return props;
+    }
+  }
+  private static class TestModule extends ControllerModule {
+    @Override
+    protected void configure() {
+      super.configure();
+      bind(StateMachineInvokerInterface.class)
+      .to(StateMachineInvokerSync.class);
+      bind(Configuration.class).to(TestConfiguration.class);
+    }
+  }
+  
+  
+  /**
+   * cluster should go into fail state if PRESTART fails
+   */
+  @Test
+  public void testPrestartFail() {
+    
+    checkStates(ClusterStateFSM.INACTIVE, ServiceState.INACTIVE, RoleState.INACTIVE);
+
+    clusterImpl.activate();
+    checkStates(ClusterStateFSM.STARTING, ServiceState.PRESTART, RoleState.INACTIVE);
+    
+    service.handle(new ServiceEvent(ServiceEventType.PRESTART_FAILURE, service));
+    checkStates(ClusterStateFSM.FAIL, ServiceState.FAIL, RoleState.INACTIVE);
+  }
+
+  private void checkStates(ClusterStateFSM clusterState, ServiceState serviceState,
+      RoleState roleState) {
+    assertEquals(clusterImpl.getState(), clusterState);
+    assertEquals(service.getServiceState(), serviceState);
+    assertEquals(role.getRoleState(), roleState);    
+  }
+
+  /**
+   * cluster should go into fail state if role start fails
+   */
+  @Test
+  public void testRoleStartFail() {
+    checkStates(ClusterStateFSM.INACTIVE, ServiceState.INACTIVE, RoleState.INACTIVE);
+    
+    clusterImpl.activate();
+    checkStates(ClusterStateFSM.STARTING, ServiceState.PRESTART, RoleState.INACTIVE);
+    
+    service.handle(new ServiceEvent(ServiceEventType.PRESTART_SUCCESS, service));
+    checkStates(ClusterStateFSM.STARTING, ServiceState.STARTING, RoleState.STARTING);
+
+    role.handle(new RoleEvent(RoleEventType.START_FAILURE, role));
+    checkStates(ClusterStateFSM.FAIL, ServiceState.FAIL, RoleState.FAIL);
+    
+  }
+  
+  
+  /**
+   * cluster should go into fail state if service availability check fails
+   */
+  @Test
+  public void testServiceAvailFailure() {
+    
+    setStates(ClusterStateFSM.STARTING, ServiceState.STARTED, RoleState.ACTIVE);
+    
+    service.handle(new ServiceEvent(ServiceEventType.AVAILABLE_CHECK_FAILURE, service));
+    checkStates(ClusterStateFSM.FAIL, ServiceState.FAIL, RoleState.ACTIVE);
+    
+  }
+  
+  /**
+   * cluster should go into fail state if stopping a role fails
+   */
+  @Test
+  public void testRoleStopFailure() {
+    
+    setStates(ClusterStateFSM.STOPPING, ServiceState.STOPPING, RoleState.STOPPING);
+    role.handle(new RoleEvent(RoleEventType.STOP_FAILURE, role));
+    
+    checkStates(ClusterStateFSM.FAIL, ServiceState.FAIL, RoleState.FAIL);
+    
+  }
+  
+  private void setStates(ClusterStateFSM clusterState, ServiceState serviceState,
+      RoleState roleState) {
+   clusterImpl.getStateMachine().setCurrentState(clusterState);
+   service.getStateMachine().setCurrentState(serviceState);
+   role.getStateMachine().setCurrentState(roleState);
+    
+  }
+
+  
+}
diff --git a/controller/src/test/java/org/apache/ambari/resource/statemachine/TestClusterImplServiceCreation.java b/controller/src/test/java/org/apache/ambari/resource/statemachine/TestClusterImplServiceCreation.java
new file mode 100644
index 0000000..9b99030
--- /dev/null
+++ b/controller/src/test/java/org/apache/ambari/resource/statemachine/TestClusterImplServiceCreation.java
@@ -0,0 +1,85 @@
+package org.apache.ambari.resource.statemachine;
+
+import static org.mockito.Matchers.anyInt;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+import static org.testng.Assert.assertEquals;
+
+import java.io.IOException;
+import java.util.Arrays;
+
+import org.apache.ambari.common.rest.entities.ClusterDefinition;
+import org.apache.ambari.components.ComponentPlugin;
+import org.apache.ambari.controller.Cluster;
+import org.testng.annotations.Test;
+
+public class TestClusterImplServiceCreation {
+
+  /**
+   * Create cluster with two components, both having active roles.
+   * There should be two component objects in the ClusterImpl created
+   * @throws IOException
+   */
+  @Test
+  public void testClusterImplWithTwoActiveComponents() throws IOException {
+
+    //set component plugin that returns one active role
+    ComponentPlugin pluginWActiveRole = mock(ComponentPlugin.class);
+    String[] servicesWithActive = {"abc"};
+    when(pluginWActiveRole.getActiveRoles()).thenReturn(servicesWithActive);
+
+    ClusterImpl clusterImpl = buildClusterImplWithComponents(pluginWActiveRole, pluginWActiveRole);
+    assertEquals(clusterImpl.getServices().size(), 2, "number of components with active service");     
+
+  }
+  
+  /**
+   * Create cluster with two components, only one of which has active role(s)
+   * There should be only one component object in the ClusterImpl created
+   * @throws IOException
+   */
+  @Test
+  public void testClusterImplWithOneActiveComponents() throws IOException {
+
+    //set component plugin that returns one active role
+    ComponentPlugin pluginWActiveRole = mock(ComponentPlugin.class);
+    String[] servicesWithActive = {"abc"};
+    when(pluginWActiveRole.getActiveRoles()).thenReturn(servicesWithActive);
+
+    //set component plugin that returns NO active roles
+    ComponentPlugin pluginWOActiveRole = mock(ComponentPlugin.class);
+    String[] servicesNoActive = {};
+    when(pluginWOActiveRole.getActiveRoles()).thenReturn(servicesNoActive);
+    
+    ClusterImpl clusterImpl = buildClusterImplWithComponents(pluginWActiveRole, pluginWOActiveRole);
+    assertEquals(clusterImpl.getServices().size(), 1, "number of components with active service");     
+    
+  }
+  
+  
+
+  /**
+   * Create a mocked ClusterImpl that has two components, using the ComponentPlugins args 
+   * @param componentPlugin1 - the ComponentPlugin for first component
+   * @param componentPlugin2 - the ComponentPlugin for second component
+   * @return the ClusterImpl 
+   * @throws IOException
+   */
+  private ClusterImpl buildClusterImplWithComponents(
+      ComponentPlugin componentPlugin1, ComponentPlugin componentPlugin2)
+          throws IOException {
+    //set list of components
+    ClusterDefinition cdef = mock(ClusterDefinition.class);
+    when(cdef.getEnabledServices()).thenReturn(Arrays.asList("comp1","comp2"));
+
+    Cluster cluster = mock(Cluster.class);
+    when(cluster.getClusterDefinition(anyInt())).thenReturn(cdef);
+
+    when(cluster.getComponentDefinition("comp1")).thenReturn(componentPlugin1);
+    when(cluster.getComponentDefinition("comp2")).thenReturn(componentPlugin2);
+
+    ClusterImpl clusterImpl = new ClusterImpl(cluster, 1);
+    return clusterImpl;
+  }
+
+}
diff --git a/common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java b/controller/src/test/java/org/apache/ambari/resource/statemachine/TestModule.java
old mode 100755
new mode 100644
similarity index 69%
copy from common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java
copy to controller/src/test/java/org/apache/ambari/resource/statemachine/TestModule.java
index 5f23e2b..4919c01
--- a/common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java
+++ b/controller/src/test/java/org/apache/ambari/resource/statemachine/TestModule.java
@@ -16,17 +16,16 @@
  * limitations under the License.
  */
 
-package org.apache.hms.common.util;
+package org.apache.ambari.resource.statemachine;
 
-import java.io.PrintWriter;
-import java.io.StringWriter;
+import com.google.inject.AbstractModule;
 
-public class ExceptionUtil {
-  public static String getStackTrace(Throwable t) {
-    StringWriter sw = new StringWriter();
-    PrintWriter pw = new PrintWriter(sw);
-    t.printStackTrace(pw);
-    pw.flush();
-    return sw.toString();
+class TestModule extends AbstractModule {
+  @Override
+  protected void configure() {
+    bind(StateMachineInvokerInterface.class)
+    .to(StateMachineInvokerImplNoOp.class);
+    requestStaticInjection(RoleImpl.class, ServiceImpl.class, 
+        ClusterImpl.class);
   }
 }
diff --git a/controller/src/test/java/org/apache/ambari/resource/statemachine/TestRoleImpl.java b/controller/src/test/java/org/apache/ambari/resource/statemachine/TestRoleImpl.java
new file mode 100644
index 0000000..a93a568
--- /dev/null
+++ b/controller/src/test/java/org/apache/ambari/resource/statemachine/TestRoleImpl.java
@@ -0,0 +1,148 @@
+package org.apache.ambari.resource.statemachine;
+
+import static org.mockito.Mockito.mock;
+import static org.testng.Assert.assertEquals;
+import static org.testng.Assert.assertTrue;
+
+import java.io.IOException;
+
+import org.apache.ambari.common.state.InvalidStateTransitonException;
+import org.testng.annotations.BeforeMethod;
+import org.testng.annotations.Test;
+
+import com.google.inject.Guice;
+
+
+public class TestRoleImpl {
+  RoleImpl role;
+  
+  @BeforeMethod
+  public void setup(){
+    Guice.createInjector(new TestModule());
+    ServiceFSM service = mock(ServiceFSM.class);  
+    role = new RoleImpl(service, "role1");
+  }
+ 
+  @Test
+  public void testStateTransitionsInactiveToActive() throws IOException {     
+    //from inactive to active
+    verifyTransitions(RoleState.INACTIVE, getEventsForActivate(), getStatesToActive());
+  }
+
+  @Test
+  public void testStateTransitionsFailToActive() throws IOException {
+    //from fail to active
+    verifyTransitions(RoleState.FAIL, getEventsForActivate(), getStatesToActive());
+  }
+  
+  @Test
+  public void testStateTransitionsActiveToActive() throws IOException {
+    //start event on active state throws exception
+      verifyTransitionException(RoleState.ACTIVE, getEventsForActivate(), getStatesToActive());
+  }  
+  
+  RoleEventType[] getEventsForActivate(){
+    //events that would move role to activated
+    RoleEventType[] roleEvents = {RoleEventType.START, RoleEventType.START_SUCCESS};
+    return roleEvents;
+  }
+  
+  RoleState[] getStatesToActive(){
+    //states to active state
+    RoleState[] roleStates = {RoleState.STARTING, RoleState.ACTIVE};
+    return roleStates;
+  }
+
+  @Test
+  public void testStateTransitionFailToInactive(){
+    //from fail to inactive
+    verifyTransitions(RoleState.FAIL, getEventsForInActivate(), getStatesToInActive());
+  }
+  
+  @Test
+  public void testStateTransitionActiveToInactive(){
+    //from active to inactive
+    verifyTransitions(RoleState.ACTIVE, getEventsForInActivate(), getStatesToInActive());
+  }
+  
+  @Test
+  public void testStateTransitionInactiveToInactive(){
+    //inactive to inactive throws an exception
+    verifyTransitionException(RoleState.INACTIVE, getEventsForInActivate(), getStatesToInActive());
+  } 
+  
+  
+  RoleEventType[] getEventsForInActivate(){
+    //events that would move the role to inactive
+    RoleEventType[] roleEvents = {RoleEventType.STOP, RoleEventType.STOP_SUCCESS};
+    return roleEvents;
+  }
+  
+  RoleState[] getStatesToInActive(){
+    //states on the way to inactive
+    RoleState[] roleStates = {RoleState.STOPPING, RoleState.INACTIVE};
+    return roleStates;
+  }
+  
+  @Test
+  public void testStateTransitionInactiveToFail(){
+    RoleEventType[] startFailEvents = {RoleEventType.START, RoleEventType.START_FAILURE};
+    RoleState[] roleStates = {RoleState.STARTING, RoleState.FAIL};
+    //a start failure moves the role from STARTING to FAIL
+    verifyTransitions(RoleState.INACTIVE, startFailEvents, roleStates);
+  } 
+
+  @Test
+  public void testStateTransitionActiveToFail(){
+    RoleEventType[] stopFailEvents = {RoleEventType.STOP, RoleEventType.STOP_FAILURE};
+    RoleState[] roleStates = {RoleState.STOPPING, RoleState.FAIL};
+    //a stop failure moves the role from STOPPING to FAIL
+    verifyTransitions(RoleState.ACTIVE, stopFailEvents, roleStates);
+  }
+  
+  private void verifyTransitionException(RoleState startState, RoleEventType[] roleEvents,
+      RoleState[] roleStates){
+    boolean foundException = false;
+    try{
+      verifyTransitions(startState, roleEvents, roleStates);
+    }catch(InvalidStateTransitonException e){
+      foundException = true;
+    }
+    assertTrue(foundException, "exception expected");
+  }
+  
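+  // Drives the role FSM through the given events, checking each expected state.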
+  private void verifyTransitions(RoleState startState, RoleEventType[] roleEvents,
+      RoleState[] roleStates) {
+    role.getStateMachine().setCurrentState(startState);
+    for(int i=0; i < roleEvents.length; i++){
+      RoleEventType rEvent = roleEvents[i];
+      role.handle(new RoleEvent(rEvent, role));
+      RoleState expectedRState = roleStates[i];
+      assertEquals(role.getRoleState(), expectedRState);
+    }
+    
+  }
+
+//  static class RoleImplTestModule extends AbstractModule{
+//
+//    @Override
+//    protected void configure() {
+//      bind(RoleFSM.class).to(RoleImpl.class);
+//      bind(ServiceFSM.class).to(ServiceImpl.class);
+//      bind(ClusterFSM.class).to(ClusterImpl.class);
+//
+//      install(new FactoryModuleBuilder()
+//      .implement(Cluster.class,Cluster.class)
+//      .build(ClusterFactory.class));
+//
+//    }
+//
+//  }
+
+}
diff --git a/controller/src/test/java/org/apache/ambari/resource/statemachine/TestServiceImpl.java b/controller/src/test/java/org/apache/ambari/resource/statemachine/TestServiceImpl.java
new file mode 100644
index 0000000..0b39d58
--- /dev/null
+++ b/controller/src/test/java/org/apache/ambari/resource/statemachine/TestServiceImpl.java
@@ -0,0 +1,272 @@
+package org.apache.ambari.resource.statemachine;
+
+import static org.mockito.Mockito.mock;
+import static org.testng.Assert.assertEquals;
+
+import java.io.IOException;
+import java.lang.reflect.Method;
+import java.util.Arrays;
+
+import org.testng.annotations.BeforeMethod;
+import org.testng.annotations.Test;
+
+import com.google.inject.Guice;
+/**
+ * Test state transitions within ServiceImpl. Does not test interaction between
+ * roles and service or cluster.
+ */
+public class TestServiceImpl {
+  
+
+  ServiceImpl service;
+  ServiceEventType [] startEvents = {
+      ServiceEventType.START, 
+      ServiceEventType.PRESTART_SUCCESS,
+      ServiceEventType.ROLE_START_SUCCESS,
+      ServiceEventType.AVAILABLE_CHECK_SUCCESS
+  };
+  
+  ServiceState [] startStates = {
+      ServiceState.PRESTART,
+      ServiceState.STARTING,
+      ServiceState.STARTED,
+      ServiceState.ACTIVE
+  };
+  
+  @BeforeMethod
+  public void setup() throws IOException{
+    Guice.createInjector(new TestModule());
+    String roles[] = {"role1"};
+    ClusterImpl clusterImpl = mock(ClusterImpl.class);
+    service = new ServiceImpl(roles, clusterImpl, "service1");  
+  }
+  
+  /**
+   * Test INACTIVE to ACTIVE transition with one role
+   * @throws Exception
+   */
+  @Test
+  public void testInactiveToActiveOneRole() throws Exception{
+    verifyTransitions(ServiceState.INACTIVE, startEvents, startStates);
+  }
+
+  /**
+   * Test INACTIVE to ACTIVE transition with two roles
+   * @throws Exception
+   */
+  @Test
+  public void testInactiveToActiveTwoRole() throws Exception{
+    String roles[] = {"role1", "role2"};
+    setRoles(roles);
+    ServiceEventType [] events = {
+        ServiceEventType.START, 
+        ServiceEventType.PRESTART_SUCCESS,
+        ServiceEventType.ROLE_START_SUCCESS, //1st role
+        ServiceEventType.ROLE_START_SUCCESS, //2nd role
+        ServiceEventType.AVAILABLE_CHECK_SUCCESS
+    };
+    ServiceState [] states = {
+        ServiceState.PRESTART,
+        ServiceState.STARTING,
+        ServiceState.STARTING,
+        ServiceState.STARTED,
+        ServiceState.ACTIVE
+    };
+    
+    verifyTransitions(ServiceState.INACTIVE, events, states);
+  }
+  
+  /**
+   * Test FAIL to ACTIVE transition with one roles
+   * @throws Exception
+   */
+  @Test
+  public void testFailToActiveOneRole() throws Exception{
+    verifyTransitions(ServiceState.FAIL, startEvents, startStates);
+  }
+  
+  /**
+   * Test start failure scenario 
+   * @throws Exception
+   */
+  @Test
+  public void testInactiveToFail1() throws Exception{
+
+    ServiceEventType[] events = truncateServiceEventArray(startEvents, 2,
+        ServiceEventType.PRESTART_FAILURE);
+
+    ServiceState[] states = getFailedStartSequence(2);
+    verifyTransitions(ServiceState.INACTIVE, events, states);
+    
+  }
+  
+
+  private ServiceState[] getFailedStartSequence(int i) {
+    return truncateServiceStateArray(startStates, i, ServiceState.FAIL);
+  }
+
+  /**
+   * Test start failure scenario 
+   * @throws Exception
+   */
+  @Test
+  public void testInactiveToFail2() throws Exception{
+
+    ServiceEventType[] events = truncateServiceEventArray(startEvents, 3,
+        ServiceEventType.ROLE_START_FAILURE);
+    ServiceState[] states = getFailedStartSequence(3);
+
+    verifyTransitions(ServiceState.INACTIVE, events, states);
+    
+  }
+  
+  
+  /**
+   * Test start failure scenario 
+   * @throws Exception
+   */
+  @Test
+  public void testInactiveToFail3() throws Exception{
+
+    ServiceEventType[] events = truncateServiceEventArray(startEvents, 4,
+        ServiceEventType.AVAILABLE_CHECK_FAILURE);
+    ServiceState[] states = getFailedStartSequence(4);
+
+    verifyTransitions(ServiceState.INACTIVE, events, states);
+    
+  }
+  
+  
+  /**
+   * Truncate startEvents to length n and replace the entry at position n-1
+   * with newState.
+   * @param startEvents
+   * @param n
+   * @param newState
+   * @return
+   */
+  private ServiceEventType[] truncateServiceEventArray(
+      ServiceEventType[] startEvents, int n, ServiceEventType newState) {
+    return truncateArrayAndReplaceLastState(startEvents, n, newState, ServiceEventType.class);
+  }
+
+  private ServiceState[] truncateServiceStateArray(
+      ServiceState[] states, int n, ServiceState newState) {
+    return truncateArrayAndReplaceLastState(states, n, newState, ServiceState.class);
+  }
+
+  ServiceEventType [] stopEvents = {
+      ServiceEventType.STOP, 
+      ServiceEventType.ROLE_STOP_SUCCESS,
+  };
+  
+  ServiceState [] stopStates = {
+      ServiceState.STOPPING,
+      ServiceState.INACTIVE
+  };
+   
+
+  
+  /**
+   * Test active to inactive transition with one role
+   * @throws Exception
+   */
+  @Test
+  public void testActiveToInactiveOneRole() throws Exception{
+    verifyTransitions(ServiceState.ACTIVE, stopEvents, stopStates);
+  }
+  
+  /**
+   * Test fail to inactive transition with one role
+   * @throws Exception
+   */
+  @Test
+  public void testFailToInactiveOneRole() throws Exception{
+    verifyTransitions(ServiceState.FAIL, stopEvents, stopStates);
+  }
+  
+  /**
+   * Test failure in stop role
+   * @throws Exception
+   */
+  @Test
+  public void testActiveStopFailure() throws Exception{
+    ServiceEventType [] stopEvents = {
+        ServiceEventType.STOP, 
+        ServiceEventType.ROLE_STOP_FAILURE,
+    };
+    
+    ServiceState [] stopStates = {
+        ServiceState.STOPPING,
+        ServiceState.FAIL
+    };
+    verifyTransitions(ServiceState.ACTIVE, stopEvents, stopStates);
+  }
+  
+  
+  /**
+   * Test active to inactive transition with two roles
+   * @throws Exception
+   */
+  @Test
+  public void testActiveToInactiveTwoRoles() throws Exception{
+    String roles[] = {"role1", "role2"};
+    ServiceEventType [] stopEvents = {
+        ServiceEventType.STOP, 
+        ServiceEventType.ROLE_STOP_SUCCESS,//1st role
+        ServiceEventType.ROLE_STOP_SUCCESS,//2nd role
+    };
+    
+    ServiceState [] stopStates = {
+        ServiceState.STOPPING,
+        ServiceState.STOPPING,
+        ServiceState.INACTIVE
+    };
+    
+    setRoles(roles);
+    verifyTransitions(ServiceState.ACTIVE, stopEvents, stopStates);
+  }
+  
+  
+  /**
+   * Truncate inpArr to length n and replace the element at position n-1
+   * with newState.
+   * @param inpArr array to truncate
+   * @param n length of the truncated array
+   * @param newState element to place at position n-1
+   * @param tclass element type token (currently unused)
+   * @return the truncated array
+   */
+  private <T> T[] truncateArrayAndReplaceLastState(
+      T[] inpArr,
+      int n, T newState, Class<?> tclass) {
+    T[] newEnumArr =  Arrays.copyOf(inpArr, n);
+    newEnumArr[n-1] = newState;
+    return newEnumArr;
+  }
+  
+  /**
+   * Call ServiceImpl.setRoles private function using reflection
+   * @param roles
+   * @throws Exception
+   */
+  private void setRoles(String[] roles) throws Exception {
+    Method method = ServiceImpl.class.getDeclaredMethod("setRoles", new Class[]{String[].class});
+    method.setAccessible(true);
+    method.invoke(service, new Object[]{roles});
+
+  }
+
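+  /**
+   * Drive the service state machine from startState through the given
+   * events, asserting the expected service state after each event.
+   * @param startState state to set on the state machine before replay
+   * @param serviceEvents events to deliver, in order
+   * @param serviceStates expected state after each corresponding event
+   */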
+  private void verifyTransitions(ServiceState startState, ServiceEventType[] serviceEvents,
+      ServiceState[] serviceStates) {
+    service.getStateMachine().setCurrentState(startState);
+    for(int i=0; i < serviceEvents.length; i++){
+      ServiceEventType rEvent = serviceEvents[i];
+      service.handle(new ServiceEvent(rEvent, service));
+      ServiceState expectedRState = serviceStates[i];
+      assertEquals(expectedRState, service.getServiceState());
+    }
+    
+  }
+  
+}
diff --git a/controller/conf/hms-controller-env.sh b/controller/src/test/java/org/apache/ambari/resource/statemachine/TestStateMachineInvokerImpl.java
similarity index 100%
rename from controller/conf/hms-controller-env.sh
rename to controller/src/test/java/org/apache/ambari/resource/statemachine/TestStateMachineInvokerImpl.java
diff --git a/examples/blueprint.json b/examples/blueprint.json
new file mode 100644
index 0000000..6825ca6
--- /dev/null
+++ b/examples/blueprint.json
@@ -0,0 +1,71 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+{
+  "stack": "ambari-1.0",
+  "parent": "site",
+  "parent-revision": "42",
+  "repositories": [
+    {
+      "location": "http://repos.hortonworks.com/yum",
+      "type": "yum"
+    },
+    {
+      "location": "http://incubator.apache.org/ambari/stack",
+      "type": "tar"
+    }
+  ],
+  "configuration": {
+    "hadoop-env": {
+      "HADOOP_CONF_DIR": "/etc/hadoop",
+      "HADOOP_NAMENODE_OPTS": "-Dsecurity.audit.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT",
+      "HADOOP_CLIENT_OPTS": "-Xmx128m"
+    },
+    "core-site": {
+       "fs.default.name" : "hdfs://${namenode}:8020/",
+       "hadoop.tmp.dir" : "/grid/0/hadoop/tmp",
+       "!hadoop.security.authentication" : "kerberos",
+    }
+  }
+  "components": {
+    "common": {
+      "version": "0.20.203.0"
+      "arch": "i386"
+    },
+    "hdfs": {
+      "user": "hdfs"
+    },
+    "mapreduce": {
+      "user": "mapred"
+    },
+    "hbase": {
+      "enabled": "false"
+    },
+    "pig": {
+      "version": "0.9.0"
+    }
+  },
+  "roles": {
+    "namenode": {
+      "configuration": {
+        "hdfs-site": {
+           "dfs.https.enable": "true"
+        }
+      }
+    }
+  }
+}
\ No newline at end of file
diff --git a/common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java b/examples/cluster.json
old mode 100755
new mode 100644
similarity index 70%
copy from common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java
copy to examples/cluster.json
index 5f23e2b..5b1aa89
--- a/common/src/main/java/org/apache/hms/common/util/ExceptionUtil.java
+++ b/examples/cluster.json
@@ -1,4 +1,4 @@
-/*
+/**
  * Licensed to the Apache Software Foundation (ASF) under one
  * or more contributor license agreements.  See the NOTICE file
  * distributed with this work for additional information
@@ -15,18 +15,16 @@
  * See the License for the specific language governing permissions and
  * limitations under the License.
  */
-
-package org.apache.hms.common.util;
-
-import java.io.PrintWriter;
-import java.io.StringWriter;
-
-public class ExceptionUtil {
-  public static String getStackTrace(Throwable t) {
-    StringWriter sw = new StringWriter();
-    PrintWriter pw = new PrintWriter(sw);
-    t.printStackTrace(pw);
-    pw.flush();
-    return sw.toString();
+{
+  "description": "alpha cluster",
+  "stack": "kryptonite",
+  "nodes": ["node000-999", "gateway0-1"],
+  "goal": "active",
+  "services": ["hdfs", "mapreduce"],
+  "roles": {
+    "namenode": ["node000"],
+    "jobtracker": ["node001"],
+    "secondary-namenode": ["node002"],
+    "gateway": ["gateway0-1"],
   }
 }
diff --git a/examples/create_hdfs_cluster.json b/examples/create_hdfs_cluster.json
new file mode 100644
index 0000000..f332280
--- /dev/null
+++ b/examples/create_hdfs_cluster.json
@@ -0,0 +1,14 @@
+{
+  "Name":"blue.dev.Cluster125",
+  "Description":"cluster125 - development cluster",
+  "StackName":"cluster125-stack",
+  "StackRevision":"0",
+  "GoalState":"ACTIVE",
+  "ActiveServices":["hdfs"],
+  "NodeRangeExpressions":["localhost"],
+  "RoleToNodesMap":{
+    "RoleToNodesMapEntries":[
+      {"RoleName":"namenode-role","NodeRangeExpressions":"localhost"}
+    ]
+  }
+}
diff --git a/pom.xml b/pom.xml
index 0be887c..ee924fd 100644
--- a/pom.xml
+++ b/pom.xml
@@ -18,48 +18,84 @@
 -->
 
 <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-	 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+    <name>ambari</name>
+    <description>
+      Ambari is a monitoring, administration and lifecycle management project
+      for Apache Hadoop clusters. Hadoop clusters require many inter-related
+      components that must be installed, configured, and managed across the
+      entire cluster. The stack of components that are currently supported by
+      Ambari includes HBase, HCatalog, HDFS, Hive, MapReduce, Pig, and 
+      Zookeeper.
+    </description>
+    <url>http://incubator.apache.org/ambari</url>
+    <modelVersion>4.0.0</modelVersion>
+
+    <groupId>org.apache.ambari</groupId>
+    <version>0.1.0-SNAPSHOT</version>
+    <artifactId>ambari</artifactId>
+    <packaging>pom</packaging>
+
     <properties>
         <buildtype>test</buildtype>
         <BUILD_NUMBER>${env.BUILD_NUMBER}</BUILD_NUMBER>
-        <VERSION>0.1</VERSION>
         <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> 
         <package.prefix>/usr</package.prefix>
-        <package.conf.dir>/etc/hms</package.conf.dir>
-        <package.log.dir>/var/log/hms</package.log.dir>
-        <package.pid.dir>/var/run/hms</package.pid.dir>
+        <package.conf.dir>/etc/ambari</package.conf.dir>
+        <package.log.dir>/var/log/ambari</package.log.dir>
+        <package.pid.dir>/var/run/ambari</package.pid.dir>
         <package.release>1</package.release>
-        <package.version>0.1.0</package.version>
-        <final.name>${project.artifactId}-${project.version}</final.name>
+        <package.type>tar.gz</package.type>
+        <ambari.version>0.1.0-SNAPSHOT</ambari.version>
+        <final.name>${project.artifactId}-${ambari.version}</final.name>
     </properties>
 
-    <name>Hadoop Management System</name>
-    <description>Hadoop Management System for the cloud</description>
-    <url>http://incubator.apache.org/hms</url>
-    <modelVersion>4.0.0</modelVersion>
-
-    <groupId>org.apache.hms</groupId>
-    <version>0.1.0</version>
-    <artifactId>hms</artifactId>
-    <packaging>pom</packaging>
-
-    <issueManagement>
-        <system>HMS JIRA</system>
-        <url>http://issues.apache.org/jira/browse/HMS</url>
-    </issueManagement>
+    <licenses>
+      <license>
+        <name>Apache 2</name>
+        <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
+        <distribution>repo</distribution>
+      </license>
+    </licenses>
 
     <scm>
-        <developerConnection>git@github.com:macroadster/hms.git</developerConnection>
-        <url>git@github.com:macroadster/hms.git</url>
-        <tag></tag>
+      <connection>scm:svn:http://svn.apache.org/repos/asf/incubator/ambari</connection>
+      <developerConnection>scm:svn:https://svn.apache.org/repos/asf/incubator/ambari</developerConnection>
+      <tag>HEAD</tag>
+      <url>http://svn.apache.org/repos/asf/incubator/ambari</url>
     </scm>
 
+    <issueManagement>
+        <system>Jira</system>
+        <url>http://issues.apache.org/jira/browse/AMBARI</url>
+    </issueManagement>
+
     <mailingLists>
         <mailingList>
-            <name>hms</name>
-            <subscribe></subscribe>
-            <unsubscribe></unsubscribe>
-            <post>mailto:general@hms.apache.com</post>
+            <name>User list</name>
+            <subscribe>mailto:ambari-user-subscribe@incubator.apache.org
+            </subscribe>
+            <unsubscribe>mailto:ambari-user-unsubscribe@incubator.apache.org
+            </unsubscribe>
+            <post>mailto:ambari-user@incubator.apache.org</post>
+            <archive></archive>
+        </mailingList>
+        <mailingList>
+            <name>Development list</name>
+            <subscribe>mailto:ambari-dev-subscribe@incubator.apache.org
+            </subscribe>
+            <unsubscribe>mailto:ambari-dev-unsubscribe@incubator.apache.org
+            </unsubscribe>
+            <post>mailto:ambari-dev@incubator.apache.org</post>
+            <archive></archive>
+        </mailingList>
+        <mailingList>
+            <name>Commit list</name>
+            <subscribe>mailto:ambari-commits-subscribe@incubator.apache.org
+            </subscribe>
+            <unsubscribe>mailto:ambari-commits-unsubscribe@incubator.apache.org
+            </unsubscribe>
+            <post>mailto:ambari-commits@incubator.apache.org</post>
             <archive></archive>
         </mailingList>
     </mailingLists>
@@ -78,111 +114,68 @@
                         </exclusion>
                 </exclusions>
         </dependency>
-        <dependency>
-                <groupId>org.apache.zookeeper</groupId>
-                <artifactId>zookeeper</artifactId>
-                <version>3.3.2</version>
-                <!-- <scope>provided</scope> -->
-                <exclusions>
-                  <exclusion>
-                    <groupId>log4j</groupId>
-                    <artifactId>log4j</artifactId>
-                  </exclusion>
-                </exclusions>
-        </dependency>
-        <dependency>
-                <groupId>org.mortbay.jetty</groupId>
-                <artifactId>jetty</artifactId>
-                <version>6.1.26</version>
-        </dependency>
-        <dependency>
-                <groupId>javax.jmdns</groupId>
-                <artifactId>jmdns</artifactId>
-                <version>3.4.0</version>
-        </dependency>
-        <dependency>
-                <groupId>commons-logging</groupId>
-                <artifactId>commons-logging</artifactId>
-                <version>1.1.1</version>
-        </dependency>
-        <dependency>
-                <groupId>commons-codec</groupId>
-                <artifactId>commons-codec</artifactId>
-                <version>1.3</version>
-                <scope>compile</scope>
-        </dependency>
-        <dependency>
-                <groupId>commons-lang</groupId>
-                <artifactId>commons-lang</artifactId>
-                <version>2.4</version>
-        </dependency>
-        <dependency>
-                <groupId>commons-httpclient</groupId>
-                <artifactId>commons-httpclient</artifactId>
-                <version>3.0.1</version>
-        </dependency>
-        <dependency>
-                <groupId>javax.servlet</groupId>
-                <artifactId>servlet-api</artifactId>
-                <version>2.5</version>
-                <scope>provided</scope>
-        </dependency>
-        <dependency>
-                <groupId>log4j</groupId>
-                <artifactId>log4j</artifactId>
-                <version>1.2.15</version>
-                <exclusions>
-                  <exclusion>
-                    <groupId>javax.mail</groupId>
-                    <artifactId>mail</artifactId>
-                  </exclusion>
-                  <exclusion>
-                    <groupId>javax.jms</groupId>
-                    <artifactId>jms</artifactId>
-                  </exclusion>
-                  <exclusion>
-                    <groupId>com.sun.jdmk</groupId>
-                    <artifactId>jmxtools</artifactId>
-                  </exclusion>
-                  <exclusion>
-                    <groupId>com.sun.jmx</groupId>
-                    <artifactId>jmxri</artifactId>
-                  </exclusion>
-                </exclusions>
-        </dependency>
-        <dependency>
-                <groupId>com.sun.jersey</groupId>
-                <artifactId>jersey-json</artifactId>
-                <version>1.6</version>
-        </dependency>
-        <dependency>
-                <groupId>com.sun.jersey</groupId>
-                <artifactId>jersey-server</artifactId>
-                <version>1.6</version>
-        </dependency>
-
-        <dependency>
-                <groupId>com.sun.jersey</groupId>
-                <artifactId>jersey-client</artifactId>
-                <version>1.6</version>
-        </dependency>
     </dependencies>
 
     <developers>
         <developer>
-            <id>eyang</id>
-            <name>Eric Yang</name>
-            <email>eric818@gmail.com</email>
-            <timezone>(GMT-08:00) Pacific Time(US &amp; Canada)</timezone>
+            <id>ddas</id>
+            <name>Devaraj Das</name>
+            <email>ddas@hortonworks.com</email>
+            <timezone>-8</timezone>
             <roles>
                 <role></role>
             </roles>
         </developer>
         <developer>
-            <id>kan</id>
+            <id>berndf</id>
+            <name>Bernd Fondermann</name>
+            <email>berndf@apache.org</email>
+            <timezone>+1</timezone>
+            <roles>
+                <role></role>
+            </roles>
+        </developer>
+        <developer>
+            <id>vgogate</id>
+            <name>Vitthal Suhas Gogate</name>
+            <email>vgogate@apache.org</email>
+            <timezone>-8</timezone>
+            <roles>
+                <role></role>
+            </roles>
+        </developer>
+        <developer>
+            <id>omalley</id>
+            <name>Owen O'Malley</name>
+            <email>omalley@apache.org</email>
+            <timezone>-8</timezone>
+            <roles>
+                <role></role>
+            </roles>
+        </developer>
+        <developer>
+            <id>jagane</id>
+            <name>Jagane Sundar</name>
+            <email>jagane@apache.org</email>
+            <timezone>-8</timezone>
+            <roles>
+                <role></role>
+            </roles>
+        </developer>
+        <developer>
+            <id>eyang</id>
+            <name>Eric Yang</name>
+            <email>eyang@apache.org</email>
+            <timezone>-8</timezone>
+            <roles>
+                <role></role>
+            </roles>
+        </developer>
+        <developer>
+            <id>kzhang</id>
             <name>Kan Zhang</name>
             <email>kanzhangmail@yahoo.com</email>
-            <timezone>(GMT-08:00) Pacific Time(US &amp; Canada)</timezone>
+            <timezone>-8</timezone>
             <roles>
                 <role></role>
             </roles>
@@ -195,121 +188,177 @@
     </organization>
 
     <build>
-        <resources>
-            <resource>
-                <directory>src/main/resources</directory>
-                <filtering>true</filtering>
-            </resource>
-        </resources>
+      <resources>
+        <resource>
+          <directory>src/main/resources</directory>
+          <filtering>true</filtering>
+        </resource>
+      </resources>
+      <plugins>
+        <plugin>
+          <groupId>org.apache.maven.plugins</groupId>
+          <artifactId>maven-javadoc-plugin</artifactId>
+          <version>2.8</version>
+          <configuration>
+            <doctitle>Ambari API for ${project.name} ${project.version}</doctitle>
+          </configuration>
+          <executions>
+            <execution>
+              <id>aggregate</id>
+              <goals>
+                <goal>aggregate</goal>
+              </goals>
+              <phase>site</phase>
+              <configuration>
+                <doctitle>Ambari API for ${project.name} ${project.version}</doctitle>
+              </configuration>
+            </execution>
+          </executions>
+        </plugin>
+        <plugin>
+          <groupId>org.apache.maven.plugins</groupId>
+          <artifactId>maven-antrun-plugin</artifactId>
+          <version>1.4</version>
+          <executions>
+            <execution>
+              <phase>validate</phase>
+              <configuration>
+                <tasks name="setup">
+                  <mkdir dir="${basedir}/target"/>
+                  <echo message="${project.version}" file="${basedir}/target/VERSION"/>
+                  <mkdir dir="${basedir}/target/clover"/>
+                  <chmod dir="${basedir}/target/clover" perm="a+w" />
+                </tasks>
+              </configuration>
+              <goals>
+                <goal>run</goal>
+              </goals>
+            </execution>
+          </executions>
+        </plugin>
+      </plugins>
+      <pluginManagement>
         <plugins>
-            <plugin>
-                <groupId>org.apache.maven.plugins</groupId>
-                <artifactId>maven-resources-plugin</artifactId>
-                <version>2.4.3</version>
-                <configuration>
-                    <encoding>UTF-8</encoding>
-                </configuration>
-            </plugin>
-            <plugin>
-                <groupId>org.apache.maven.plugins</groupId>
-                <artifactId>maven-jar-plugin</artifactId>
-                <version>2.3.1</version>
-                <executions>
-                    <execution>
-                        <goals>
-                            <goal>test-jar</goal>
-                        </goals>
-                    </execution>
-                </executions>
-            </plugin>
-            <plugin>
-                <groupId>org.apache.maven.plugins</groupId>
-                <artifactId>maven-compiler-plugin</artifactId>
-                <version>2.3.2</version>
-                <configuration>
-                    <compilerVersion>1.5</compilerVersion>
-                    <source>1.6</source>
-                    <target>1.6</target>
-                </configuration>
-            </plugin>
-
-            <plugin>
-                <groupId>org.apache.maven.plugins</groupId>
-                <artifactId>maven-source-plugin</artifactId>
-                <version>2.1.1</version>
-                <executions>
-                    <execution>
-                        <phase>prepare-package</phase>
-                        <goals>
-                            <goal>jar-no-fork</goal>
-                        </goals>
-                    </execution>
-                </executions>
-            </plugin>
-            <plugin>
-                <groupId>org.apache.maven.plugins</groupId>
-                <artifactId>maven-surefire-plugin</artifactId>
-                <version>2.5</version>
-                <configuration>
-                    <phase>test</phase>
-                    <argLine>-Xmx1024m</argLine>
-                    <includes>
-                        <include>**/Test*.java</include>
-                    </includes>
-                    <excludes>
-                        <exclude>**/IntegrationTest*.java</exclude>
-                        <exclude>**/PerformanceTest*.java</exclude>
-                    </excludes>
-                    <skipTests>${skipTests}</skipTests>
-                    <reportsDirectory>${project.build.directory}/test-reports</reportsDirectory>
-                    <systemProperties>
-                        <property>
-                            <name>HMS_LOG_DIR</name>
-                            <value>${project.build.directory}/logs</value>
-                        </property>
-                    </systemProperties>
-                </configuration>
-            </plugin>
-            <plugin>
-                <artifactId>maven-assembly-plugin</artifactId>
-                <configuration>
-                    <tarLongFileMode>gnu</tarLongFileMode>
-                    <descriptors>
-                        <descriptor>src/packages/tarball/all.xml</descriptor>
-                    </descriptors>
-                </configuration>
-                <executions>
-                    <execution>
-                        <id>build-tarball</id>
-                        <phase>package</phase>
-                        <goals>
-                            <goal>single</goal>
-                        </goals>
-                    </execution>
-                </executions>
-            </plugin>
-            <plugin>
-                <groupId>org.apache.maven.plugins</groupId>
-                <artifactId>maven-antrun-plugin</artifactId>
-                <version>1.4</version>
-                <executions>
-                    <execution>
-                        <phase>validate</phase>
-                        <configuration>
-                            <tasks name="setup">
-                                <mkdir dir="${basedir}/target"/>
-                                <echo message="0.1.0" file="${basedir}/target/VERSION"/>
-                                <mkdir dir="${basedir}/target/clover"/>
-                                <chmod dir="${basedir}/target/clover" perm="a+w" />
-                            </tasks>
-                        </configuration>
-                        <goals>
-                            <goal>run</goal>
-                        </goals>
-                    </execution>
-                </executions>
-            </plugin>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-javadoc-plugin</artifactId>
+            <version>2.8</version>
+          </plugin>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-resources-plugin</artifactId>
+            <version>2.4.3</version>
+            <configuration>
+              <encoding>UTF-8</encoding>
+            </configuration>
+          </plugin>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-jar-plugin</artifactId>
+            <version>2.3.2</version>
+            <executions>
+              <execution>
+                <goals>
+                  <goal>test-jar</goal>
+                </goals>
+              </execution>
+            </executions>
+          </plugin>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-compiler-plugin</artifactId>
+            <version>2.3.2</version>
+            <configuration>
+              <compilerVersion>1.5</compilerVersion>
+              <source>1.6</source>
+              <target>1.6</target>
+            </configuration>
+          </plugin>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-source-plugin</artifactId>
+            <version>2.1.1</version>
+            <executions>
+              <execution>
+                <phase>prepare-package</phase>
+                <goals>
+                  <goal>jar-no-fork</goal>
+                </goals>
+              </execution>
+            </executions>
+          </plugin>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-surefire-plugin</artifactId>
+            <version>2.5</version>
+            <configuration>
+              <phase>test</phase>
+              <argLine>-Xmx1024m</argLine>
+              <includes>
+                <include>**/Test*.java</include>
+              </includes>
+              <excludes>
+                <exclude>**/IntegrationTest*.java</exclude>
+                <exclude>**/PerformanceTest*.java</exclude>
+              </excludes>
+              <skipTests>${skipTests}</skipTests>
+              <reportsDirectory>${project.build.directory}/test-reports</reportsDirectory>
+              <systemProperties>
+                <property>
+                  <name>AMBARI_LOG_DIR</name>
+                  <value>${project.build.directory}/logs</value>
+                </property>
+              </systemProperties>
+            </configuration>
+          </plugin>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-site-plugin</artifactId>
+            <version>3.0</version>
+          </plugin>
+          <plugin>
+            <groupId>org.apache.rat</groupId>
+            <artifactId>apache-rat-plugin</artifactId>
+            <version>0.7</version>
+            <executions>
+              <execution>
+                <phase>verify</phase>
+                <goals>
+                  <goal>check</goal>
+                </goals>
+              </execution>
+            </executions>
+            <configuration>
+              <numUnapprovedLicenses>0</numUnapprovedLicenses>
+              <excludes>
+                <!-- notice files -->
+                <exclude>CHANGES.txt</exclude>
+                <!-- generated files -->
+                <exclude>**/target/**</exclude>
+                <exclude>**/.classpath</exclude>
+                <exclude>**/.project</exclude>
+                <exclude>**/.settings/**</exclude>
+                <!-- bsd/gpl dual licensed files -->
+                <exclude>src/main/webapps/css/smoothness/jquery-ui-1.8.13.custom.css</exclude>
+                <exclude>src/main/webapps/js/jquery.dataTables.min.js</exclude>
+                <exclude>**/application-grammars.xml</exclude>
+                <exclude>**/wadl.xsl</exclude>
+              </excludes>
+            </configuration>
+          </plugin>
+          <plugin>
+            <groupId>org.apache.maven.wagon</groupId>
+            <artifactId>wagon-ssh-external</artifactId>
+            <version>1.0</version>
+          </plugin>
         </plugins>
+      </pluginManagement>
+      <extensions>
+        <extension>
+          <groupId>org.apache.maven.wagon</groupId>
+          <artifactId>wagon-ssh-external</artifactId>
+        </extension>
+      </extensions>
     </build>
 
     <profiles>
@@ -345,9 +394,9 @@
                         <version>2.6.2</version>
                         <configuration>
                             <licenseLocation>conf/clover/clover.license</licenseLocation>
-                            <snapshot>/tmp/hms_clover</snapshot>
-                            <cloverDatabase>/tmp/hms</cloverDatabase>
-                            <cloverMergeDatabase>/tmp/hms</cloverMergeDatabase>
+                            <snapshot>/tmp/ambari_clover</snapshot>
+                            <cloverDatabase>/tmp/ambari</cloverDatabase>
+                            <cloverMergeDatabase>/tmp/ambari</cloverMergeDatabase>
                         </configuration>
                         <executions>
                             <execution>
@@ -394,7 +443,7 @@
                                     <argLine>-Xmx1024m -Djava.library.path=.
                                     </argLine>
                                     <includes>
-                                        <include>**/*Test.java</include>
+                                        <include>**/Test*.java</include>
                                     </includes>
                                     <excludes>
                                         <exclude>**/IntegrationTest.java</exclude>
@@ -428,7 +477,7 @@
                                         <include>**/IntegrationTest*.java</include>
                                     </includes>
                                     <excludes>
-                                        <exclude>**/*Test.java</exclude>
+                                        <exclude>**/Test*.java</exclude>
                                         <exclude>**/PerformanceTest*.java</exclude>
                                     </excludes>
                                     <skipTests>false</skipTests>
@@ -489,11 +538,9 @@
               </property>
             </activation>
             <modules>
-              <module>common</module>
               <module>agent</module>
-              <module>controller</module>
               <module>client</module>
-              <module>beacon</module>
+              <module>controller</module>
             </modules>
         </profile>
 
@@ -514,6 +561,54 @@
         <profile>
             <id>src</id>
             <build>
+              <plugins>
+                <plugin>
+                  <artifactId>maven-assembly-plugin</artifactId>
+                  <configuration>
+                    <tarLongFileMode>gnu</tarLongFileMode>
+                    <descriptors>
+                      <descriptor>src/packages/tarball/source.xml</descriptor>
+                    </descriptors>
+                    <finalName>${project.artifactId}-${project.version}-source</finalName>
+                  </configuration>
+                  <executions>
+                    <execution>
+                      <id>build-source-tarball</id>
+                      <phase>package</phase>
+                      <goals>
+                        <goal>single</goal>
+                      </goals>
+                    </execution>
+                  </executions>
+                </plugin>
+              </plugins>
+            </build>
+        </profile>
+
+        <profile>
+            <id>binary</id>
+            <build>
+              <plugins>
+                <plugin>
+                  <artifactId>maven-assembly-plugin</artifactId>
+                  <configuration>
+                    <tarLongFileMode>gnu</tarLongFileMode>
+                    <descriptors>
+                      <descriptor>src/packages/tarball/binary.xml</descriptor>
+                    </descriptors>
+                    <finalName>${project.artifactId}-${project.version}</finalName>
+                  </configuration>
+                  <executions>
+                    <execution>
+                      <id>build-tarball</id>
+                      <phase>package</phase>
+                      <goals>
+                        <goal>single</goal>
+                      </goals>
+                    </execution>
+                  </executions>
+                </plugin>
+              </plugins>
             </build>
         </profile>
 
@@ -580,113 +675,81 @@
             </build>
         </profile>
 
-        <profile>
-            <id>docs</id>
-            <activation />
-            <build>
-                <plugins>
-                    <plugin>
-                        <groupId>org.apache.maven.plugins</groupId>
-                        <artifactId>maven-javadoc-plugin</artifactId>
-                        <executions>
-                            <execution>
-                                <phase>package</phase>
-                                <goals>
-                                    <goal>jar</goal>
-                                </goals>
-                            </execution>
-                        </executions>
-                    </plugin>
-                    <plugin>
-                        <groupId>org.apache.maven.plugins</groupId>
-                        <artifactId>maven-site-plugin</artifactId>
-                        <executions>
-                            <execution>
-                                <phase>package</phase>
-                                <goals>
-                                    <goal>jar</goal>
-                                </goals>
-                            </execution>
-                        </executions>
-                    </plugin>
-                </plugins>
-            </build>
-        </profile>
-
-        <profile>
-            <id>report</id>
-            <activation />
-            <reporting>
-                <plugins>
-                    <plugin>
-                        <artifactId>maven-javadoc-plugin</artifactId>
-                        <configuration>
-                            <links>
-                                <link>http://java.sun.com/j2se/1.5.0/docs/api/</link>
-                            </links>
-                            <doclet>org.umlgraph.doclet.UmlGraphDoc</doclet>
-                                <docletArtifact>
-                                    <groupId>org.umlgraph</groupId>
-                                    <artifactId>doclet</artifactId>
-                                    <version>5.1</version>
-                                </docletArtifact>
-                                <additionalparam>-inferrel -inferdep -useimports -postfixpackage -nodefontsize 9 -nodefontpackagesize 7 -hide java.* -hide org.*</additionalparam>
-                                <destDir>withUML</destDir>
-                                <show>public</show>
-                        </configuration>
-                    </plugin>
-                    <plugin>
-                        <artifactId>maven-jxr-plugin</artifactId>
-                    </plugin>
-<!--                    <plugin>
-                        <groupId>com.atlassian.maven.plugins</groupId>
-                        <artifactId>maven-clover2-plugin</artifactId>
-                        <version>2.6.2</version>
-                        <configuration>
-                            <licenseLocation>conf/clover/clover.license</licenseLocation>
-                            <cloverDatabase>/tmp/cc</cloverDatabase>
-                            <cloverMergeDatabase>/tmp/hms</cloverMergeDatabase>
-                        </configuration>
-                    </plugin> -->
-                    <plugin>
-                        <artifactId>maven-pmd-plugin</artifactId>
-                        <reportSets>
-                            <reportSet>
-                                <reports>
-                                    <report>pmd</report>
-                                    <report>cpd</report>
-                                </reports>
-                            </reportSet>
-                        </reportSets>
-                        <configuration>
-                            <targetJdk>1.5</targetJdk>
-                        </configuration>
-                    </plugin>
-                    <plugin>
-                        <groupId>org.codehaus.mojo</groupId>
-                        <artifactId>findbugs-maven-plugin</artifactId>
-                        <configuration>
-                            <threshold>Normal</threshold>
-                            <effort>Max</effort>
-                        </configuration>
-                    </plugin>
-                    <plugin>
-                        <groupId>org.codehaus.mojo</groupId>
-                        <artifactId>javancss-maven-plugin</artifactId>
-                    </plugin>
-                    <plugin>
-                        <groupId>org.codehaus.mojo</groupId>
-                        <artifactId>jdepend-maven-plugin</artifactId>
-                    </plugin>
-                    <plugin>
-                        <groupId>org.codehaus.mojo</groupId>
-                        <artifactId>taglist-maven-plugin</artifactId>
-                    </plugin>
-                </plugins>
-            </reporting>
-        </profile>
     </profiles>
 
+    <reporting>
+        <plugins>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-project-info-reports-plugin</artifactId>
+                <version>2.4</version>
+                <configuration>
+                    <dependencyLocationsEnabled>false</dependencyLocationsEnabled>
+                </configuration>
+            </plugin>
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-javadoc-plugin</artifactId>
+                <version>2.7</version>
+                <reportSets>
+                    <reportSet>
+                        <id>javadoc</id>
+                        <configuration>
+                            <aggregate>true</aggregate>
+                            <doctitle>Ambari API for ${project.name} ${project.version}</doctitle>
+                        </configuration>
+                        <reports>
+                            <report>javadoc</report>
+                        </reports>
+                    </reportSet>
+                    <reportSet>
+                        <id>aggregate</id>
+                        <reports>
+                            <report>aggregate</report>
+                        </reports>
+                    </reportSet>
+                </reportSets>
+            </plugin>
+            <plugin>
+                <artifactId>maven-jxr-plugin</artifactId>
+            </plugin>
+            <plugin>
+                <artifactId>maven-pmd-plugin</artifactId>
+                <reportSets>
+                    <reportSet>
+                        <reports>
+                            <report>pmd</report>
+                            <report>cpd</report>
+                        </reports>
+                    </reportSet>
+                </reportSets>
+                <configuration>
+                    <targetJdk>1.5</targetJdk>
+                </configuration>
+            </plugin>
+            <plugin>
+                <groupId>org.codehaus.mojo</groupId>
+                <artifactId>findbugs-maven-plugin</artifactId>
+                <configuration>
+                    <threshold>Normal</threshold>
+                    <effort>Max</effort>
+                </configuration>
+            </plugin>
+            <plugin>
+                <groupId>org.codehaus.mojo</groupId>
+                <artifactId>javancss-maven-plugin</artifactId>
+            </plugin>
+            <plugin>
+                <groupId>org.codehaus.mojo</groupId>
+                <artifactId>jdepend-maven-plugin</artifactId>
+            </plugin>
+            <plugin>
+                <groupId>org.codehaus.mojo</groupId>
+                <artifactId>taglist-maven-plugin</artifactId>
+            </plugin>
+        </plugins>
+    </reporting>
+
     <repositories>
         <repository>
             <id>maven2-repository.dev.java.net</id>
@@ -714,4 +777,12 @@
         </dependencies>
     </dependencyManagement>
 
+  <distributionManagement>
+    <site>
+      <id>apache-website</id>
+      <name>Apache website</name>
+      <url>scpexe://people.apache.org/www/incubator.apache.org/ambari</url>
+    </site>
+  </distributionManagement>
+
 </project>
diff --git a/src/main/resources/log4j.properties b/src/main/resources/log4j.properties
deleted file mode 100644
index 3fe45a5..0000000
--- a/src/main/resources/log4j.properties
+++ /dev/null
@@ -1,28 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-log4j.rootLogger=INFO, R
-log4j.appender.R=org.apache.log4j.RollingFileAppender
-log4j.appender.R.File=${HMS_LOG_DIR}/hms.log
-log4j.appender.R.MaxFileSize=10MB
-log4j.appender.R.MaxBackupIndex=10
-log4j.appender.R.layout=org.apache.log4j.PatternLayout
-log4j.appender.R.layout.ConversionPattern=%d{ISO8601} %p %t %c{1} - %m%n
-
-log4j.appender.stdout=org.apache.log4j.ConsoleAppender
-log4j.appender.stdout.follow=true
-log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
-log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %p %t %c{1} - %m%n
-
diff --git a/src/packages/tarball/all.xml b/src/packages/tarball/all.xml
index e98cbe6..53bf219 100644
--- a/src/packages/tarball/all.xml
+++ b/src/packages/tarball/all.xml
@@ -1,4 +1,20 @@
 <?xml version="1.0"?>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
 <assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.1"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.1 http://maven.apache.org/xsd/assembly-1.1.1.xsd">
diff --git a/beacon/src/packages/tarball/all.xml b/src/packages/tarball/binary.xml
old mode 100755
new mode 100644
similarity index 66%
copy from beacon/src/packages/tarball/all.xml
copy to src/packages/tarball/binary.xml
index 24c0cd7..53bf219
--- a/beacon/src/packages/tarball/all.xml
+++ b/src/packages/tarball/binary.xml
@@ -1,4 +1,20 @@
 <?xml version="1.0"?>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
 <assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.1"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.1 http://maven.apache.org/xsd/assembly-1.1.1.xsd">
@@ -23,34 +39,21 @@
       <directory>src</directory>
     </fileSet>
     <fileSet>
-      <directory>src/main/webapps</directory>
-      <outputDirectory>webapps</outputDirectory>
-    </fileSet>
-    <fileSet>
-      <directory>src/main/resources</directory>
-      <outputDirectory>var/run</outputDirectory>
-      <directoryMode>0755</directoryMode>
-      <excludes>
-        <exclude>*</exclude>
-      </excludes>
-    </fileSet>
-    <fileSet>
       <directory>conf</directory>
     </fileSet>
     <fileSet>
-      <directory>../bin</directory>
+      <directory>bin</directory>
       <outputDirectory>bin</outputDirectory>
       <fileMode>755</fileMode>
     </fileSet>
-    <fileSet>
+<!--    <fileSet>
       <directory>target</directory>
       <outputDirectory>/</outputDirectory>
       <includes>
           <include>${artifactId}-${project.version}.jar</include>
           <include>${artifactId}-${project.version}-tests.jar</include>
-          <include>VERSION</include>
       </includes>
-    </fileSet>
+    </fileSet> -->
     <fileSet>
       <directory>target/site</directory>
       <outputDirectory>docs</outputDirectory>
@@ -66,6 +69,7 @@
   </fileSets>
   <dependencySets>
     <dependencySet>
+      <useProjectArtifact>false</useProjectArtifact>
       <outputDirectory>/lib</outputDirectory>
       <unpack>false</unpack>
       <scope>runtime</scope>
diff --git a/beacon/src/packages/tarball/all.xml b/src/packages/tarball/source.xml
old mode 100755
new mode 100644
similarity index 66%
copy from beacon/src/packages/tarball/all.xml
copy to src/packages/tarball/source.xml
index 24c0cd7..53bf219
--- a/beacon/src/packages/tarball/all.xml
+++ b/src/packages/tarball/source.xml
@@ -1,4 +1,20 @@
 <?xml version="1.0"?>
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
 <assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.1"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.1 http://maven.apache.org/xsd/assembly-1.1.1.xsd">
@@ -23,34 +39,21 @@
       <directory>src</directory>
     </fileSet>
     <fileSet>
-      <directory>src/main/webapps</directory>
-      <outputDirectory>webapps</outputDirectory>
-    </fileSet>
-    <fileSet>
-      <directory>src/main/resources</directory>
-      <outputDirectory>var/run</outputDirectory>
-      <directoryMode>0755</directoryMode>
-      <excludes>
-        <exclude>*</exclude>
-      </excludes>
-    </fileSet>
-    <fileSet>
       <directory>conf</directory>
     </fileSet>
     <fileSet>
-      <directory>../bin</directory>
+      <directory>bin</directory>
       <outputDirectory>bin</outputDirectory>
       <fileMode>755</fileMode>
     </fileSet>
-    <fileSet>
+<!--    <fileSet>
       <directory>target</directory>
       <outputDirectory>/</outputDirectory>
       <includes>
           <include>${artifactId}-${project.version}.jar</include>
           <include>${artifactId}-${project.version}-tests.jar</include>
-          <include>VERSION</include>
       </includes>
-    </fileSet>
+    </fileSet> -->
     <fileSet>
       <directory>target/site</directory>
       <outputDirectory>docs</outputDirectory>
@@ -66,6 +69,7 @@
   </fileSets>
   <dependencySets>
     <dependencySet>
+      <useProjectArtifact>false</useProjectArtifact>
       <outputDirectory>/lib</outputDirectory>
       <unpack>false</unpack>
       <scope>runtime</scope>
diff --git a/src/site/apt/cli.apt b/src/site/apt/cli.apt
new file mode 100644
index 0000000..7ca63ce
--- /dev/null
+++ b/src/site/apt/cli.apt
@@ -0,0 +1,535 @@
+~~ Licensed to the Apache Software Foundation (ASF) under one or more
+~~ contributor license agreements.  See the NOTICE file distributed with
+~~ this work for additional information regarding copyright ownership.
+~~ The ASF licenses this file to You under the Apache License, Version 2.0
+~~ (the "License"); you may not use this file except in compliance with
+~~ the License.  You may obtain a copy of the License at
+~~
+~~     http://www.apache.org/licenses/LICENSE-2.0
+~~
+~~ Unless required by applicable law or agreed to in writing, software
+~~ distributed under the License is distributed on an "AS IS" BASIS,
+~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+~~ See the License for the specific language governing permissions and
+~~ limitations under the License.
+~~
+Command Line Interface
+
+  The Ambari client implements a convenient command line interface for
+  administrators to manage the full life cycle of Hadoop clusters. The
+  CLI is a thin client implemented on top of the Ambari REST APIs. By
+  design, there is a close mapping between the CLI and the REST APIs,
+  and the logic is implemented on the server.
+
+  The general syntax of the CLI commands is as follows:
+
+  * Ambari commands are divided into multiple command categories
+  typically named after the resources they operate on e.g. cluster,
+  node, stack etc.
+
+  * Each command starts with a command category, followed by one of the
+  commands in that category, followed by one or more of the options
+  available for that command, e.g.
+
+    * ambari \[command_category\] \[command_name\] \[command_options\]
+
+  * Command options can follow in any order. Options are prefixed with "-". 
+
+  * Each option has at most one value associated with it. The value
+  is either a string, e.g. -name "MyCluster", or a key=value pair,
+  e.g. -role rolename="hostname1, hostname[10-20]"
+
+  * Certain command options can be repeated multiple times on the
+  command line; they are marked accordingly with the tag
+  <<[REPEATABLE]>>.
+
+  * All required options are marked <<[REQUIRED]>>.
+
+  * Generic commands do not have any category associated with them
+    e.g. help, version
+
+  []
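+
+  For illustration, a command following this syntax might look as
+  follows (the option shown is one of the cluster command options
+  documented below):
+
+  * ambari cluster list -verbose
+
+  []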
+
+Ambari Commands
+
+  * <<Cluster commands>>
+
+    * ambari cluster \{create | update | list | get | delete | rename | stack | 
+    nodes\}
+
+  * <<Stack commands>>
+
+    * ambari stack \{create | update | list | get | delete | history \}
+
+  * <<Node commands>>
+
+    * ambari nodes \{list | get\}
+
+  * <<Configuration commands>>
+
+    * ambari configure
+
+    * ambari add-user
+
+  * <<Server commands>>
+
+    * ambari controller
+
+    * ambari agent
+
+  * <<Generic commands>>
+
+    * ambari help
+
+    * ambari version
+
+
+Cluster Commands
+
+  * <<ambari cluster  create>> 
+
+    * Submit a cluster creation request to Ambari. It may take some
+    time to bring the cluster to the desired state, so either check
+    the cluster state via the <<list>> command or use the <<-wait>>
+    option to wait for the cluster to reach the goal state. An example
+    invocation follows the option list.
+
+    * <<Options:>>
+
+      * <<-name>> [ NAME ] <<[REQUIRED]>>
+
+      * <<-stack>> [ STACK NAME ]  <<[REQUIRED]>>
+
+        * Name of the user-defined stack that defines the cluster configuration
+
+      * <<-stack_revision>> [ INTEGER ]
+
+        * Revision of the stack. If not specified, the latest revision is used.
+
+      * <<-stack-file>> [ FILENAME ]
+
+        * Update the stack with the same name as the cluster using the
+        given file, and use the updated stack for creating the new
+        cluster. This option is mutually exclusive with <<-stack>> and
+        <<-stack_revision>>.
+
+      * <<-nodes>> [node_range_exp1, node_range_exp2]  <<[REQUIRED]>>
+
+        * Specify the range of nodes associated with the cluster. One
+        or more node range expressions can be specified, separated by
+        commas. If the user does not bind roles to specific nodes
+        using the <<-role>> option, Ambari will assign roles to nodes
+        based on the nodes' attributes. To view the role-to-node
+        association generated by Ambari, use the <<-dry_run>> option.
+
+      * <<-desc>> [ DESCRIPTION ]
+
+        * If not specified, Ambari will add a default description.
+
+      * <<-goalstate>> [ACTIVE]
+
+        * The default is INACTIVE.
+
+      * <<-components>> [component-1, component-2, component-3]
+
+        * By default, all components are activated upon cluster
+        activation. If the user specifies specific components, only
+        those are activated upon cluster activation.
+
+      * <<-role>>  rolename=[node_range_exp1, node_range_exp2, …] 
+      <<[REPEATABLE]>>
+
+        * One or more -role options can be used to specify the
+        association of nodes to various roles. Nodes not explicitly
+        associated with any role will be associated with various roles
+        based on node attributes.
+
+      * <<-dry_run>>
+
+        * Execute the command without actually making the changes effective. 
+
+      * <<-wait>>
+
+        * Optionally wait for the cluster to reach the goal state. The
+        progress of activating the cluster is printed to the user.
+
+  []
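+
+  For illustration, a hypothetical create invocation (the cluster name,
+  stack name, and node expression are made-up values) might look like:
+
+  * ambari cluster create -name "MyCluster" -stack "hadoop-stack" -nodes "node000-099" -goalstate ACTIVE -wait
+
+  []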
+
+  * <<ambari cluster update>> 
+
+    * Update the cluster definition. All options except <<-name>> are
+    optional. An example invocation follows the option list.
+
+    * <<Options:>>
+
+      * <<-name>>  [ NAME ] <<[REQUIRED]>> 
+
+      * <<-desc>>   [ DESCRIPTION ]
+
+      * <<-stack>> [ STACK NAME ]   
+
+      * <<-stack_revision>> [ INTEGER ]
+
+        * Revision of the stack. If not specified, the latest revision is used.
+
+      * <<-stack-file>> [ FILENAME ]
+
+        * Update the stack with the same name as the cluster using the
+        given file, and use the updated stack in the updated
+        cluster. This option is mutually exclusive with <<-stack>> and
+        <<-stack_revision>>.
+
+      * <<-goalstate>> [ACTIVE/INACTIVE/ATTIC]
+
+      * <<-components>> [component, component, component]    
+
+        * Specify the list of desired active components that will be run
+        when the cluster is active.
+
+      * <<-nodes>>  [node_range_exp1, node_range_exp2]
+
+        * Specify the range of nodes associated with the cluster. The
+        Ambari controller will determine the changes and
+        allocate/deallocate nodes accordingly.
+
+      * <<-role>>  rolename=[node_exp1, node_exp2, …]  <<[REPEATABLE]>>
+
+        * Change the nodes associated with each role. The nodes will be
+        re-deployed and re-configured appropriately. If nodes are not assigned
+        to a role, Ambari will automatically assign them.
+
+      * <<-dry_run>>
+
+        * Show the details of the changes being made to the cluster
+        definition without actually submitting them to the Ambari
+        controller.
+
+      * <<-wait>>
+
+        * Wait for the cluster to reach the desired goal state. The
+        progress of activating the cluster will be printed to the user.
+
+  []
+
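+  For example, a hypothetical invocation that previews a goal-state
+  change before committing it:
+
+-------
+$ ambari cluster update -name cluster1 -goalstate INACTIVE -dry_run
+$ ambari cluster update -name cluster1 -goalstate INACTIVE -wait
+-------
+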
+  * <<ambari cluster list>> 
+
+    * List clusters. In non-verbose mode, only the cluster names are
+    returned.
+
+    * <<Options:>>
+
+      * <<-state>> [cluster_state]
+
+        * Optionally list only the cluster(s) in the specified state.
+
+      * <<-verbose>>
+
+        * This option will provide verbose cluster information, which includes
+        the name, creating user, state, and current number of nodes.
+
+  []
+
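+  For example, to list full details for the active clusters (the state
+  name shown here is illustrative):
+
+-------
+$ ambari cluster list -state active -verbose
+-------
+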
+  * <<ambari cluster get>> 
+
+    * Display detailed information about the specified cluster.
+
+    * <<Options:>>
+
+      * <<-name>> [NAME] <<[REQUIRED]>>
+
+        * Name of the cluster
+
+  []
+
+  * <<ambari cluster delete>>
+
+    * Delete the cluster. The cluster is first deactivated; the
+    controller then frees all the associated nodes in the background
+    and finally removes the cluster definition.
+
+    * <<Options:>>
+
+      * <<-name>> [NAME] <<[REQUIRED]>> 
+
+      * <<-wait>>
+
+        * Optionally wait for the cluster to be deleted. The progress
+        of deleting the cluster will be displayed to the user.
+
+  []
+
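+  For example (the cluster name is hypothetical):
+
+-------
+$ ambari cluster delete -name cluster1 -wait
+-------
+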
+  * <<ambari cluster rename>> 
+
+    * Rename the cluster.
+
+    * <<Options:>>
+
+      * <<-name>> [NAME] <<[REQUIRED]>> 
+
+      * <<-newname>> [NEW NAME] <<[REQUIRED]>>
+
+  []
+
+  * <<ambari cluster stack>>
+
+    * Display the stack associated with the cluster. The
+    <<-expand>> option provides the derived stack configuration as
+    generated by recursively expanding the parent stack.
+
+    * <<Options:>>
+
+      * <<-name>> [CLUSTER_NAME] <<[REQUIRED]>>
+
+        * Cluster name
+
+      * <<-file>> [Local File Path]
+
+        * Optionally store the configuration at the given local file
+        path. If not specified, the configuration is displayed on the
+        console.
+
+      * <<-expand>>
+
+        * Expand the stack by inlining the parent stack into it.
+
+  [] 
+
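+  For example, to save the fully expanded stack of a hypothetical
+  cluster to a local file:
+
+-------
+$ ambari cluster stack -name cluster1 -expand -file /tmp/cluster1-stack
+-------
+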
+  * <<ambari cluster nodes>> 
+
+    * List the nodes associated with the specified cluster.
+
+    * <<Options:>>
+
+      * <<-name>> [CLUSTER_NAME] <<[REQUIRED]>> 
+
+        * Cluster name
+
+      * <<-alive>> [true/false]
+
+        * Only list the nodes that are alive (or dead). Alive nodes
+        are those regularly heartbeating with the Ambari controller. If
+        this option is not specified, all nodes are returned.
+
+      * <<-role>> [ROLE_NAME] 
+
+        * Only list the nodes that are associated with the given role. If
+        it is not given, all nodes are listed.
+
+  []
+
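+  For example, to list the live nodes running the datanode role in a
+  hypothetical cluster:
+
+-------
+$ ambari cluster nodes -name cluster1 -alive true -role datanode
+-------
+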
+Stack Commands
+
+  * <<ambari stack create>> 
+
+    * Add or update a stack. If the stack already exists, it is
+    updated to create a new revision; otherwise a new stack is
+    created.
+
+    * <<Options:>>
+
+      * <<-name>> [NAME] <<[REQUIRED]>>
+
+        * Name of the stack.
+
+      * <<-location>> [FILE_PATH/URL] <<[REQUIRED]>>
+
+        * Local file path or URL from which the stack (in XML
+        format) is to be imported into Ambari.
+
+  []
+
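+  For example (the stack name and URL are hypothetical):
+
+-------
+$ ambari stack create -name site \
+    -location http://repo.my.domain.com/stacks/site.xml
+-------
+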
+  * <<ambari stack update>>
+
+    * Update the stack. Only the properties set in the file will be updated.
+
+    * <<Options:>>
+
+      * <<-name>> [NAME] <<[REQUIRED]>>
+
+        * Name of the stack.
+
+      * <<-location>> [FILE_PATH/URL] <<[REQUIRED]>>
+
+        * Local file path or URL from which the stack (in XML
+        format) is to be imported into Ambari.
+
+  []
+
+  * <<ambari stack list>> 
+
+    * List all the stacks. In non-verbose mode, just lists the stacks' names.
+
+    * <<Options:>>
+
+      * <<-name>> [STACK_NAME]
+
+        * Optionally specify a stack name to list only that stack.
+
+      * <<-verbose>>
+
+        * Lists the stack's name, current revision, parent stack, and the 
+        date it was last modified.
+
+      * <<-tree>>
+
+        * Optionally display the stack derivation hierarchy.
+
+  [] 
+
+  * <<ambari stack get>> 
+
+    * Get the stack document in JSON format
+
+    * <<Options:>>
+
+      * <<-name>> [NAME] <<[REQUIRED]>>
+
+      * <<-revision>> [NUMBER]
+
+        * Optionally specify the revision number; otherwise the latest
+        revision is returned.
+
+      * <<-file>> [Local File Path]
+
+        * Optionally store the configuration at the given local file
+        path. If not specified, the configuration is displayed on the
+        console.
+
+  []
+
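+  For example, to fetch revision 42 of a hypothetical stack into a
+  local file:
+
+-------
+$ ambari stack get -name site -revision 42 -file /tmp/site-r42.json
+-------
+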
+  * <<ambari stack delete>> 
+
+    * Delete the stack.
+
+    * <<Options:>>
+
+      * <<-name>> [NAME] <<[REQUIRED]>>
+
+        * Name of the stack.
+
+  []
+
+  * <<ambari stack history>>
+
+    * List all the revisions of the specified stack.
+
+    * <<Options:>>
+
+      * <<-name>> [NAME] <<[REQUIRED]>>
+
+        * Name of the stack.
+
+      * <<-tree>>
+
+        * Optionally display the stack derivation hierarchy.
+
+  []
+
+Node Commands 
+
+  * <<ambari node list>>
+
+    * Lists the nodes being managed by Ambari. In non-verbose mode, just lists
+    the full host names.
+
+    * <<Options:>>
+
+      * <<-verbose>>
+
+        * Display each node's name, cluster, current state, and
+        roles.
+
+      * <<-allocated>> [true/false]
+
+        * Only include nodes that are allocated (or unallocated) to a
+        cluster. If not specified, both allocated and free nodes are
+        included.
+
+      * <<-alive>> [true/false]
+
+        * Only include nodes that are alive (or dead). If not
+        specified, both alive and dead nodes are included.
+
+  []
+
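+  For example, to show details for the unallocated nodes that are
+  still heartbeating:
+
+-------
+$ ambari node list -allocated false -alive true -verbose
+-------
+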
+  * <<ambari node get>>
+
+    * Display all of the information about the node, including the
+    node name, machine attributes, node state, associated cluster,
+    list of roles, and the current state of the servers.
+
+    * <<Options:>>
+
+      * <<-name>> [NAME] <<[REQUIRED]>>
+
+        * Node name
+
+  []
+
+Configuration Commands
+
+  * <<ambari configure>>
+
+    * Configures Ambari on this machine by re-writing the configuration file.
+
+    * <<Options:>>
+
+      * <<-agent-password>> [PASSWORD]
+
+        * Set the password for the Ambari agent to authenticate to the 
+          controller.
+
+      * <<-controller>> [CONTROLLER URL]
+
+        * Set the URL for the Ambari controller REST API.
+
+  * <<ambari add-user>>
+
+    * Add a new user as an Ambari administrator.
+
+    * <<Options:>>
+
+      * <<-user>> [USER NAME] <<[REQUIRED]>>
+
+        * Add the given username to the database of authorized administrators.
+
+      * <<-kerberos>>
+
+        * The user should authenticate using Kerberos.
+
+      * <<-password>> [PASSWORD]
+
+        * The user should provide the given password when authenticating.
+
+Server Commands
+
+  * <<ambari controller>>
+
+    * Start the Ambari controller on this machine
+
+    * <<Options:>>
+
+      * <<-autostart>> [BOOLEAN]
+
+        * In addition to starting the server, update the system to
+          (not) restart the controller when the system reboots.
+
+  * <<ambari agent>>
+
+    * Start the Ambari agent on this machine
+
+    * <<Options:>>
+
+      * <<-autostart>> [BOOLEAN]
+
+        * In addition to starting the agent, update the system to
+          (not) restart the agent when the system reboots.
+
+Generic Commands
+
+  * <<ambari help>>
+
+    * Provides the CLI help
+
+  * <<ambari version>>
+
+    * Print the Ambari CLI version
diff --git a/src/site/apt/download.apt b/src/site/apt/download.apt
new file mode 100644
index 0000000..a800bec
--- /dev/null
+++ b/src/site/apt/download.apt
@@ -0,0 +1,18 @@
+~~ Licensed to the Apache Software Foundation (ASF) under one or more
+~~ contributor license agreements.  See the NOTICE file distributed with
+~~ this work for additional information regarding copyright ownership.
+~~ The ASF licenses this file to You under the Apache License, Version 2.0
+~~ (the "License"); you may not use this file except in compliance with
+~~ the License.  You may obtain a copy of the License at
+~~
+~~     http://www.apache.org/licenses/LICENSE-2.0
+~~
+~~ Unless required by applicable law or agreed to in writing, software
+~~ distributed under the License is distributed on an "AS IS" BASIS,
+~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+~~ See the License for the specific language governing permissions and
+~~ limitations under the License.
+~~
+Ambari Downloads
+
+  There are no downloads yet.
\ No newline at end of file
diff --git a/src/site/apt/index.apt b/src/site/apt/index.apt
new file mode 100644
index 0000000..05bf0a1
--- /dev/null
+++ b/src/site/apt/index.apt
@@ -0,0 +1,380 @@
+~~ Licensed to the Apache Software Foundation (ASF) under one or more
+~~ contributor license agreements.  See the NOTICE file distributed with
+~~ this work for additional information regarding copyright ownership.
+~~ The ASF licenses this file to You under the Apache License, Version 2.0
+~~ (the "License"); you may not use this file except in compliance with
+~~ the License.  You may obtain a copy of the License at
+~~
+~~     http://www.apache.org/licenses/LICENSE-2.0
+~~
+~~ Unless required by applicable law or agreed to in writing, software
+~~ distributed under the License is distributed on an "AS IS" BASIS,
+~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+~~ See the License for the specific language governing permissions and
+~~ limitations under the License.
+~~
+Introduction
+
+  Apache Ambari™ is a monitoring, administration and lifecycle
+  management project for Apache Hadoop™ clusters. Hadoop clusters
+  require many inter-related components that must be installed,
+  configured, and managed across the entire cluster. The set of
+  components that are currently supported by Ambari includes:
+
+  * {{{http://hbase.apache.org} Apache HBase™}}
+
+  * {{{http://incubator.apache.org/hcatalog} Apache HCatalog™}}
+
+  * {{{http://hadoop.apache.org/hdfs} Apache Hadoop HDFS™}}
+
+  * {{{http://hive.apache.org} Apache Hive™}}
+
+  * {{{http://hadoop.apache.org/mapreduce} Apache Hadoop MapReduce™}}
+
+  * {{{http://pig.apache.org} Apache Pig™}}
+
+  * {{{http://zookeeper.apache.org} Apache Zookeeper™}}
+
+  []
+
+  Ambari's audience is operators responsible for managing Hadoop clusters.
+  It allows them to:
+
+  * Deploy and configure Hadoop
+
+    * Define a set of nodes as a cluster
+
+    * Assign roles to particular nodes or let Ambari pick a mapping for them.
+
+    * Override the default versions of components or configure 
+    particular values.
+
+  * Upgrade a cluster
+
+    * Modify the versions or configuration of each component
+
+    * Upgrade easily without losing data
+
+  * Monitor and perform other maintenance tasks
+
+    * Check which servers are currently running across the cluster
+
+    * Start and stop Hadoop services (such as HDFS, MapReduce, and HBase)
+
+  * Integrate with other tools
+
+    * Provide a REST interface for defining or manipulating clusters.
+
+  []
+
+  Ambari provides a REST, command line, and graphical interface. The command 
+  line and graphical interface are implemented using the REST interface and 
+  all three have the same functionality. The graphical interface is 
+  browser-based using JSON and JavaScript. 
+
+  Ambari requires that the base operating system has been deployed and
+  is managed via existing tools, such as Chef or Puppet. Ambari is
+  solely focused on simplifying the configuration and management of the
+  Hadoop stack. Ambari does, however, support adding third-party
+  software packages to be deployed as part of the Hadoop cluster.
+
+Key concepts
+
+  * <<Nodes>> are machines in the datacenter that are managed by Ambari to
+  run Hadoop clusters.
+
+  * <<Components>> are the individual software products that are
+  installed to create a complete Hadoop cluster. Some components
+  are active and include servers, such as HDFS, and some are passive
+  libraries, such as Pig. The servers of active components provide a 
+  <<service>>.
+
+  * Components consist of <<roles>> that represent the different
+  configurations required by the component. Components have a client
+  role and a role for each server. HDFS roles, for example, are
+  'client,' 'namenode,' 'secondary namenode,' and 'datanode.' The
+  client role installs the client software and configuration, while
+  each server role installs the appropriate software and configuration.
+
+  * <<Stacks>> define the software and configuration for a
+  cluster. Stacks can inherit from each other and only need to specify
+  the parts that differ from their parent. Thus, although stacks
+  can specify the version for each component, most will not.
+
+  * A <<cluster>> combines a stack with a set of
+  nodes. When a cluster is defined, the user may specify the nodes
+  for each role or let Ambari automatically assign the roles based on
+  the nodes' characteristics. A cluster's state can be active,
+  inactive, or retired. Active clusters will be started; inactive
+  clusters keep their reserved nodes but will be stopped; retired
+  clusters will keep their definition, but their nodes are released.
+
+Configuration
+
+  Ambari abstracts cluster configuration into groups of string
+  key/value pairs. This abstraction lets us manage and manipulate the
+  configurations in a consistent and component agnostic way. The
+  groups are named for the file that they end up in, and the groups
+  are defined by the set of components. For Hadoop, the groups are:
+ 
+  * hadoop/hadoop-env
+
+  * hadoop/capacity-scheduler
+
+  * hadoop/core-site
+
+  * hadoop/hdfs-site
+
+  * hadoop/log4j.properties
+
+  * hadoop/mapred-queue-acl
+
+  * hadoop/mapred-site
+
+  * hadoop/metrics2.properties
+
+  * hadoop/task-controller
+
+* Configuration example
+
+  Although users will typically define configurations via the web UI,
+  it is useful to examine a sample JSON expression that would define a
+  configuration in the REST API.
+
+------
+{
+  "hadoop/hadoop-env": {
+    "HADOOP_CONF_DIR": "/etc/hadoop",
+    "HADOOP_NAMENODE_OPTS": "-Dsecurity.audit.logger=INFO,DRFAS",
+    "HADOOP_CLIENT_OPTS": "-Xmx128m"
+  },
+  "hadoop/core-site": {
+     "fs.default.name" : "hdfs://${namenode}:8020/",
+     "hadoop.tmp.dir" : "/grid/0/hadoop/tmp",
+     "hadoop.security.authentication" : "kerberos",
+  }
+  "hadoop/hdfs-site": {
+     "hdfs.user": "hdfs"
+  }
+}
+------
+
+Stacks
+
+  Stacks form the basis of defining what software needs to be
+  installed and run and the configuration for that software. Rather
+  than have the administrator define the entire stack from scratch,
+  stacks inherit most of their properties from their parent. This
+  allows the administrator to take a default stack and only modify the
+  properties that need to be changed without dealing with a lot of
+  boilerplate.
+
+  Stacks include a list of repositories that contain the rpms or
+  tarballs. The repositories will be searched in the given order and
+  if the required component versions are not found, the next one will
+  be searched. If the required file isn't found, the parent stack's
+  repository list will be searched and so on.
+
+  Stacks define the version of each component that they need. Most
+  of the versions will come from the stack, but the operator can
+  override the version as needed.
+
+  The stack defines the configuration parameters to be used by that
+  stack. To keep the stacks generic, the configuration values may
+  refer to the nodes that hold a particular role. Thus,
+  <<<fs.default.name>>> may be configured to
+  <<<hdfs://${namenode}/>>> and the name of the namenode will be
+  filled in during the configuration. A few configuration settings
+  need to be set exclusively for particular roles. For example, the
+  NameNode needs to enable the https security option.
+
+* Stack example
+
+  Here's a example JSON expression for defining a stack.
+
+------
+{
+  "parent": "site",        /* declare parent as site, r42 */
+  "parent-revision": "42",
+  "repositories": {
+    "yum": ["http://incubator.apache.org/ambari/stack/yum"],
+    "tar": ["http://incubator.apache.org/ambari/stack/tar"]
+  },
+  "configuration": {    /* define the general configuration */
+    "hadoop/hadoop-env": {
+      "HADOOP_CONF_DIR": "/etc/hadoop",
+      "HADOOP_NAMENODE_OPTS": "-Dsecurity.audit.logger=INFO,DRFAS",
+      "HADOOP_CLIENT_OPTS": "-Xmx128m"
+    },
+    "hadoop/core-site": {
+       "fs.default.name" : "hdfs://${namenode}:8020/",
+       "hadoop.tmp.dir" : "/grid/0/hadoop/tmp",
+       "hadoop.security.authentication" : "kerberos",
+    }
+    "hadoop/hdfs-site": {
+       "hdfs.user": "hdfs"
+    }
+  },
+  "components": {
+    "common": {
+      "version": "0.20.204.1" /* define a new version for common */
+      "arch": "i386"
+    },
+    "hdfs": {
+      "roles": {
+        "namenode": { /* override one value on the namenode */
+          "hadoop/hdfs-site": {
+            "dfs.https.enable": "true"
+          }
+        }
+      }
+    },
+    "pig": {
+      "version": "0.9.0"
+    }
+  }
+}
+------
+
+Component Definitions
+
+  We are designing the Ambari infrastructure with a generic interface
+  for defining components. The current version of Ambari doesn't
+  publicize the interface, but the intention is to open it up to
+  support third-party components. Ambari will search the configured
+  repositories for the component definition and use that definition to
+  install, manage, run, and remove the component. For consistency
+  in the architecture, the standard Hadoop services will also be
+  plugged in to Ambari using the same mechanism.
+
+  The component definitions are written as a text file that provides
+  the commands to perform each kind of action, such as install, start,
+  stop, or remove. There will be a well-defined environment in which
+  the commands run, to provide consistency between platforms.
+
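+  As a purely illustrative sketch (the interface is not yet published,
+  so every action name and path below is hypothetical), a definition
+  for a passive component such as Pig might map each action to a shell
+  command:
+
+------
+# hypothetical component definition; ${pkgs} and ${stack} refer to
+# the per-role directories described under Stack Deployment below
+install: tar xzf ${pkgs}/pig-${version}.tar.gz -C ${stack}
+start: true    # passive library; no server to start
+stop: true
+remove: rm -rf ${stack}/pig
+------
+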
+Clusters
+
+  Defining a cluster involves picking a stack and assigning nodes to the
+  cluster.
+
+  Clusters have a goal state, which can be one of three values:
+
+  * <<Active>> -- the user wants the cluster to be started
+
+  * <<Inactive>> -- the user wants the cluster to be stopped
+
+  * <<Retired>> -- the user wants the cluster to be stopped, the nodes
+  released, and the data deleted. This is useful if the user expects
+  to recreate the cluster eventually but wants to release the nodes.
+
+  []
+
+  Clusters also have a list of active components that should be running. This 
+  overrides the stack and provides a mechanism for the administrator to
+  shutdown a service temporarily.
+
+* Cluster example
+
+------
+{
+  "description": "alpha cluster",
+  "stack": "kryptonite",
+  "nodes": ["node000-999", "gateway0-1"],
+  "goal": "active",
+  "services": ["hdfs", "mapreduce"],
+  "roles": {
+    "namenode": ["node000"],
+    "jobtracker": ["node001"],
+    "secondary-namenode": ["node002"],
+    "gateway": ["gateway0-1"],
+  }
+}
+------
+
+Stack Deployment
+
+  Ambari will deploy the software for its clusters from either
+  OS-specific packages (rpms and debs) or tarballs. Rpms have the
+  advantage of putting the software in a user-convenient location,
+  such as <<</usr/bin>>>, but they are specific to an OS and don't
+  support having multiple versions installed at once, while tarballs
+  require rebuilding the entire deployment to change one component
+  version.
+
+  The layout on the nodes looks like:
+
+------
+${ambari}/clusters/${cluster}-${role}/stack/
+                                     /logs/
+                                     /data/disk-${0 to N}/
+                                     /pkgs/
+------
+
+  The software and configuration for the role are installed in
+  <<<stack>>>. The logs for the managed cluster are put into
+  <<<logs>>>. The cluster's data is in <<<data>>> with symlinks to
+  each of the disks that machine should use. Finally, the component
+  tarballs are placed in the <<<pkgs>>> directory to be installed by
+  the component.
+
+Ambari Installation
+
+  Ambari will be packaged as both OS-specific packages (rpms and debs)
+  and tarballs, which need to be installed on each node. The user
+  chooses one node as the Ambari controller, which is the point of
+  interaction for both the web UI and the REST interface. If the user
+  doesn't already have a Zookeeper service for Ambari to use, Ambari
+  will run one internally for its own use.
+
+Monitoring
+
+  Monitoring the current state of the cluster is an important part of
+  operating Hadoop. Ambari currently supports running basic
+  health checks on processes running on nodes. The status will be
+  aggregated as the health of the corresponding Hadoop
+  services. Roughly, these checks will consist of pinging the RPC port
+  of the server to see if it responds.
+
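+  Such a check can be sketched from the command line (the host name is
+  hypothetical; 8020 is the namenode RPC port used in the
+  configuration example above):
+
+------
+# probe the namenode RPC port; the exit status reports basic health
+$ nc -z node000 8020 && echo "namenode RPC port is responding"
+------
+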
+High-level Design
+
+  Ambari is managed by the Ambari <<Controller>>, a central server that
+  provides the user interface and directs the agent on each node.
+  The agent is responsible for installing, configuring, running and
+  cleaning up components of the Hadoop stack on the local node. Each agent will
+  contact the controller when it has finished its work or N seconds have 
+  passed. The controller stores all of the information about the clusters and
+  stacks in Zookeeper, which is highly available and redundant.
+
+  Ambari abstracts out the configuration and software stack of the
+  cluster as a stack. Every Ambari release provides a default
+  stack. If a site has multiple clusters, it can define a "site"
+  stack that provides the site-wide defaults and have the cluster
+  stacks derive from it. Ambari will keep the revision history of
+  stacks to enable operators to diagnose problems and track changes.
+
+Roadmap
+
+  In the future, Ambari will integrate with and use existing
+  datacenter management and monitoring infrastructure, such as
+  Nagios. Another area of focus is a store for metrics data. HBase is
+  a likely candidate for such a store.
+
+  We also need to support adding and removing nodes from a running cluster
+  without bringing it down first. This will require decommissioning
+  nodes before they are removed.
+
+  More support needs to be added for secure clusters, including
+  providing a single interface to manage access control lists for the
+  cluster.
+
+  Ambari would also host a KDC so that servers like the
+  tasktracker and the datanode can have their own keytabs generated and
+  deployed by Ambari. The native KDC could optionally hook up to the
+  Corporate KDC for user management, or host user management within
+  itself. Continuing on the security aspects, Ambari would also have a
+  convenient way of allowing administrators to specify ACLs for
+  services and queues.
+
+  We plan to add an SNMP interface for integration with other cluster
+  management tools.
+
diff --git a/src/site/apt/scenarios.apt b/src/site/apt/scenarios.apt
new file mode 100644
index 0000000..5253d7d
--- /dev/null
+++ b/src/site/apt/scenarios.apt
@@ -0,0 +1,149 @@
+~~ Licensed to the Apache Software Foundation (ASF) under one or more
+~~ contributor license agreements.  See the NOTICE file distributed with
+~~ this work for additional information regarding copyright ownership.
+~~ The ASF licenses this file to You under the Apache License, Version 2.0
+~~ (the "License"); you may not use this file except in compliance with
+~~ the License.  You may obtain a copy of the License at
+~~
+~~     http://www.apache.org/licenses/LICENSE-2.0
+~~
+~~ Unless required by applicable law or agreed to in writing, software
+~~ distributed under the License is distributed on an "AS IS" BASIS,
+~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+~~ See the License for the specific language governing permissions and
+~~ limitations under the License.
+~~
+Ambari Example Scenarios
+
+  These are example scenarios of typical use cases.
+
+* Installing Ambari
+
+  On controller:
+
+-------
+$ rpm -i ambari-0.1.0-1.rpm
+$ ambari configure -agent-password my.pw
+$ ambari add-user -user sue -kerberos
+$ ambari -autostart true controller
+-------
+
+  If you have password-less ssh and pdsh installed, you can install the agents
+  directly on node00 to node99 using:
+
+-------
+$ pdsh -w node00-99 \
+   'rpm -i ambari-0.1.0-1.rpm; \
+    ambari configure -controller controller.my.domain.com \
+           -agent-password my.pw ; \
+    ambari -autostart true agent'
+-------
+
+* Creating a simple cluster using CLI
+
+  Create a file cluster1.json with the specific configuration
+  alterations for cluster1.
+
+-------
+{"@parentName":"hadoop-security",
+ "configuration": {
+   "category": [
+     {"@name": "ambari",
+      "property": [
+        {"@name": "data.dirs", "@value": "/data/*"},
+        {"@name": "user.realm", "@value": "MY.DOMAIN.COM"}]}]}}
+-------
+
+  Run the following command using the CLI. It creates the stack from
+  the given file and then creates a cluster based on that stack using
+  the machines host00-99.
+
+-------
+$ ambari cluster create -name cluster1 -stack-file cluster1.json \
+    -nodes host00-99 -goalstate active -role namenode=host00 \
+    -role jobtracker=host01 -role client=host98-99 -wait
+-------
+
+* Upgrading a cluster using CLI
+
+  Create a file cluster1-update.json that updates the version of the
+  Hadoop component to a new version.
+
+-------
+{"components":
+  {"@name":"hadoop", "@version":"0.20.205.1"}
+}
+-------
+
+  Run the command to update the stack and cluster to the new version and wait
+  for the command to complete.
+
+-------
+$ ambari cluster update -name cluster1 -stack-file cluster1-update.json -wait
+-------
+
+* Creating a more complicated cluster using CLI
+
+  If you want to bump up the memory for the clients' JVM to 256MB and the
+  NameNode to 512MB, you can define a cluster like the one below:
+
+------
+{"@parentName":"hadoop-security",
+ "configuration": {
+   "category": [
+     {"@name": "ambari",
+      "property": [
+        {"@name": "data.dirs", "@value": "/data/*"},
+        {"@name": "keytab.dir", "@value": " /etc/security/keytab"},
+        {"@name": "user.realm", "@value": "MY.DOMAIN.COM"}]},
+     {"@name": "hadoop-env",
+      "property": [
+        {"@name": "HADOOP_CLIENT_OPTS", "-Xmx256m"}]}]},
+ "components": [
+   {"@name":"hdfs",
+    "configuration": {
+      "category": [
+        {"@name": "hadoop-env",
+         "property": [
+           {"@name": "HADOOP_NAMENODE_OPTS",
+            "@value": "-Xmx512m -Dsecurity.audit.logger=INFO,DRFAS
+                       -Dhdfs.audit.logger=INFO,DRFAAUDIT"}]}]}}]}
+------
+
+  Save the definition as cluster2.json and run the command to create
+  the new cluster:
+
+-------
+$ ambari cluster create -name cluster2 -stack-file cluster2.json \
+    -nodes host00-99 -goalstate active -role namenode=host00 \
+    -role jobtracker=host01 -role client=host98-99 -wait
+-------
+
+* Creating a cluster using REST
+
+-------
+$ curl --negotiate -X PUT -T cluster1.json -H 'Content-Type: application/json' \
+     http://ambari.my.domain.com:4080/rest/stack/cluster1
+$ curl --negotiate -X PUT -T - -H 'Content-Type: application/json' \
+     http://ambari.my.domain.com:4080/rest/cluster/cluster1 << EOF
+{"@stackName": "cluster1",
+ "@goalState": "active",
+ "@nodes": "host00-99"
+ "roleToNodes": [{"@role": "namenode", "@nodes": "host00"},
+                 {"@role": "jobtracker", "@nodes": "host01"},
+                 {"@role": "client", "@nodes": "host98,host99"}]
+}
+EOF
+-------
+
+* Upgrade a cluster using REST
+
+-------
+$ curl --negotiate -X PUT -T - -H 'Content-Type: application/json' \
+     http://ambari.my.domain.com:4080/rest/stack/cluster1 << EOF
+{"components": {"@name":"hadoop", "@version":"0.20.205.1"}}
+EOF
+$ curl --negotiate -X PUT -T - -H 'Content-Type: application/json' \
+     http://ambari.my.domain.com:4080/rest/cluster/cluster1 << EOF
+{"@stackName": "cluster1"}
+EOF
+-------
diff --git a/src/site/resources/images/apache-ambari-project.png b/src/site/resources/images/apache-ambari-project.png
new file mode 100644
index 0000000..7a39966
--- /dev/null
+++ b/src/site/resources/images/apache-ambari-project.png
Binary files differ
diff --git a/src/site/site.xml b/src/site/site.xml
new file mode 100644
index 0000000..e7a1518
--- /dev/null
+++ b/src/site/site.xml
@@ -0,0 +1,68 @@
+<?xml version="1.0" encoding="ISO-8859-1"?>
+
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<project name="Ambari">
+  <bannerLeft>
+    <name>Ambari</name>
+    <src>http://incubator.apache.org/ambari/images/apache-ambari-project.png</src>
+    <href>http://incubator.apache.org/ambari</href>
+  </bannerLeft>
+  <body>
+    <head>
+       <!-- Start of Google analytics -->
+       <script type="text/javascript">
+         var _gaq = _gaq || [];
+         _gaq.push(['_setAccount', 'UA-27188762-1']);
+         _gaq.push(['_trackPageview']);
+
+         (function() {
+            var ga = document.createElement('script'); 
+            ga.type = 'text/javascript'; ga.async = true;
+            ga.src = ('https:' == document.location.protocol ? 
+                      'https://ssl' : 'http://www') + 
+                     '.google-analytics.com/ga.js';
+            var s = document.getElementsByTagName('script')[0]; 
+            s.parentNode.insertBefore(ga, s);
+          })();
+       </script>
+       <!-- End of Google analytics -->
+    </head>
+
+    <links>
+      <item name="Apache" href="http://www.apache.org/" />
+      <item name="Hadoop" href="http://hadoop.apache.org/"/>
+      <item name="HBase" href="http://hbase.apache.org/"/>
+      <item name="Hive" href="http://hive.apache.org/"/>
+      <item name="Pig" href="http://pig.apache.org/"/>
+      <item name="HCatalog" href="http://incubator.apache.org/hcatalog/"/>
+      <item name="Zookeeper" href="http://zookeeper.apache.org/"/>
+    </links>
+
+    <menu name="Ambari">
+      <item name="Introduction" href="index.html"/>
+      <item name="Download" href="download.html"/>
+      <item name="REST API" href="application.html"/>
+      <item name="CLI" href="cli.html"/>
+      <item name="Scenarios" href="scenarios.html"/>
+    </menu>
+
+    <menu ref="reports"/>
+
+  </body>
+</project>