Documentation for distributed deployment and schedulers (#747)

* added initial scheduler documentation

* Documentation for Deployment Overview and configuration. Also modified Aurora Cluster, Mesos Cluster, Local Cluster and Slurm Cluster

* incorporated feedback on English corrections

* empty new line at the end

* incorporate feedback on English

* incorporate additional feedback

* removed setup from toc

* addressed review feedback

* change mdash to use ---

* reworded text and removed the link
diff --git a/heron/uploaders/src/java/com/twitter/heron/uploader/hdfs/sample.yaml b/heron/uploaders/src/java/com/twitter/heron/uploader/hdfs/sample.yaml
index 859e0b1..99edce2 100644
--- a/heron/uploaders/src/java/com/twitter/heron/uploader/hdfs/sample.yaml
+++ b/heron/uploaders/src/java/com/twitter/heron/uploader/hdfs/sample.yaml
@@ -1,5 +1,6 @@
-# Directory of Config files for hadoop client to read from
+# Directory of config files for local hadoop client to read from
 heron.uploader.hdfs.config.directory:              "/home/hadoop/hadoop/conf/"
 
 # The URI of the directory for uploading topologies in the hdfs uploader
-heron.uploader.hdfs.topologies.directory.uri:      "hdfs:///heron/topology/"
\ No newline at end of file
+heron.uploader.hdfs.topologies.directory.uri:      "hdfs:///heron/topology/"
+
diff --git a/website/content/docs/concepts/architecture.md b/website/content/docs/concepts/architecture.md
index f71e391..0834260 100644
--- a/website/content/docs/concepts/architecture.md
+++ b/website/content/docs/concepts/architecture.md
@@ -37,26 +37,26 @@
 
 ## Heron Design Goals
 
-* **Isolation** — [Topologies](../topologies) should be process based
+* **Isolation** --- [Topologies](../topologies) should be process based
   rather than thread based, and each process should run in isolation for the
   sake of easy debugging, profiling, and troubleshooting.
-* **Resource constraints** — Topologies should use only those resources
+* **Resource constraints** --- Topologies should use only those resources
   that they are initially allocated and never exceed those bounds. This makes
   Heron safe to run in shared infrastructure.
-* **Compatibility** — Heron is fully API and data model compatible with
+* **Compatibility** --- Heron is fully API and data model compatible with
   [Apache Storm](http://storm.apache.org), making it easy for developers
   to transition between systems.
-* **Back pressure** — In a distributed system like Heron, there are no
+* **Back pressure** --- In a distributed system like Heron, there are no
   guarantees that all system components will execute at the same speed. Heron
   has built-in [back pressure mechanisms]({{< ref "#stream-manager" >}}) to ensure that
   topologies can self-adjust in case components lag.
-* **Performance** &mdash; Many of Heron's design choices have enabled Heron to
+* **Performance** --- Many of Heron's design choices have enabled Heron to
   achieve higher throughput and lower latency than Storm while also offering
   enhanced configurability to fine-tune potential latency/throughput trade-offs.
-* **Semantic guarantees** &mdash; Heron provides support for both
+* **Semantic guarantees** --- Heron provides support for both
   [at-most-once and at-least-once](https://kafka.apache.org/08/design.html#semantics)
   processing semantics.
-* **Efficiency** &mdash; Heron was built with the goal of achieving all of the
+* **Efficiency** --- Heron was built with the goal of achieving all of the
   above with the minimal possible resource usage.
 
 ## Topology Components
diff --git a/website/content/docs/contributors/codebase.md b/website/content/docs/contributors/codebase.md
index 1979512..09f3815 100644
--- a/website/content/docs/contributors/codebase.md
+++ b/website/content/docs/contributors/codebase.md
@@ -31,16 +31,16 @@
 
 ## Main Tools
 
-* **Build tool** &mdash; Heron uses [Bazel](http://bazel.io/) as its build tool.
+* **Build tool** --- Heron uses [Bazel](http://bazel.io/) as its build tool.
 Information on setting up and using Bazel for Heron can be found in [Compiling
 Heron](../../developers/compiling/compiling).
 
-* **Inter-component communication** &mdash; Heron uses [Protocol
+* **Inter-component communication** --- Heron uses [Protocol
 Buffers](https://developers.google.com/protocol-buffers/?hl=en) for
 communication between components. Most `.proto` definition files can be found in
 [`heron/proto`]({{% githubMaster %}}/heron/proto).
 
-* **Cluster coordination** &mdash; Heron relies heavily on ZooKeeper for cluster
+* **Cluster coordination** --- Heron relies heavily on ZooKeeper for cluster
 coordination for distributed deployment, be it for [Mesos/Aurora](../../operators/deployment/schedulers/aurora),
 [Mesos alone](../../operators/deployment/schedulers/mesos), or for a [custom
 scheduler](../custom-scheduler) that you build. More information on ZooKeeper
diff --git a/website/content/docs/contributors/custom-metrics-sink.md b/website/content/docs/contributors/custom-metrics-sink.md
index 07ea52a..a443724 100644
--- a/website/content/docs/contributors/custom-metrics-sink.md
+++ b/website/content/docs/contributors/custom-metrics-sink.md
@@ -20,15 +20,15 @@
 implementing your own.
 
 * [`GraphiteSink`](/api/metrics/com/twitter/heron/metricsmgr/sink/GraphiteSink.html)
-  &mdash; Sends each `MetricsRecord` object to a
+  --- Sends each `MetricsRecord` object to a
   [Graphite](http://graphite.wikidot.com/) instance according to a Graphite
   prefix.
 * [`ScribeSink`](/api/metrics/com/twitter/heron/metricsmgr/sink/ScribeSink.html)
-  &mdash; Sends each `MetricsRecord` object to a
+  --- Sends each `MetricsRecord` object to a
   [Scribe](https://github.com/facebookarchive/scribe) instance according to a
   Scribe category and namespace.
 * [`FileSink`](/api/metrics/com/twitter/heron/metricsmgr/sink/FileSink.html)
-  &mdash; Writes each `MetricsRecord` object to a JSON file at a specified path.
+  --- Writes each `MetricsRecord` object to a JSON file at a specified path.
 
 More on using those sinks in a Heron cluster can be found in [Metrics
 Manager](../../operators/configuration/metrics-manager).
@@ -62,7 +62,7 @@
 [`IMetricsSink`](http://heronproject.github.io/metrics-api/com/twitter/heron/metricsmgr/IMetricsSink)
 interface, which requires you to implement the following methods:
 
-* `void init(Map<String, Object> conf, SinkContext context)` &mdash; Defines the
+* `void init(Map<String, Object> conf, SinkContext context)` --- Defines the
   initialization behavior of the sink. The `conf` map is the configuration that
   is passed to the sink by the `.yaml` configuration file at
   `heron/config/metrics_sink.yaml`; the
@@ -70,12 +70,12 @@
   object enables you to access values from the sink's runtime context
   (the ID of the metrics manager, the ID of the sink, and the name of the
   topology).
-* `void processRecord(MetricsRecord record)` &mdash; Defines how each
+* `void processRecord(MetricsRecord record)` --- Defines how each
   `MetricsRecord` that passes through the sink is processed.
-* `void flush()` &mdash; Flush any buffered metrics; this function is called at
+* `void flush()` --- Flush any buffered metrics; this function is called at
   the interval specified by the `flush-frequency-ms`. More info can be found in
   the [Stream Manager](../../operators/configuration/stmgr) document.
-* `void close()` &mdash; Closes the stream and releases any system resources
+* `void close()` --- Closes the stream and releases any system resources
   associated with it; if the stream is already closed, invoking `close()` has no
   effect.
 
@@ -144,11 +144,11 @@
 
 For each sink you need to specify the following:
 
-* `class` &mdash; The Java class name of your custom implementation of the
+* `class` --- The Java class name of your custom implementation of the
   `IMetricsSink` interface, e.g. `biz.acme.heron.metrics.PrintSink`.
-* `flush-frequency-ms` &mdash; The frequency (in milliseconds) at which the
+* `flush-frequency-ms` --- The frequency (in milliseconds) at which the
   `flush()` method is called in your implementation of `IMetricsSink`.
-* `sink-restart-attempts` &mdash; The number of times that a sink will attempt to
+* `sink-restart-attempts` --- The number of times that a sink will attempt to
   restart if it throws exceptions and dies. If you do not set this, the default
   is 0; if you set it to -1, the sink will attempt to restart forever.
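+
+Putting these together, a `metrics_sinks.yaml` entry for the example `biz.acme.heron.metrics.PrintSink`
+class above might look like the following sketch. It assumes the file's usual layout of a top-level
+`sinks` list followed by one section per sink; adapt the sink name and values to your setup.
+
+```yaml
+# sinks to be loaded by the Metrics Manager
+sinks:
+  - print-sink
+
+# configuration for the custom sink registered above
+print-sink:
+  class: "biz.acme.heron.metrics.PrintSink"
+  flush-frequency-ms: 60000
+  sink-restart-attempts: 0
+```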
 
diff --git a/website/content/docs/operators/configuration/config-intro.md b/website/content/docs/operators/configuration/config-intro.md
index 989d4cd..4ed6b6e 100644
--- a/website/content/docs/operators/configuration/config-intro.md
+++ b/website/content/docs/operators/configuration/config-intro.md
@@ -4,9 +4,9 @@
 
 Heron can be configured at two levels:
 
-1. **The system level** &mdash; System-level configurations apply to the whole
+1. **The system level** --- System-level configurations apply to the whole
 Heron cluster rather than to any specific topology.
-2. **The topology level** &mdash; Topology configurations apply only to a
+2. **The topology level** --- Topology configurations apply only to a
 specific topology and can be modified at any stage of the topology's
 [lifecycle](../../../concepts/topologies#topology-lifecycle).
 
diff --git a/website/content/docs/operators/deployment/configuration.md b/website/content/docs/operators/deployment/configuration.md
new file mode 100644
index 0000000..c125877
--- /dev/null
+++ b/website/content/docs/operators/deployment/configuration.md
@@ -0,0 +1,93 @@
+# Configuring a Cluster
+
+To set up a Heron cluster, you need to configure a few files. Each file configures
+a component of the Heron streaming framework.
+
+* **scheduler.yaml** --- This file specifies the classes required for the launcher, the
+scheduler, and runtime management of the topology. Any other scheduler-specific parameters
+also go into this file.
+
+* **statemgr.yaml** --- This file contains the classes and configuration for the state manager.
+The state manager maintains the running state of the topology, including its logical plan,
+physical plan, scheduler state, and execution state.
+
+* **uploader.yaml** --- This file specifies the classes and configuration for the uploader,
+which uploads the topology jars to storage. Once the containers are scheduled, they
+download these jars from storage to run the topology.
+
+* **heron_internals.yaml** --- This file contains parameters that control
+how Heron behaves. Tuning these parameters requires advanced knowledge of Heron's architecture
+and components. If you are just getting started, the best option is to copy the provided sample
+configuration. Once you are familiar with the system, you can tune these parameters to achieve
+high-throughput or low-latency topologies.
+
+* **metrics_sinks.yaml** --- This file specifies where the runtime system and topology metrics
+are routed. By default, the `file sink` and `tmaster sink` must be present. In addition,
+the `scribe sink` and `graphite sink` are also supported.
+
+* **packing.yaml** --- This file specifies the class for the packing algorithm, which defaults
+to Round Robin if not specified.
+
+* **client.yaml** --- This file controls the behavior of the `heron` client. It is optional.
+
+# Assembling the Configuration
+
+All configuration files are assembled together to form the cluster configuration. For example,
+a cluster named `devcluster` that uses Aurora as the scheduler, ZooKeeper as the state manager,
+and HDFS as the uploader would have the following set of configurations.
+
+## scheduler.yaml (for Aurora)
+
+```yaml
+# scheduler class for distributing the topology for execution
+heron.class.scheduler: com.twitter.heron.scheduler.aurora.AuroraScheduler
+
+# launcher class for submitting and launching the topology
+heron.class.launcher: com.twitter.heron.scheduler.aurora.AuroraLauncher
+
+# location of java 
+heron.directory.sandbox.java.home: /usr/lib/jvm/java-1.8.0-openjdk-amd64/
+
+# Invoke the IScheduler as a library directly
+heron.scheduler.is.service: False
+```
+
+## statemgr.yaml (for ZooKeeper)
+
+```yaml
+# zookeeper state manager class for managing state in a persistent fashion
+heron.class.state.manager: com.twitter.heron.statemgr.zookeeper.curator.CuratorStateManager
+
+# zookeeper state manager connection string
+heron.statemgr.connection.string:  "127.0.0.1:2181"
+
+# path of the root address to store the state in zookeeper  
+heron.statemgr.root.path: "/heron"
+
+# create the zookeeper nodes, if they do not exist
+heron.statemgr.zookeeper.is.initialize.tree: True
+```
+
+## uploader.yaml (for HDFS)
+```yaml
+# Directory of config files for hadoop client to read from
+heron.uploader.hdfs.config.directory:              "/home/hadoop/hadoop/conf/"
+
+# The URI of the directory for uploading topologies in the hdfs
+heron.uploader.hdfs.topologies.directory.uri:      "hdfs:///heron/topology/"
+```
+
+## packing.yaml (for Round Robin)
+```yaml
+# packing algorithm for packing instances into containers
+heron.class.packing.algorithm:    com.twitter.heron.packing.roundrobin.RoundRobinPacking
+```
+
+## client.yaml (for heron cli)
+```yaml
+# should the role parameter be required
+heron.config.role.required: false
+
+# should the environ parameter be required
+heron.config.env.required: false
+```
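+
+## metrics_sinks.yaml (for file sink)
+
+The following is a minimal sketch of a `metrics_sinks.yaml`. It assumes the default layout of a
+top-level `sinks` list followed by one section per sink, and shows only the file sink; the default
+configuration shipped with Heron also defines the `tmaster sink`, and sink-specific options are
+omitted here.
+
+```yaml
+# sinks currently enabled
+sinks:
+  - file-sink
+
+# file sink writes metrics to local JSON files
+file-sink:
+  class: "com.twitter.heron.metricsmgr.sink.FileSink"
+  flush-frequency-ms: 60000
+```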
diff --git a/website/content/docs/operators/deployment/index.md b/website/content/docs/operators/deployment/index.md
index 3a6f0c5..e9ced79 100644
--- a/website/content/docs/operators/deployment/index.md
+++ b/website/content/docs/operators/deployment/index.md
@@ -2,12 +2,53 @@
 title: Deploying Heron
 ---
 
-Heron is designed to be run in clustered, scheduler-driven environments. It
-currently supports three scheduler options out of the box:
+Heron is designed to be run in clustered, scheduler-driven environments. It can
+be run in multi-tenant or dedicated clusters. Furthermore, Heron supports
+multiple clusters, and a user can submit topologies to any of them. Each
+cluster can use a different scheduler. A typical Heron deployment is shown
+in the following figure.
 
-* [Aurora](schedulers/aurora)
-* [Mesos](schedulers/mesos)
-* [Local scheduler](schedulers/local)
+<br />
+![Heron Deployment](/img/heron-deployment.png)
+<br/>
 
-To implement a new scheduler, see
-[Implementing a Custom Scheduler](../../contributors/custom-scheduler).
+A Heron deployment requires several components working together. The following must
+be deployed to run Heron topologies in a cluster:
+
+* **Scheduler** --- Heron requires a scheduler to run its topologies. It can 
+be deployed on an existing cluster running alongside other big data frameworks. 
+Alternatively, it can be deployed on a cluster of its own. Heron currently 
+supports several scheduler options:
+  * [Aurora](schedulers/aurora)
+  * [Local](schedulers/local)
+  * [Mesos](schedulers/mesos)
+  * [Slurm](schedulers/slurm)
+
+* **State Manager** --- The Heron state manager tracks the state of all deployed
+topologies. The topology state includes its logical plan,
+physical plan, and execution state. Heron supports the following state managers:
+  * [Local File System](statemanagers/localfs)
+  * [ZooKeeper](statemanagers/zookeeper)
+
+* **Uploader** --- The Heron uploader distributes the topology jars to the
+servers that run them. Heron supports several uploaders:
+  * [HDFS](uploaders/hdfs)
+  * [Local File System](uploaders/localfs)
+  * [Amazon S3](uploaders/s3)
+
+* **Metrics Sinks** --- Heron collects several metrics during topology execution.
+These metrics can be routed to a sink for storage and offline analysis.
+Currently, Heron supports the following sinks:
+
+  * `File Sink`
+  * `Graphite Sink`
+  * `Scribe Sink`
+
+* **Heron Tracker** --- The Tracker serves as a gateway for exploring topologies.
+It exposes a REST API for exploring the logical and physical plans of topologies and
+for fetching their metrics.
+
+* **Heron UI** --- The UI provides the ability to find and explore topologies visually.
+It displays the DAG of the topology and how the DAG is mapped to physical containers
+running in the cluster. Furthermore, it lets you view logs, take heap dumps and memory
+histograms, view metrics, and more.
diff --git a/website/content/docs/operators/deployment/schedulers/aurora.md b/website/content/docs/operators/deployment/schedulers/aurora.md
index 15b0577..46218ad 100644
--- a/website/content/docs/operators/deployment/schedulers/aurora.md
+++ b/website/content/docs/operators/deployment/schedulers/aurora.md
@@ -1,5 +1,5 @@
 ---
-title: Aurora
+title: Aurora Cluster
 ---
 
 Heron supports deployment on [Apache Aurora](http://aurora.apache.org/) out of
@@ -9,7 +9,7 @@
 ## How Heron on Aurora Works
 
 Aurora doesn't have a Heron scheduler *per se*. Instead, when a topology is
-submitted to Heron, `heron-cli` interacts with Aurora to automatically stand up
+submitted to Heron, the `heron` CLI interacts with Aurora to automatically deploy
 all the [components](../../../../concepts/architecture) necessary to [manage
 topologies](../../../heron-cli).
 
@@ -21,59 +21,63 @@
 
 ## Hosting Binaries
 
-In order to deploy Heron, your Aurora cluster will need to have access to a
-variety of Heron binaries, which can be hosted wherever you'd like, so long as
+To deploy Heron, the Aurora cluster needs access to the
+Heron core binary, which can be hosted wherever you'd like, so long as
 it's accessible to Aurora (for example in [Amazon
-S3](https://aws.amazon.com/s3/) or using a local blob storage solution). You can
-build those binaries using the instructions in [Creating a New Heron
-Release](../../../../developers/compiling#building-a-full-release-package).
+S3](https://aws.amazon.com/s3/) or using a local blob storage solution). You 
+can download the core binary from GitHub or build it using the instructions
+in [Creating a New Heron Release](../../../../developers/compiling#building-a-full-release-package).
 
 Once your Heron binaries are hosted somewhere that is accessible to Aurora, you
 should run tests to ensure that Aurora can successfully fetch them.
 
-**Note**: Setting up a Heron cluster involves changing a series of configuration
-files in the actual `heron` repository itself (documented in the sections
-below). You should build a Heron release for deployment only *after* you've made
-those changes.
+## Aurora Scheduler Configuration
+
+To configure Heron to use the Aurora scheduler, modify the `scheduler.yaml`
+config file specific to the Heron cluster. The following must be specified
+for each cluster:
+
+* `heron.class.scheduler` --- Indicates the class to be loaded for the Aurora scheduler.
+You should set this to `com.twitter.heron.scheduler.aurora.AuroraScheduler`.
+
+* `heron.class.launcher` --- Specifies the class to be loaded for launching and
+submitting topologies. To configure the Aurora launcher, set this to
+`com.twitter.heron.scheduler.aurora.AuroraLauncher`.
+
+* `heron.package.core.uri` --- Indicates the location of the Heron core binary package.
+The scheduler uses this URI to download the core package to the working directory.
+
+* `heron.directory.sandbox.java.home` --- Specifies the Java home to
+be used when running topologies in the containers.
+
+* `heron.scheduler.is.service` --- Indicates whether the scheduler
+runs as a service. In the case of Aurora, it should be set to `False`.
+
+### Example Aurora Scheduler Configuration
+
+```yaml
+# scheduler class for distributing the topology for execution
+heron.class.scheduler: com.twitter.heron.scheduler.aurora.AuroraScheduler
+
+# launcher class for submitting and launching the topology
+heron.class.launcher: com.twitter.heron.scheduler.aurora.AuroraLauncher
+
+# location of the core package
+heron.package.core.uri: file:///vagrant/.herondata/dist/heron-core-release.tar.gz
+
+# location of java - pick it up from shell environment
+heron.directory.sandbox.java.home: /usr/lib/jvm/java-1.8.0-openjdk-amd64/
+
+# Invoke the IScheduler as a library directly
+heron.scheduler.is.service: False
+```
 
 ## Working with Topologies
 
-Once you've set up ZooKeeper and generated an Aurora-accessible Heron release,
-any machine that has the `heron-cli` tool can be used to manage Heron topologies
-(i.e. can submit topologies, activate and deactivate them, etc.).
+After setting up ZooKeeper and generating an Aurora-accessible Heron core binary
+release, any machine that has the `heron` CLI tool can be used to manage Heron
+topologies (i.e. it can submit topologies, activate and deactivate them, etc.).
 
-The most important thing at this stage is to ensure that `heron-cli` is synced
-across all machines that will be [working with topologies](../../../heron-cli).
-Once that has been ensured, you can use Aurora as a scheduler by specifying the
-proper configuration and configuration loader when managing topologies.
-
-### Specifying a Configuration
-
-You'll need to specify a scheduler configuration at all stages of a topology's
-[lifecycle](../../../../concepts/topologies#topology-lifecycle) by using the
-`--config-file` flag to point at a configuration file. There is a default Aurora
-configuration located in the Heron repository at
-`heron/cli/src/python/aurora_scheduler.conf`. You can use this file as is,
-modify it, or use an entirely different configuration.
-
-Here's an example CLI command using this configuration:
-
-```bash
-$ heron-cli activate \
-    # Set scheduler overrides, point to a topology JAR, etc.
-    --config-file=/Users/janedoe/heron/heron/cli/src/python/aurora_scheduler.conf` \
-    # Other parameters
-```
-
-### Specifying the Configuration Loader
-
-You can use Heron's Aurora configuration loader by setting the
-`--config-loader` flag to `com.twitter.heron.scheduler.aurora.AuroraConfigLoader`.
-Here's an example CLI command:
-
-```bash
-$ heron-cli submit \
-    # Set scheduler overrides, point to a topology JAR, etc.
-    --config-loader=com.twitter.heron.scheduler.aurora.AuroraConfigLoader \
-    # Other parameters
-```
+The most important thing at this stage is to ensure that the `heron` CLI is available
+on all machines. Once the CLI is available, Aurora can be enabled as the scheduler
+by specifying the proper configuration when managing topologies.
diff --git a/website/content/docs/operators/deployment/schedulers/local.md b/website/content/docs/operators/deployment/schedulers/local.md
index 9285289..9845da6 100644
--- a/website/content/docs/operators/deployment/schedulers/local.md
+++ b/website/content/docs/operators/deployment/schedulers/local.md
@@ -1,5 +1,5 @@
 ---
-title: Local Deployment
+title: Local Cluster
 ---
 
 In addition to out-of-the-box schedulers for [Mesos](../mesos) and
@@ -8,103 +8,61 @@
 experimenting with Heron's features, testing a wide variety of possible cluster
 events, and so on.
 
-When deploying locally, you can use one of two coordination mechanisms:
+One of two state managers can be used for coordination when deploying locally:
 
-1. A locally-running [ZooKeeper](#zookeeper)
-2. [The local filesystem](#local-filesystem)
+* [ZooKeeper](../../statemanagers/zookeeper)
+* [Local File System](../../statemanagers/localfs)
 
 **Note**: Deploying a Heron cluster locally is not to be confused with Heron's
-[simulator mode](../../../../developers/simulator-mode). Simulator mode enables you to run
-topologies in a cluster-agnostic JVM process for the purpose of development and
-debugging, while the local scheduler stands up a Heron cluster on a single
-machine.
+[simulator mode](../../../../developers/simulator-mode). Simulator mode enables 
+you to run topologies in a cluster-agnostic JVM process for the purpose of 
+development and debugging, while the local scheduler stands up a Heron cluster 
+on a single machine.
 
 ## How Local Deployment Works
 
-Using the local scheduler is similar to deploying Heron on other systems in
-that you use the [Heron CLI](../../../heron-cli) to manage topologies. The
-difference is in the configuration and [scheduler
-overrides](../../../heron-cli#submitting-a-topology) that you provide when
-you [submit a topology](../../../heron-cli#submitting-a-topology).
+Using the local scheduler is similar to deploying Heron on other schedulers.
+The [Heron CLI](../../../heron-cli) is used to deploy and manage topologies
+just as it would be with a distributed scheduler. The main difference is in
+the configuration.
 
-### Required Scheduler Overrides
+## Local Scheduler Configuration
 
-For the local scheduler, you'll need to provide the following scheduler
-overrides:
+To configure Heron to use the local scheduler, specify the following in the `scheduler.yaml`
+config file:
 
-* `heron.local.working.directory` &mdash; The local directory to be used as
-  Heron's sandbox directory.
-* `state.manager.class` &mdash; This will depend on whether you want to use
-  [ZooKeeper](#zookeeper) or the [local filesystem](#local-filesystem) for
-  coordination.
+* `heron.class.scheduler` --- Indicates the class to be loaded for the local scheduler.
+Set this to `com.twitter.heron.scheduler.local.LocalScheduler`.
 
-For info on scheduler overrides, see the documentation on using the [Heron
-CLI](../../../heron-cli).
+* `heron.class.launcher` --- Specifies the class to be loaded for launching
+topologies. Set this to `com.twitter.heron.scheduler.local.LocalLauncher`.
 
-### Optional Scheduler Overrides
+* `heron.scheduler.local.working.directory` --- Provides the working
+directory for the topology. The working directory is essentially a scratch pad where
+topology jars, Heron core release binaries, topology logs, etc. are generated and kept.
 
-The `heron.core.release.package` parameter is optional. It specifies the path to
-a local TAR file for the `core` component of the desired Heron release. Assuming
-that you've built a full [Heron release](../../../../developers/compiling#building-a-full-release-package), this TAR will be
-located by default at `bazel-genfiles/release/heron-core-unversioned.tar`,
-relative to the root of your Heron repository. If you set
-`heron.core.release.package`, Heron will update all local binaries in Heron's
-working directory; if you don't set `heron.core.release.package`, Heron will use
-the binaries already contained in Heron's working directory.
+* `heron.package.core.uri` --- Indicates the location of the Heron core binary package.
+The local scheduler uses this URI to download the core package to the working directory.
 
-### CLI Flags
+* `heron.directory.sandbox.java.home` --- Specifies the Java home to
+be used when running topologies in the containers. Set it to `${JAVA_HOME}` to
+use the value of the `$JAVA_HOME` shell environment variable.
 
-In addition to setting scheduler overrides, you'll need to set the following
-[CLI flags](../../../heron-cli):
+### Example Local Scheduler Configuration
 
-* `--config-file` &mdash; This flag needs to point to the `local_scheduler.conf`
-  file in `heron/cli/src/python/local_scheduler.conf`.
-* `--config-loader` &mdash; You should set this to
-  `com.twitter.heron.scheduler.util.DefaultConfigLoader`.
+```yaml
+# scheduler class for distributing the topology for execution
+heron.class.scheduler: com.twitter.heron.scheduler.local.LocalScheduler
 
-## ZooKeeper
+# launcher class for submitting and launching the topology
+heron.class.launcher: com.twitter.heron.scheduler.local.LocalLauncher
 
-To run the local scheduler using ZooKeeper for coordination, you'll need to set
-the following scheduler overrides:
+# working directory for the topologies
+heron.scheduler.local.working.directory: ${HOME}/.herondata/topologies/${CLUSTER}/${TOPOLOGY}
 
-* `state.manager.class` should be set to
-  `com.twitter.heron.state.curator.CuratorStateManager`
-* `zk.connection.string` should specify a ZooKeeper connection string, such as
-  `localhost:2818`.
-* `state.root.address` should specify a root
-  [ZooKeeper node](https://zookeeper.apache.org/doc/trunk/zookeeperOver.html#Nodes+and+ephemeral+nodes)
-  for Heron, such as `/heron`.
+# location of the core package
+heron.package.core.uri: file://${HERON_DIST}/heron-core.tar.gz
 
-### Example Submission Command for ZooKeeper
-
-```bash
-$ heron-cli submit \
-    "heron.local.working.directory=/Users/janedoe/heron-sandbox \
-    state.manager.class=com.twitter.heron.state.curator.CuratorStateManager \
-    zk.connection.string=localhost:2181 \
-    state.root.address=/heron" \
-    /Users/janedoe/topologies/topology1.jar \
-    biz.acme.topologies.TestTopology \
-    --config-file=/Users/janedoe/heron/cli/src/python/local_scheduler.conf \
-    --config-loader=com.twitter.heron.scheduler.util.DefaultConfigLoader     
-```
-
-## Local Filesystem
-
-To run the local scheduler using your machine's filesystem for coordination,
-you'll need to set the following scheduler override:
-
-* `state.manager.class` should be set to
-  `com.twitter.heron.state.localfile.LocalFileStateManager`.
-
-### Example Submission Command for Local Filesystem
-
-```bash
-$ heron-cli submit \
-    "heron.local.working.directory=/Users/janedoe/heron-sandbox \
-    state.manager.class=com.twitter.heron.state.localfile.LocalFileStateManager" \
-    /Users/janedoe/topologies/topology1.jar \
-    biz.acme.topologies.TestTopology \
-    --config-file=/Users/janedoe/heron/cli/src/python/local_scheduler.conf \
-    --config-loader=com.twitter.heron.scheduler.util.DefaultConfigLoader    
+# location of java - pick it up from shell environment
+heron.directory.sandbox.java.home: ${JAVA_HOME}
 ```
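+
+Alongside `scheduler.yaml`, a local cluster also needs a `statemgr.yaml`. The following is a
+minimal sketch for the [Local File System](../../statemanagers/localfs) state manager, based on
+the parameters described in that document; the root path shown here is only an example.
+
+```yaml
+# local file system state manager class
+heron.class.state.manager: com.twitter.heron.statemgr.localfs.LocalFileSystemStateManager
+
+# connection string is LOCALMODE since it is always localhost
+heron.statemgr.connection.string: LOCALMODE
+
+# root path in the local file system where state is stored
+heron.statemgr.root.path: ${HOME}/.herondata/repository/state/${CLUSTER}
+
+# create the directories under the root path, if they do not exist
+heron.statemgr.localfs.is.initialize.file.tree: True
+```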
diff --git a/website/content/docs/operators/deployment/schedulers/mesos.md b/website/content/docs/operators/deployment/schedulers/mesos.md
index d89151b..ea071de 100644
--- a/website/content/docs/operators/deployment/schedulers/mesos.md
+++ b/website/content/docs/operators/deployment/schedulers/mesos.md
@@ -1,9 +1,9 @@
 ---
-title: Mesos
+title: Mesos (Experimental)
 ---
 
-Heron supports deployment on [Apache Mesos](http://mesos.apache.org/) out of
-the box. You can also run Heron on Mesos using [Apache Aurora](../aurora) as
+Heron supports deployment on [Apache Mesos](http://mesos.apache.org/). 
+Heron can also run on Mesos using [Apache Aurora](../aurora) as
 a scheduler or using a [local scheduler](../local).
 
 ## How Heron on Mesos Works
diff --git a/website/content/docs/operators/deployment/schedulers/slurm.md b/website/content/docs/operators/deployment/schedulers/slurm.md
index 9a6a99d..e10c4ef 100644
--- a/website/content/docs/operators/deployment/schedulers/slurm.md
+++ b/website/content/docs/operators/deployment/schedulers/slurm.md
@@ -1,5 +1,5 @@
 ---
-title: Slurm
+title: Slurm Cluster (Experimental)
 ---
 
 In addition to out-of-the-box schedulers for [Mesos](../mesos) and
@@ -8,56 +8,65 @@
 
 ## How Slurm Deployment Works
 
-Using the Slurm scheduler is similar to deploying Heron on other systems in
-that you use the [Heron CLI](../../../heron-cli) to manage topologies. The
-difference is in the configuration and [scheduler
-overrides](../../../heron-cli#submitting-a-topology) that you provide when
-you [submit a topology](../../../heron-cli#submitting-a-topology).
+Using the Slurm scheduler is similar to deploying Heron on other systems. The
+[Heron CLI](../../../heron-cli) is used to deploy and manage topologies just as with other
+schedulers. The main difference is in the configuration.
 
-A set of default configurations are provided with Heron in the `conf/slurm` directory.
-The default configurations use the file system based state manager.
+A set of default configuration files is provided with Heron in the
+[conf/slurm](https://github.com/twitter/heron/tree/master/heron/config/src/yaml/conf/slurm) directory.
+The default configuration uses the local file system based state manager. The local
+file system can be mounted using NFS so that the state is shared across the nodes.
 
-When a Heron topology is submitted, the Slurm scheduler allocates the nodes required to
-run the job and starts the Heron processes in those nodes. It uses a `slurm.sh` script found in
-`conf/slum` directory to submit the topoloy as a batch job to the slurm scheduler.
+When a Heron topology is submitted, the Slurm scheduler allocates the nodes required to
+run the job and starts the Heron processes on those nodes. It uses the `slurm.sh` script found in the
+[conf/slurm](https://github.com/twitter/heron/tree/master/heron/config/src/yaml/conf/slurm)
+directory to submit the topology as a batch job to the Slurm scheduler.
 
-### Useful Configuration files
+## Slurm Scheduler Configuration
 
-These are some of the useful configuration files found in `conf/slurm` directory.
+To configure Heron to use the Slurm scheduler, specify the following in the `scheduler.yaml`
+config file:
 
-#### scheduler.yaml
+* `heron.class.scheduler` --- Indicates the class to be loaded for the Slurm scheduler.
+Set this to `com.twitter.heron.scheduler.slurm.SlurmScheduler`.
 
-This configuration file specifies the scheduler implementation to use and
-properties for that scheduler.
+* `heron.class.launcher` --- Specifies the class to be loaded for launching
+topologies. Set this to `com.twitter.heron.scheduler.slurm.SlurmLauncher`.
 
-* `heron.local.working.directory` &mdash; The shared directory to be used as
-  Heron's sandbox directory.
+* `heron.scheduler.local.working.directory` --- The shared directory to be used as
+Heron's sandbox directory.
 
-#### statemgr.yaml
+* `heron.package.core.uri` --- Indicates the location of the Heron core binary package.
+The scheduler uses this URI to download the core package to the working directory.
 
-This is the configuration for the state manager.
+* `heron.directory.sandbox.java.home` --- Specifies the Java home to
+be used when running topologies in the containers. Set it to `${JAVA_HOME}` to use
+the value of the `$JAVA_HOME` shell environment variable.
 
-* `heron.class.state.manager` &mdash; Specifies the state manager.
-   By default it uses the local state manager. Refer the `conf/localzk/statemgr.yaml` for zookeeper
-   based state manager configurations.
+* `heron.scheduler.is.service` --- Indicates whether the scheduler
+runs as a service. In the case of Slurm, it should be set to `False`.
 
-#### slurm.sh
+### Example Slurm Scheduler Configuration
 
-This is the script used by the scheduler to submit the Heron job to the Slurm scheduler. You can
-change this file for specific slurm settings like time, account.     
+```yaml
+# scheduler class for distributing the topology for execution
+heron.class.scheduler: com.twitter.heron.scheduler.slurm.SlurmScheduler
 
-### Example Submission Command
+# launcher class for submitting and launching the topology
+heron.class.launcher: com.twitter.heron.scheduler.slurm.SlurmLauncher
 
-Here is an example command to submit the MultiSpoutExclamationTopology that comes with Heron.
+# working directory for the topologies
+heron.scheduler.local.working.directory: ${HOME}/.herondata/topologies/${CLUSTER}/${TOPOLOGY}
 
-```bash
-$ heron submit slurm HERON_HOME/heron/examples/heron-examples.jar com.twitter.heron.examples.MultiSpoutExclamationTopology Name    
+# location of java - pick it up from shell environment
+heron.directory.sandbox.java.home: ${JAVA_HOME}
+
+# Invoke the IScheduler as a library directly
+heron.scheduler.is.service: False
 ```
 
-## Example Kill Command
-
-To kill the topology you can use the kill command with the cluster name and topolofy name.
-
-```bash
-$ heron kill cluster_name Topology_name
-```
+## Slurm Script `slurm.sh`
+
+The `slurm.sh` script is used by the scheduler to submit the Heron job to Slurm.
+Edit this file to set specific Slurm settings such as time limits and accounts. The script and `scheduler.yaml`
+must be included with the other cluster configuration files.
diff --git a/website/content/docs/operators/deployment/statemanagers/localfs.md b/website/content/docs/operators/deployment/statemanagers/localfs.md
index 89c2aa6..401be01 100644
--- a/website/content/docs/operators/deployment/statemanagers/localfs.md
+++ b/website/content/docs/operators/deployment/statemanagers/localfs.md
@@ -14,17 +14,17 @@
 `statemgr.yaml` config file specific for the Heron cluster. You'll
 need to specify the following for each cluster:
 
-* `heron.class.state.manager` &mdash; Indicates the class to be loaded for local file system
+* `heron.class.state.manager` --- Indicates the class to be loaded for local file system
 state manager. You should set this to `com.twitter.heron.statemgr.localfs.LocalFileSystemStateManager`
 
-* `heron.statemgr.connection.string` &mdash; This should be `LOCALMODE` since it always localhost.
+* `heron.statemgr.connection.string` --- This should be `LOCALMODE` since it is always localhost.
 
-* `heron.statemgr.root.path` &mdash; The root path in the local file system where state information
+* `heron.statemgr.root.path` --- The root path in the local file system where state information
 is stored.  We recommend providing Heron with an exclusive directory; if you do not, make sure that
 the following sub-directories are unused: `/tmasters`, `/topologies`, `/pplans`, `/executionstate`,
 `/schedulers`.
 
-* `heron.statemgr.localfs.is.initialize.file.tree` &mdash; Indicates whether the nodes under root
+* `heron.statemgr.localfs.is.initialize.file.tree` --- Indicates whether the nodes under root
 `/tmasters`, `/topologies`, `/pplans`, `/executionstate`, and `/schedulers` need to be created if they
 are not found. Set it to `True` if you would like Heron to create those directories. If those
 directories are already there, set it to `False`. The absence of this configuration implies `True`.
diff --git a/website/content/docs/operators/deployment/statemanagers/zookeeper.md b/website/content/docs/operators/deployment/statemanagers/zookeeper.md
index 66b89b5..ad38505 100644
--- a/website/content/docs/operators/deployment/statemanagers/zookeeper.md
+++ b/website/content/docs/operators/deployment/statemanagers/zookeeper.md
@@ -23,29 +23,29 @@
 `statemgr.yaml` config file specific for the Heron cluster. You'll
 need to specify the following for each cluster:
 
-* `heron.class.state.manager` &mdash; Indicates the class to be loaded for managing
+* `heron.class.state.manager` --- Indicates the class to be loaded for managing
 the state in ZooKeeper and this class is loaded using reflection. You should set this
 to `com.twitter.heron.statemgr.zookeeper.curator.CuratorStateManager`
 
-* `heron.statemgr.connection.string` &mdash; The host IP address and port to connect to ZooKeeper
+* `heron.statemgr.connection.string` --- The host IP address and port to connect to ZooKeeper
 cluster, e.g. "127.0.0.1:2181".
 
-* `heron.statemgr.root.path` &mdash; The root ZooKeeper node to be used by Heron. We recommend
+* `heron.statemgr.root.path` --- The root ZooKeeper node to be used by Heron. We recommend
 providing Heron with an exclusive root node; if you do not, make sure that the following child
 nodes are unused: `/tmasters`, `/topologies`, `/pplans`, `/executionstate`, `/schedulers`.
 
-* `heron.statemgr.zookeeper.is.initialize.tree` &mdash; Indicates whether the nodes under ZooKeeper
+* `heron.statemgr.zookeeper.is.initialize.tree` --- Indicates whether the nodes under ZooKeeper
 root `/tmasters`, `/topologies`, `/pplans`, `/executionstate`, and `/schedulers` need to be created
 if they are not found. Set it to `True` if you would like Heron to create those nodes. If those
 nodes are already there, set it to `False`. The absence of this configuration implies `True`.
 
-* `heron.statemgr.zookeeper.session.timeout.ms` &mdash; Specifies how much time in milliseconds
+* `heron.statemgr.zookeeper.session.timeout.ms` --- Specifies how much time in milliseconds
 to wait before declaring the ZooKeeper session is dead.
 
-* `heron.statemgr.zookeeper.connection.timeout.ms` &mdash; Specifies how much time in milliseconds
+* `heron.statemgr.zookeeper.connection.timeout.ms` --- Specifies how much time in milliseconds
 to wait before the connection to ZooKeeper is dead.
 
-* `heron.statemgr.zookeeper.retry.count` &mdash; Count of the number of retry attempts to connect
+* `heron.statemgr.zookeeper.retry.count` --- Count of the number of retry attempts to connect
 to ZooKeeper
 
 * `heron.statemgr.zookeeper.retry.interval.ms`: Time in milliseconds to wait between each retry
diff --git a/website/content/docs/operators/deployment/uploaders/hdfs.md b/website/content/docs/operators/deployment/uploaders/hdfs.md
index 02878fc..d264d35 100644
--- a/website/content/docs/operators/deployment/uploaders/hdfs.md
+++ b/website/content/docs/operators/deployment/uploaders/hdfs.md
@@ -17,13 +17,13 @@
 You can make Heron use HDFS uploader by modifying the `uploader.yaml` config file specific
 for the Heron cluster. You'll need to specify the following for each cluster:
 
-* `heron.class.uploader` &mdash; Indicate the uploader class to be loaded. You should set this
+* `heron.class.uploader` --- Indicates the uploader class to be loaded. You should set this
 to `com.twitter.heron.uploader.hdfs.HdfsUploader`
 
-* `heron.uploader.hdfs.config.directory` &mdash; Specifies the directory of the config files
+* `heron.uploader.hdfs.config.directory` --- Specifies the directory of the config files
 for hadoop. This is used by hadoop client to upload the topology jar
 
-* `heron.uploader.hdfs.topologies.directory.uri` &mdash; URI of the directory name for uploading
+* `heron.uploader.hdfs.topologies.directory.uri` --- URI of the directory name for uploading
 topology jars. The name of the directory should be unique per cluster, if they are sharing the
 storage. In those cases, you could use the Heron environment variable `${CLUSTER}` that will be
 substituted by cluster name for distinction.
diff --git a/website/content/docs/operators/deployment/uploaders/localfs.md b/website/content/docs/operators/deployment/uploaders/localfs.md
index 52599a0..4e9e2a9 100644
--- a/website/content/docs/operators/deployment/uploaders/localfs.md
+++ b/website/content/docs/operators/deployment/uploaders/localfs.md
@@ -21,10 +21,10 @@
 `uploader.yaml` config file specific for the Heron cluster. You'll need to specify
 the following for each cluster:
 
-* `heron.class.uploader` &mdash; Indicate the uploader class to be loaded. You should set this
+* `heron.class.uploader` --- Indicates the uploader class to be loaded. You should set this
 to `com.twitter.heron.uploader.localfs.LocalFileSystemUploader`
 
-* `heron.uploader.localfs.file.system.directory` &mdash; Provides the name of the directory where
+* `heron.uploader.localfs.file.system.directory` --- Provides the name of the directory where
 the topology jar should be uploaded. The name of the directory should be unique per cluster
 You could use the Heron environment variables `${CLUSTER}` that will be substituted by cluster
 name.
diff --git a/website/content/docs/operators/deployment/uploaders/s3.md b/website/content/docs/operators/deployment/uploaders/s3.md
index a40f26c..64864d5 100644
--- a/website/content/docs/operators/deployment/uploaders/s3.md
+++ b/website/content/docs/operators/deployment/uploaders/s3.md
@@ -12,16 +12,16 @@
 You can make Heron use S3 uploader by modifying the `uploader.yaml` config file specific
 for the Heron cluster. You'll need to specify the following for each cluster:
 
-* `heron.class.uploader` &mdash; Indicate the uploader class to be loaded. You should set this
+* `heron.class.uploader` --- Indicates the uploader class to be loaded. You should set this
 to `com.twitter.heron.uploader.s3.S3Uploader`
 
-* `heron.uploader.s3.bucket` &mdash; Specifies the S3 bucket where the topology jar should be
+* `heron.uploader.s3.bucket` --- Specifies the S3 bucket where the topology jar should be
 uploaded.
 
-* `heron.uploader.s3.access_key` &mdash; Specify the access key of the AWS account that has
+* `heron.uploader.s3.access_key` --- Specifies the access key of the AWS account that has
 write access to the bucket
 
-* `heron.uploader.s3.secret_key` &mdash; Specify the secret access of the AWS account that has
+* `heron.uploader.s3.secret_key` --- Specifies the secret access key of the AWS account that has
 write access to the bucket
 
 ### Example S3 Uploader Configuration
diff --git a/website/content/docs/operators/heron-cli.md b/website/content/docs/operators/heron-cli.md
index 11372b0..c010d3d 100644
--- a/website/content/docs/operators/heron-cli.md
+++ b/website/content/docs/operators/heron-cli.md
@@ -37,12 +37,12 @@
 All topology management commands (`submit`, `activate`, `deactivate`,
 `restart`, and `kill`) take the following required arguments:
 
-* `cluster` &mdash; The name of the cluster where the command needs to be executed.
+* `cluster` --- The name of the cluster where the command needs to be executed.
 
-* `role` &mdash; This represents the user or the group depending on deployment.
+* `role` --- This represents the user or the group depending on deployment.
   If not provided, it defaults to the unix user.
 
-* `env` &mdash; This is a tag for including additional information (e.g) a
+* `env` --- This is a tag for including additional information, e.g. a
    topology can be tagged as PROD or DEVEL to indicate whether it is in production
    or development. If `env` is not provided, it is given a value `default`
 
@@ -56,16 +56,16 @@
 CLI supports a common set of optional flags for all topology management commands
 (`submit`, `activate`, `deactivate`, `restart`, and `kill`):
 
-* `--config-path` &mdash; Every heron cluster must provide a few configuration
+* `--config-path` --- Every Heron cluster must provide a few configuration
   files that are kept under a directory named after the cluster. By default,
   when a cluster is provided in the command, it searches the `conf` directory
   for a directory with the cluster name. This flag enables you to specify a
   non standard directory to search for the cluster directory.
 
-* `--config-property` &mdash; Heron supports several configuration parameters
+* `--config-property` --- Heron supports several configuration parameters
   that can be overridden. These parameters are specified in the form of `key=value`.
 
-* `--verbose` &mdash; When this flag is provided, `heron` CLI prints logs
+* `--verbose` --- When this flag is provided, `heron` CLI prints logs
   that provide detailed information about the execution.
 
 Below is an example topology management command that uses one of these flags:
@@ -102,18 +102,18 @@
 
 Arguments of the `submit` command:
 
-* **cluster/[role]/[env]** &mdash; The cluster where topology needs to be submitted,
+* **cluster/[role]/[env]** --- The cluster where the topology needs to be submitted,
   optionally taking the role and environment. For example,`local/ads/PROD` or just `local`
 
-* **topology-file-name** &mdash; The path of the file in which you've packaged the
+* **topology-file-name** --- The path of the file in which you've packaged the
   topology's code. For Java topologies this will be a `.jar` file; for
   topologies in other languages (not yet supported), this could be a
   `.tar` file. For example, `/path/to/topology/my-topology.jar`
 
-* **topology-class-name** &mdash; The name of the class containing the `main` function
+* **topology-class-name** --- The name of the class containing the `main` function
   for the topology. For example, `com.example.topologies.MyTopology`
 
-* **topology-args** (optional) &mdash; Arguments specific to the topology.
+* **topology-args** (optional) --- Arguments specific to the topology.
   You will need to supply additional args only if the `main` function for your
   topology requires them.
 
@@ -157,10 +157,10 @@
 
 Arguments of the `activate` command:
 
-* **cluster/[role]/[env]** &mdash; The cluster where topology needs to be submitted,
+* **cluster/[role]/[env]** --- The cluster where the topology needs to be submitted,
   optionally taking the role and environment. For example, `local/ads/PROD` or just `local`
 
-* **topology-name**  &mdash; The name of the already-submitted topology that you'd
+* **topology-name**  --- The name of the already-submitted topology that you'd
   like to activate.
 
 ### Example Topology Activation Command
@@ -191,10 +191,10 @@
 
 Arguments of the `deactivate` command:
 
-* **cluster/[role]/[env]** &mdash; The cluster where topology needs to be submitted,
+* **cluster/[role]/[env]** --- The cluster where the topology needs to be submitted,
   optionally taking the role and environment. For example, `local/ads/PROD` or just `local`
 
-* **topology-name** &mdash; The name of the topology that you'd like to deactivate.
+* **topology-name** --- The name of the topology that you'd like to deactivate.
 
 ## Restarting a Topology
 
@@ -218,12 +218,12 @@
 
 Arguments of the `restart` command:
 
-* **cluster/[role]/[env]** &mdash; The cluster where topology needs to be submitted,
+* **cluster/[role]/[env]** --- The cluster where the topology needs to be submitted,
   optionally taking the role and environment. For example, `local/ads/PROD` or just `local`
 
-* **topology-name** &mdash; The name of the topology that you'd like to restart.
+* **topology-name** --- The name of the topology that you'd like to restart.
 
-* **container-id** (optional) &mdash; This enables you to specify the container ID to be
+* **container-id** (optional) --- This enables you to specify the container ID to be
   restarted if you want to restart only a specific container of the topology.
 
 ### Example Topology Restart Command
@@ -244,11 +244,11 @@
 
 Arguments of the `kill` command:
 
-* **cluster/[role]/[env]** &mdash; The cluster where topology needs to be submitted,
+* **cluster/[role]/[env]** --- The cluster where the topology needs to be submitted,
   optionally taking the role and environment.  For example, `local/ads/PROD` or just
   `local`
 
-* **topology-name** &mdash; The name of the topology that you'd like to kill.
+* **topology-name** --- The name of the topology that you'd like to kill.
 
 ### Example Topology Kill Command
 
diff --git a/website/content/docs/operators/heron-tracker.md b/website/content/docs/operators/heron-tracker.md
index 44d1f24..dc1587e 100644
--- a/website/content/docs/operators/heron-tracker.md
+++ b/website/content/docs/operators/heron-tracker.md
@@ -33,13 +33,13 @@
 
 All Heron Tracker endpoints return a JSON object with the following information:
 
-* `status` &mdash; One of the following: `success`, `failure`.
-* `executiontime` &mdash; The time it took to return the HTTP result, in seconds.
-* `message` &mdash; Some endpoints return special messages in this field for certain
+* `status` --- One of the following: `success`, `failure`.
+* `executiontime` --- The time it took to return the HTTP result, in seconds.
+* `message` --- Some endpoints return special messages in this field for certain
   requests. Often, this field will be an empty string.
-* `result` &mdash; The result payload of the request. The contents will depend on
+* `result` --- The result payload of the request. The contents will depend on
   the endpoint.
-* `version` &mdash; The Heron release version used to build the currently running
+* `version` --- The Heron release version used to build the currently running
   Tracker executable.
 
 ## Endpoints
@@ -79,7 +79,7 @@
 
 #### Optional parameters
 
-* `dc` &mdash; The data center. If the data center you provide is valid, the JSON
+* `dc` --- The data center. If the data center you provide is valid, the JSON
   payload will list machines only in that data center. You will receive a 404
   if the data center is invalid. Example:
 
@@ -87,14 +87,14 @@
   $ curl "http://heron-tracker-url/machines?dc=datacenter1"
   ```
 
-* `environ` &mdash; The environment. Must be either `devel` or `prod`, otherwise you
+* `environ` --- The environment. Must be either `devel` or `prod`, otherwise you
   will receive a 404. Example:
 
   ```bash
   $ curl "http://heron-tracker-url/machines?environ=devel"
   ```
 
-* `topology` (repeated) &mdash; Both `dc` and `environ` are required if the
+* `topology` (repeated) --- Both `dc` and `environ` are required if the
   `topology` parameter is present
 
   ```bash
@@ -127,7 +127,7 @@
 
 #### Optional Parameters
 
-* `dc` &mdash; The data center. If the data center you provide is valid, the JSON
+* `dc` --- The data center. If the data center you provide is valid, the JSON
   payload will list topologies only in that data center. You will receive a 404
   if the data center is invalid. Example:
 
@@ -135,7 +135,7 @@
   $ curl "http://heron-tracker-url/topologies?dc=datacenter1"
   ```
 
-* `environ` &mdash; Lists topologies by the environment in which they're running.
+* `environ` --- Lists topologies by the environment in which they're running.
   Example:
 
   ```bash
@@ -169,7 +169,7 @@
 
 #### Optional Parameters
 
-* `dc` &mdash; The data center. If the data center you provide is valid, the JSON
+* `dc` --- The data center. If the data center you provide is valid, the JSON
   payload will list topologies only in that data center. You will receive a 404
   if the data center is invalid. Example:
 
@@ -177,7 +177,7 @@
   $ curl "http://heron-tracker-url/topologies/states?dc=datacenter1"
   ```
 
-* `environ` &mdash; Lists topologies by the environment in which they're running.
+* `environ` --- Lists topologies by the environment in which they're running.
   Example:
 
   ```bash
@@ -206,21 +206,21 @@
 
 Each execution state object lists the following:
 
-* `release_username` &mdash; The user that generated the Heron release for the
+* `release_username` --- The user that generated the Heron release for the
   topology
-* `has_tmaster_location` &mdash; Whether the topology's Topology Master
+* `has_tmaster_location` --- Whether the topology's Topology Master
   currently has a location
-* `release_tag` &mdash; This is a legacy
-* `uploader_version` &mdash; TODO
-* `dc` &mdash; The data center in which the topology is running
-* `jobname` &mdash; TODO
-* `release_version` &mdash; TODO
-* `environ` &mdash; The environment in which the topology is running
-* `submission_user` &mdash; The user that submitted the topology
-* `submission_time` &mdash; The time at which the topology was submitted
+* `release_tag` --- This is a legacy
+* `uploader_version` --- TODO
+* `dc` --- The data center in which the topology is running
+* `jobname` --- TODO
+* `release_version` --- TODO
+* `environ` --- The environment in which the topology is running
+* `submission_user` --- The user that submitted the topology
+* `submission_time` --- The time at which the topology was submitted
   (timestamp in milliseconds)
-* `role` &mdash; TODO
-* `has_physical_plan` &mdash; Whether the topology currently has a physical plan
+* `role` --- TODO
+* `has_physical_plan` --- Whether the topology currently has a physical plan
 
 ***
 
@@ -228,9 +228,9 @@
 
 #### Required Parameters
 
-* `dc` &mdash; The data center in which the topology is running
-* `environ` &mdash; The environment in which the topology is running
-* `topology` &mdash; The name of the topology
+* `dc` --- The data center in which the topology is running
+* `environ` --- The environment in which the topology is running
+* `topology` --- The name of the topology
 
 #### Example Request
 
@@ -242,17 +242,17 @@
 
 The value of the `result` field should list the following:
 
-* `name` &mdash; The name of the topology
-* `tmaster_location` &mdash; Information about the machine on which the topology's
+* `name` --- The name of the topology
+* `tmaster_location` --- Information about the machine on which the topology's
   Topology Master (TM) is running, including the following: the controller port, the
   host, the master port, the stats port, and the ID of the TM.
-* `physical_plan` &mdash; A JSON representation of the physical plan of the
+* `physical_plan` --- A JSON representation of the physical plan of the
   topology, which includes configuration information for the topology as well
   as information about all current spouts, bolts, state managers, and
   instances.
-* `logical_plan` &mdash; A JSON representation of the logical plan of the topology,
+* `logical_plan` --- A JSON representation of the logical plan of the topology,
   which includes information about all of the spouts and bolts in the topology.
-* `execution_state` &mdash; The execution state of the topology. For more on
+* `execution_state` --- The execution state of the topology. For more on
   execution state, see the section regarding the `/topologies/states` endpoint
   above.
 
@@ -265,9 +265,9 @@
 
 #### Required Parameters
 
-* `dc` &mdash; The data center in which the topology is running
-* `environ` &mdash; The environment in which the topology is running
-* `topology` &mdash; The name of the topology
+* `dc` --- The data center in which the topology is running
+* `environ` --- The environment in which the topology is running
+* `topology` --- The name of the topology
 
 #### Example Request
 
@@ -283,15 +283,15 @@
 TODO
 ```
 
-* `spouts` &mdash; A set of JSON objects representing each spout in the topology.
+* `spouts` --- A set of JSON objects representing each spout in the topology.
   The following information is listed for each spout:
-  * `source` &mdash; The source of tuples for the spout.
-  * `version` &mdash; The Heron release version for the topology.
-  * `type` &mdash; The type of the spout, e.g. `kafka`, `kestrel`, etc.
-  * `outputs` &mdash; A list of streams to which the spout outputs tuples.
-* `bolts` &mdash; A set of JSON objects representing each bolt in the topology.
-  * `outputs` &mdash; A list of outputs for the bolt.
-  * `inputs` &mdash; A list of inputs for the bolt.
+  * `source` --- The source of tuples for the spout.
+  * `version` --- The Heron release version for the topology.
+  * `type` --- The type of the spout, e.g. `kafka`, `kestrel`, etc.
+  * `outputs` --- A list of streams to which the spout outputs tuples.
+* `bolts` --- A set of JSON objects representing each bolt in the topology.
+  * `outputs` --- A list of outputs for the bolt.
+  * `inputs` --- A list of inputs for the bolt.
 
 ***
 
@@ -302,9 +302,9 @@
 
 #### Required Parameters
 
-* `dc` &mdash; The data center in which the topology is running
-* `environ` &mdash; The environment
-* `topology` &mdash; The name of the topology
+* `dc` --- The data center in which the topology is running
+* `environ` --- The environment
+* `topology` --- The name of the topology
 
 #### Example Request
 
@@ -324,9 +324,9 @@
 
 #### Required Parameters
 
-* `dc` &mdash; The data center in which the topology is running
-* `environ` &mdash; The environment in which the topology is running
-* `topology` &mdash; The name of the topology
+* `dc` --- The data center in which the topology is running
+* `environ` --- The environment in which the topology is running
+* `topology` --- The name of the topology
 
 #### Example Request
 
@@ -369,10 +369,10 @@
 
 #### Required Parameters
 
-* `dc` &mdash; The data center in which the topology is running
-* `environ` &mdash; The environment in which the topology is running
-* `topology` &mdash; The name of the topology
-* `instance` &mdash; The instance ID of the desired Heron instance
+* `dc` --- The data center in which the topology is running
+* `environ` --- The environment in which the topology is running
+* `topology` --- The name of the topology
+* `instance` --- The instance ID of the desired Heron instance
 
 #### Response
 
@@ -384,10 +384,10 @@
 
 #### Required Parameters
 
-* `dc` &mdash; The data center
-* `environ` &mdash; The environment
-* `topology` &mdash; The name of the topology
-* `instance` &mdash; The instance ID of the desired Heron instance
+* `dc` --- The data center
+* `environ` --- The environment
+* `topology` --- The name of the topology
+* `instance` --- The instance ID of the desired Heron instance
 
 #### Response
 
diff --git a/website/data/toc.yaml b/website/data/toc.yaml
index 906567f..261f561 100644
--- a/website/data/toc.yaml
+++ b/website/data/toc.yaml
@@ -15,31 +15,33 @@
     sublinks:
       - name: Overview
         url: /docs/operators/deployment
+      - name: Configuration
+        url: /docs/operators/deployment/configuration
   - name: State Managers
     sublinks:
-      - name: Setup Zookeeper
+      - name: Zookeeper
         url: /docs/operators/deployment/statemanagers/zookeeper
-      - name: Setup Local FS
+      - name: Local FS
         url: /docs/operators/deployment/statemanagers/localfs
   - name: Uploaders
     sublinks:
-      - name: Setup Local FS
+      - name: Local FS
         url: /docs/operators/deployment/uploaders/localfs
-      - name: Setup HDFS
+      - name: HDFS
         url: /docs/operators/deployment/uploaders/hdfs
-      - name: Setup S3
+      - name: S3
         url: /docs/operators/deployment/uploaders/s3
   - name: Schedulers
     sublinks:
-      - name: Local Cluster
-        url: /docs/operators/deployment/schedulers/local
       - name: Aurora Cluster
         url: /docs/operators/deployment/schedulers/aurora
+      - name: Local Cluster
+        url: /docs/operators/deployment/schedulers/local
       - name: Mesos Cluster
         url: /docs/operators/deployment/schedulers/mesos
       - name: Slurm Cluster
         url: /docs/operators/deployment/schedulers/slurm
-  - name: Configuration
+  - name: System Configuration
     sublinks:
       - name: Overview
         url: /docs/operators/configuration/config-intro
@@ -111,9 +113,5 @@
     sublinks:
       - name: Community
         url: /docs/contributors/community
-      - name: Roadmap
-        url: /docs/contributors/roadmap
       - name: Governance
         url: /docs/contributors/governance
-      - name: Support
-        url: /docs/contributors/support