Joshfischer/update scheduler docs (#3297)

* adding README

* removing template docs.  Updating readme

* adding missed aurora page. first pass of k8s by hand

* k8s by hand

* finishing kubernetes by hand

* finishing k8s with helm

* aurora scheduler complete

* cleaning up missed link

* finishing local cluster scheduler section

* standalone cluster docs finished,

* fixing links for nomad

* updating mesos docs

* updating slurm docs

* yarn docs updated

* fixing broken image link in README
diff --git a/README.md b/README.md
index c6efacb..19d1990 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
 [![Build Status](https://travis-ci.org/apache/incubator-heron.svg?&branch=master)](https://travis-ci.org/apache/incubator-heron)
 
-![logo](website/static/img/HeronTextLogo.png)
+![logo](website2/docs/assets/HeronTextLogo.png)
 
 Heron is a realtime analytics platform developed by Twitter. It has a wide array of architectural improvements over its predecessor.
 
diff --git a/website2/docs/schedulers-aurora-cluster.md b/website2/docs/schedulers-aurora-cluster.md
index 3bd5657..bd0c037 100644
--- a/website2/docs/schedulers-aurora-cluster.md
+++ b/website2/docs/schedulers-aurora-cluster.md
@@ -6,21 +6,21 @@
 
 Heron supports deployment on [Apache Aurora](http://aurora.apache.org/) out of
 the box. A step-by-step guide on how to set up Heron with Apache Aurora locally 
-can be found in [Setting up Heron with Aurora Cluster Locally on Linux](../aurora-local-setup). You can also run Heron on
-a [local scheduler](../local). 
+can be found in [Setting up Heron with Aurora Cluster Locally on Linux](schedulers-aurora-local). You can also run Heron on
+a [local scheduler](schedulers-local). 
 
 ## How Heron on Aurora Works
 
 Aurora doesn't have a Heron scheduler *per se*. Instead, when a topology is
 submitted to Heron, `heron` cli interacts with Aurora to automatically deploy
-all the [components](../../../../concepts/architecture) necessary to [manage
-topologies](../../../heron-cli).
+all the [components](heron-architecture) necessary to [manage
+topologies](user-manuals-heron-cli).
 
 ## ZooKeeper
 
 To run Heron on Aurora, you'll need to set up a ZooKeeper cluster and configure
 Heron to communicate with it. Instructions can be found in [Setting up
-ZooKeeper](../../statemanagers/zookeeper).
+ZooKeeper](state-managers-zookeeper).
 
 ## Hosting Binaries
 
@@ -29,7 +29,7 @@
 it's accessible to Aurora (for example in [Amazon
 S3](https://aws.amazon.com/s3/) or using a local blob storage solution). You
 can download the core binary from github or build it using the instructions
-in [Creating a New Heron Release](../../../../developers/compiling#building-a-full-release-package).
+in [Creating a New Heron Release](compiling-overview#building-all-components).
 
 Command for fetching the binary is in the `heron.aurora` config file. By default it is 
 using a `curl` command to fetch the binary. For example, if the binary is hosted in 
@@ -55,7 +55,7 @@
 the topology to its sandbox. The configuration for an uploader is in the `uploader.yaml` 
 config file. For distributed Aurora deployments, Heron can use `HdfsUploader` or `S3Uploader`. 
 Details on configuring the uploaders can be found in the documentation for the 
-[HDFS](../../uploaders/hdfs) and [S3](../../uploaders/s3) uploaders. 
+[HDFS](uploaders-hdfs) and [S3](uploaders-amazon-s3) uploaders. 
 
 After configuring an uploader, the `heron.aurora` config file needs to be modified accordingly to 
 fetch the topology. 
diff --git a/website2/docs/schedulers-aurora-local.md b/website2/docs/schedulers-aurora-local.md
new file mode 100644
index 0000000..9428d5b
--- /dev/null
+++ b/website2/docs/schedulers-aurora-local.md
@@ -0,0 +1,314 @@
+---
+id: schedulers-aurora-local
+title: Setting up Heron with Aurora Cluster Locally on Linux
+sidebar_label:  Aurora Locally
+---
+
+
+It is possible to set up Heron with a locally running Apache Aurora cluster.
+This is a step-by-step guide on how to configure and set up all the necessary
+components.
+
+## Setting Up an Apache Aurora Cluster Locally
+
+You first need to set up Apache Aurora locally. A more detailed description of the
+following steps can be found in [A local Cluster with Vagrant](http://aurora.apache.org/documentation/latest/getting-started/vagrant/).
+
+### Step 1: Install VirtualBox and Vagrant
+
+Download and install VirtualBox and Vagrant on your machine. If Vagrant is successfully
+installed on your machine, the following command should list several common commands
+for the tool:
+
+```bash
+$ vagrant
+```
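+
+For a quick one-line confirmation you can also print the installed version (the version shown here is illustrative; yours will differ):
+
+```bash
+$ vagrant --version
+Vagrant 2.2.4
+```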
+
+### Step 2: Clone the Aurora repository
+
+You can get the source repository for Aurora with the following command:
+
+```bash
+$ git clone git://git.apache.org/aurora.git
+```
+
+Once the clone is complete, `cd` into the `aurora` folder:
+
+```bash
+$ cd aurora
+```
+
+### Step 3: Start the Local Aurora Cluster
+
+To start the local cluster, all you have to do is execute the following command. It will install all the needed dependencies, such as Apache Mesos and ZooKeeper, in the VM.
+
+```bash
+$ vagrant up
+```
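+
+You can also confirm that the VM is up with `vagrant status`. The output below is the typical shape for a single-machine Vagrantfile; the machine name may differ in your checkout:
+
+```bash
+$ vagrant status
+Current machine states:
+
+default                   running (virtualbox)
+```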
+
+Additionally, to get rid of some of the warning messages emitted during `vagrant up`, execute the following command:
+
+```bash
+$ vagrant plugin install vagrant-vbguest
+```
+
+You can verify that the Aurora cluster is properly running by opening the following links in your web browser, or from the command line as sketched after the list:
+
+* Scheduler - http://192.168.33.7:8081
+* Observer - http://192.168.33.7:1338
+* Mesos Master - http://192.168.33.7:5050
+* Mesos Agent - http://192.168.33.7:5051
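+
+A minimal command-line reachability check against the scheduler endpoint might look like this (it assumes the default Vagrant IP shown above and expects an HTTP status code such as 200):
+
+```bash
+$ curl -s -o /dev/null -w "%{http_code}\n" http://192.168.33.7:8081
+200
+```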
+
+If you go to http://192.168.33.7:8081/scheduler, you can see that the default cluster set up in Aurora is
+named `devcluster`. This will be important when submitting topologies from Heron.
+
+![Heron topology](assets/aurora-local-cluster-start.png)
+
+## Installing Heron within the Cluster VM
+
+Now that the Aurora cluster is set up, you need to install Heron within the cluster VM to get the Heron
+deployment working. Since this is a fresh VM instance, you will have to install basic software such as `unzip` and set
+the `JAVA_HOME` path as an environment variable (just add it to the `.bashrc` file). Once the basics are
+in place, follow the steps below to install Heron in the VM. You can SSH into the VM with the following command:
+
+```bash
+$ vagrant ssh
+```
+
+### Step 1.a: Download the installation script files
+
+You can download the script files that match your Linux distribution from
+https://github.com/apache/incubator-heron/releases/tag/{{% heronVersion %}}
+
+For example, for the {{% heronVersion %}} release, the file you need to download for Ubuntu is the following:
+
+* `heron-install-{{% heronVersion %}}-ubuntu.sh`
+
+The following files are optional; you won't need them for the steps in this guide:
+
+* `heron-api-install-{{% heronVersion %}}-ubuntu.sh`
+* `heron-core-{{% heronVersion %}}-ubuntu.tar.gz`
+
+### Step 1.b: Execute the client and tools shell scripts
+
+
+```bash
+$ chmod +x heron-install-VERSION-PLATFORM.sh
+$ ./heron-install-VERSION-PLATFORM.sh --user
+Heron client installer
+----------------------
+
+Uncompressing......
+Heron is now installed!
+
+Make sure you have "/home/vagrant/bin" in your path.
+```
+
+After this you need to add `/home/vagrant/bin` to your path. You can execute the following command,
+or add the line to the end of the `.bashrc` file, which is more convenient.
+
+```bash
+$ export PATH=$PATH:/home/vagrant/bin
+```
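+
+A sketch of the more convenient persistent variant (it assumes the default `vagrant` user's `~/.bashrc`):
+
+```bash
+# append the PATH export so it survives new shells, then reload the file
+$ echo 'export PATH=$PATH:/home/vagrant/bin' >> ~/.bashrc
+$ source ~/.bashrc
+```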
+
+Install the following packages to make sure that you have all the needed dependencies in the VM.
+You might have to run `sudo apt-get update` before executing the following:
+
+```bash
+$ sudo apt-get install git build-essential automake cmake libtool zip libunwind-setjmp0-dev zlib1g-dev unzip pkg-config -y
+```
+
+## Configuring the State Manager (Apache ZooKeeper)
+
+Since Heron only uses Apache ZooKeeper for coordination, the load on the ZooKeeper
+node is minimal. Because of this it is sufficient to use a single ZooKeeper node, or,
+if you have a ZooKeeper instance running for some other task, you can simply reuse it.
+Since Apache Aurora already uses a ZooKeeper instance, you can use that instance directly
+for Heron's State Manager tasks. First you need to configure Heron to work with
+the ZooKeeper instance. The meaning of each attribute is explained in [Setting Up ZooKeeper
+State Manager](state-managers-zookeeper). Configurations for the State Manager are
+located in the directory `/home/vagrant/.heron/conf/aurora`.
+
+Open the file `statemgr.yaml` using vim (or another text editor you prefer)
+and add/edit it to include the following:
+
+```yaml
+# local state manager class for managing state in a persistent fashion
+heron.class.state.manager: org.apache.heron.statemgr.zookeeper.curator.CuratorStateManager
+
+# local state manager connection string
+heron.statemgr.connection.string:  "127.0.0.1:2181"
+
+# root path under which the state is stored in zookeeper
+heron.statemgr.root.path: "/heronroot"
+
+# create the zookeeper nodes, if they do not exist
+heron.statemgr.zookeeper.is.initialize.tree: True
+
+# timeout in ms to wait before considering zookeeper session is dead
+heron.statemgr.zookeeper.session.timeout.ms: 30000
+
+# timeout in ms to wait before considering zookeeper connection is dead
+heron.statemgr.zookeeper.connection.timeout.ms: 30000
+
+# number of times to retry the zookeeper connection before giving up
+heron.statemgr.zookeeper.retry.count: 10
+
+# duration in ms to wait before the next retry
+heron.statemgr.zookeeper.retry.interval.ms: 10000
+```
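+
+Before moving on, you can optionally check that ZooKeeper is answering on the configured connection string. This uses ZooKeeper's standard `ruok` four-letter command and assumes `netcat` is installed in the VM:
+
+```bash
+$ echo ruok | nc 127.0.0.1 2181
+imok
+```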
+
+## Creating Paths in ZooKeeper
+
+Next you need to create some paths within ZooKeeper, since Heron does not create
+all of them automatically. Because the Aurora installation already installed
+ZooKeeper, you can use the ZooKeeper CLI to create these paths manually.
+
+```bash
+$ sudo /usr/share/zookeeper/bin/zkCli.sh
+```
+
+This will connect to the ZooKeeper instance running locally. Then execute the
+following commands from within the client to create the paths `/heronroot/topologies`
+and `/heron/topologies`. Later, in "Associating the New Aurora Cluster with Heron UI",
+you will see that you only need to create `/heronroot/topologies`, but for now let's
+create both to make sure you don't get any errors when you run things.
+
+```bash
+create /heronroot null
+create /heronroot/topologies null
+```
+
+```bash
+create /heron null
+create /heron/topologies null
+```
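+
+Still inside the same `zkCli.sh` session, you can confirm that the paths exist with `ls` (output is illustrative):
+
+```bash
+ls /heronroot
+[topologies]
+ls /heron
+[topologies]
+```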
+
+## Configuring the Scheduler (Apache Aurora)
+
+Next you need to configure Apache Aurora to be used as the scheduler for our local Heron
+cluster. To do this, edit the `scheduler.yaml` file, which is
+also located in `/home/vagrant/.heron/conf/aurora`, to include the
+following. More information regarding the parameters can be found in [Aurora Cluster](schedulers-aurora-cluster).
+
+```yaml
+# scheduler class for distributing the topology for execution
+heron.class.scheduler: org.apache.heron.scheduler.aurora.AuroraScheduler
+
+# launcher class for submitting and launching the topology
+heron.class.launcher: org.apache.heron.scheduler.aurora.AuroraLauncher
+
+# location of the core package
+heron.package.core.uri: file:///home/vagrant/.heron/dist/heron-core.tar.gz
+
+# location of java in the VM (the default openjdk 8 path)
+heron.directory.sandbox.java.home: /usr/lib/jvm/java-1.8.0-openjdk-amd64/
+
+# Invoke the IScheduler as a library directly
+heron.scheduler.is.service: False
+```
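+
+Since `heron.package.core.uri` above points at the local dist tarball, it is worth a quick check that the file actually exists at that path:
+
+```bash
+$ ls -lh /home/vagrant/.heron/dist/heron-core.tar.gz
+```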
+
+Additionally, edit the `client.yaml` file and change the core URI to keep it consistent:
+
+```yaml
+# location of the core package
+heron.package.core.uri: file:///home/vagrant/.heron/dist/heron-core.tar.gz
+```
+
+### Important Step: Rename the `aurora` Folder to `devcluster`
+
+Next you need to rename the folder `/home/vagrant/.heron/conf/aurora` to
+`/home/vagrant/.heron/conf/devcluster`. This is because the name of your Aurora
+cluster is `devcluster`, as you noted in a previous step. You can do this with the
+following commands:
+
+```bash
+$ cd /home/vagrant/.heron/conf/
+$ mv aurora devcluster
+```
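+
+If the rename worked, the new path should now resolve:
+
+```bash
+$ ls -d /home/vagrant/.heron/conf/devcluster
+/home/vagrant/.heron/conf/devcluster
+```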
+
+## Submitting an Example Topology to the Aurora Cluster
+
+Now you can submit a topology to the Aurora cluster. This can be done with the following command:
+
+```bash
+$ heron submit devcluster/heronuser/devel --config-path ~/.heron/conf/ ~/.heron/examples/heron-api-examples.jar org.apache.heron.examples.api.ExclamationTopology ExclamationTopology
+```
+
+Now you should be able to see the topology in the Aurora UI (http://192.168.33.7:8081/scheduler/heronuser).
+
+![Heron topology](assets/aurora-local-topology-submitted.png)
+
+### Understanding the parameters
+
+Below is a brief explanation of some of the important parameters used in this command. The first
+parameter, `devcluster/heronuser/devel`, defines the cluster, role, and env (env can have the values `prod | devel | test | staging`).
+The cluster is the name of the Aurora cluster, which is `devcluster` in our case. You can use something like your
+name as the role name, and for env you need to choose one of the env values.
+
+`--config-path` points to the config folder. The program will automatically look for a folder with the cluster name.
+This is why you had to rename the Aurora conf folder to `devcluster`.
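+
+When you are done experimenting, the same `cluster/role/env` triple is used to manage the topology. As a sketch, assuming the submission command above, you can deactivate and then kill it like this:
+
+```bash
+$ heron deactivate devcluster/heronuser/devel ExclamationTopology
+$ heron kill devcluster/heronuser/devel ExclamationTopology
+```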
+
+Now that everything is working, you need to perform one last step so that the topologies you can see in the Aurora UI also appear in the Heron UI.
+
+## Associating the New Aurora Cluster with Heron UI
+
+Heron UI displays information that it gets from the Heron Tracker.
+So in order to allow the Heron UI to show Aurora cluster information, you need to modify the configuration of the Heron Tracker
+so that it can identify the Aurora cluster.
+
+Heron Tracker configurations are located at `/home/vagrant/.herontools/conf`; the configuration file is named `heron_tracker.yaml`.
+By default you should see the following in the file:
+
+```yaml
+statemgrs:
+  -
+    type: "file"
+    name: "local"
+    rootpath: "~/.herondata/repository/state/local"
+    tunnelhost: "localhost"
+  -
+    type: "zookeeper"
+    name: "localzk"
+    hostport: "localhost:2181"
+    rootpath: "/heron"
+    tunnelhost: "localhost"
+```
+
+You can see that there are already two entries. Earlier you had to create the path `/heron/topologies` in ZooKeeper
+because of the entry named `localzk` in this file; if you remove that entry, you will not need to create that path.
+Now all you have to do is add a new entry for the Aurora cluster to this file (let's comment out `localzk`).
+The file would then look like below:
+
+```yaml
+statemgrs:
+  -
+    type: "file"
+    name: "local"
+    rootpath: "~/.herondata/repository/state/local"
+    tunnelhost: "localhost"
+  # -
+  #   type: "zookeeper"
+  #   name: "localzk"
+  #   hostport: "localhost:2181"
+  #   rootpath: "/heron"
+  #   tunnelhost: "localhost"
+  -
+    type: "zookeeper"
+    name: "devcluster"
+    hostport: "localhost:2181"
+    rootpath: "/heronroot"
+    tunnelhost: "localhost"
+```
+
+Now you can start the Heron Tracker and then the Heron UI. You will then be able to see the Aurora cluster from the
+Heron UI (http://192.168.33.7:8889/topologies) as below:
+
+```bash
+$ heron-tracker
+$ heron-ui
+```
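+
+If the tracker does not pick up your edits, you can point it at the configuration file explicitly. The `--config-file` flag and the path below assume the default install locations:
+
+```bash
+$ heron-tracker --config-file=/home/vagrant/.herontools/conf/heron_tracker.yaml
+```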
+
+![Heron topology](assets/heron-ui-topology-submitted.png)
diff --git a/website2/docs/schedulers-k8s-by-hand.md b/website2/docs/schedulers-k8s-by-hand.md
index b5a4399..b22bcf8 100644
--- a/website2/docs/schedulers-k8s-by-hand.md
+++ b/website2/docs/schedulers-k8s-by-hand.md
@@ -98,7 +98,7 @@
 
 ### Managing topologies
 
-Once all of the [components](#components) have been successfully started up, you need to open up a proxy port to your Minikube Kubernetes cluster using the [`kubectl proxy`](https://kubernetes.io/docs/tasks/access-kubernetes-api/http-proxy-access-api/) command:
+Once all of the [components](#starting-components) have been successfully started up, you need to open up a proxy port to your Minikube Kubernetes cluster using the [`kubectl proxy`](https://kubernetes.io/docs/tasks/access-kubernetes-api/http-proxy-access-api/) command:
 
 ```bash
 $ kubectl proxy -p 8001
@@ -272,7 +272,7 @@
 
 ### Managing topologies
 
-Once all of the [components](#components) have been successfully started up, you need to open up a proxy port to your GKE Kubernetes cluster using the [`kubectl proxy`](https://kubernetes.io/docs/tasks/access-kubernetes-api/http-proxy-access-api/) command:
+Once all of the [components](#starting-components) have been successfully started up, you need to open up a proxy port to your GKE Kubernetes cluster using the [`kubectl proxy`](https://kubernetes.io/docs/tasks/access-kubernetes-api/http-proxy-access-api/) command:
 
 ```bash
 $ kubectl proxy -p 8001
@@ -375,7 +375,7 @@
 
 ### Managing topologies
 
-Once all of the [components](#components) have been successfully started up, you need to open up a proxy port to your GKE Kubernetes cluster using the [`kubectl proxy`](https://kubernetes.io/docs/tasks/access-kubernetes-api/http-proxy-access-api/) command:
+Once all of the [components](#starting-components) have been successfully started up, you need to open up a proxy port to your GKE Kubernetes cluster using the [`kubectl proxy`](https://kubernetes.io/docs/tasks/access-kubernetes-api/http-proxy-access-api/) command:
 
 ```bash
 $ kubectl proxy -p 8001
@@ -427,7 +427,7 @@
 
 The [Heron UI](user-manuals-heron-ui) is an in-browser dashboard that you can use to monitor your Heron [topologies](heron-topology-concepts). It should already be running in your GKE cluster.
 
-You can access [Heron UI](../../../heron-ui) in your browser by navigating to http://localhost:8001/api/v1/proxy/namespaces/default/services/heron-ui:8889.
+You can access [Heron UI](user-manuals-heron-ui) in your browser by navigating to http://localhost:8001/api/v1/proxy/namespaces/default/services/heron-ui:8889.
 
 ## Heron on Kubernetes configuration
 
diff --git a/website2/docs/schedulers-k8s-with-helm.md b/website2/docs/schedulers-k8s-with-helm.md
index 367c4e0..709b0d0 100644
--- a/website2/docs/schedulers-k8s-with-helm.md
+++ b/website2/docs/schedulers-k8s-with-helm.md
@@ -4,7 +4,7 @@
 sidebar_label:  Kubernetes with Helm
 ---
 
-> If you'd prefer to install Heron on Kubernetes *without* using the [Helm](https://helm.sh) package manager, see the [Heron on Kubernetes by hand](../kubernetes) document.
+> If you'd prefer to install Heron on Kubernetes *without* using the [Helm](https://helm.sh) package manager, see the [Heron on Kubernetes by hand](schedulers-k8s-by-hand) document.
 
 [Helm](https://helm.sh) is an open source package manager for [Kubernetes](https://kubernetes.io) that enables you to quickly and easily install even the most complex software systems on Kubernetes. Heron has a Helm [chart](https://docs.helm.sh/developing_charts/#charts) that you can use to install Heron on Kubernetes using just a few commands. The chart can be used to install Heron on the following platforms:
 
@@ -92,7 +92,7 @@
 :--------|:---
 [Minikube](#minikube) | `minikube`
 [Google Kubernetes Engine](#google-kubernetes-engine) | `gke`
-[Amazon Web Services](#amazone-web-services) | `aws`
+[Amazon Web Services](#amazon-web-services) | `aws`
 [Bare metal](#bare-metal) | `baremetal`
 
 #### Minikube
@@ -189,8 +189,8 @@
 
 Configuration | Description
 :-------------|:-----------
-[`small.yaml`](https://github.com/apache/incubator-heron/blob/master/deploy/kubernetes/gcp/small.yaml) | Smaller Heron cluster intended for basic testing, development, and experimentation
-[`large.yaml`](https://github.com/apache/incubator-heron/blob/master/deploy/kubernetes/gcp/large.yaml) | Larger Heron cluster intended for production usage
+[`small.yaml`](https://github.com/apache/incubator-heron/blob/master/deploy/kubernetes/gke/small.yaml) | Smaller Heron cluster intended for basic testing, development, and experimentation
+[`medium.yaml`](https://github.com/apache/incubator-heron/blob/master/deploy/kubernetes/gke/medium.yaml) | Medium-sized Heron cluster geared toward production usage
 
 To apply the `small` configuration, for example:
 
@@ -267,7 +267,7 @@
 
 ## Running topologies on Heron on Kubernetes
 
-Once you have a Heron cluster up and running on Kubernetes via Helm, you can use the [`heron` CLI tool](../../../heron-cli) like normal if you set the proper URL for the [Heron API server](../../../heron-api-server). When running Heron on Kubernetes, that URL is:
+Once you have a Heron cluster up and running on Kubernetes via Helm, you can use the [`heron` CLI tool](user-manuals-heron-cli) like normal if you set the proper URL for the [Heron API server](deployment-api-server). When running Heron on Kubernetes, that URL is:
 
 ```bash
 $ http://localhost:8001/api/v1/namespaces/default/services/heron-kubernetes-apiserver:9000/proxy
diff --git a/website2/docs/schedulers-local.md b/website2/docs/schedulers-local.md
index 1b1b70e..6ab74d1 100644
--- a/website2/docs/schedulers-local.md
+++ b/website2/docs/schedulers-local.md
@@ -5,18 +5,18 @@
 ---
 
 In addition to out-of-the-box schedulers for
-[Aurora](../aurora), Heron can also be deployed in a local environment, which
+[Aurora](schedulers-aurora-cluster), Heron can also be deployed in a local environment, which
 stands up a mock Heron cluster on a single machine. This can be useful for
 experimenting with Heron's features, testing a wide variety of possible cluster
 events, and so on.
 
 One of two state managers can be used for coordination when deploying locally:
 
-* [ZooKeeper](../../statemanagers/zookeeper)
-* [Local File System](../../statemanagers/localfs)
+* [ZooKeeper](state-managers-zookeeper)
+* [Local File System](state-managers-local-fs)
 
 **Note**: Deploying a Heron cluster locally is not to be confused with Heron's
-[simulator mode](../../../../developers/simulator-mode). Simulator mode enables
+[simulator mode](guides-simulator-mode). Simulator mode enables
 you to run topologies in a cluster-agnostic JVM process for the purpose of
 development and debugging, while the local scheduler stands up a Heron cluster
 on a single machine.
@@ -24,7 +24,7 @@
 ## How Local Deployment Works
 
 Using the local scheduler is similar to deploying Heron on other schedulers.
-The [Heron] (../../../heron-cli) cli is used to deploy and manage topologies
+The [Heron CLI](user-manuals-heron-cli) is used to deploy and manage topologies
 as would be done using a distributed scheduler. The main difference is in
 the configuration.
 
diff --git a/website2/docs/schedulers-mesos-local-mac.md b/website2/docs/schedulers-mesos-local-mac.md
index d5714e2..ce6074e 100644
--- a/website2/docs/schedulers-mesos-local-mac.md
+++ b/website2/docs/schedulers-mesos-local-mac.md
@@ -8,7 +8,7 @@
 This is a step-by-step guide to running Heron on a Mesos cluster locally.
 
 ## Install Heron
-Follow [Quick Start Guide](../../../../getting-started) to install Heron.
+Follow the [Quick Start Guide](getting-started-local-single-node) to install Heron.
 
 ## Setting up an Apache Mesos Cluster Locally
 
@@ -18,14 +18,14 @@
 the Mesos management console [http://localhost:5050](http://localhost:5050) and confirm there are
 activated slaves.
 
-![console page](/img/mesos-management-console.png)
+![console page](assets/mesos-management-console.png)
 
 ## Configure Heron
 
 ### State Manager
 By default, Heron uses Local File System State Manager on Mesos to manage states. Modify
 `$HOME/.heron/conf/mesos/statemgr.yaml` to use ZooKeeper. For more details see [Setting up
-ZooKeeper](../../statemanagers/zookeeper).
+ZooKeeper](state-managers-zookeeper).
 
 ### Scheduler
 Heron needs to know where to load the lib to interact with Mesos. Change the config
@@ -95,21 +95,21 @@
 Another way to check your topology is running is to look at the Mesos management console. If it
 was launched successfully, two containers will be running.
 
-![result page](/img/mesos-management-console-with-topology.png)
+![result page](assets/mesos-management-console-with-topology.png)
 
 To view the process logs, click the `sandbox` on the right side. The sandbox of the heron container
 is shown below.
 
-![container-container-sandbox](/img/container-container-sandbox.png)
+![container-container-sandbox](assets/container-container-sandbox.png)
 
 The `log-files` directory includes the application and GC log of the processes running in this
 container.
 
-![container-log-files](/img/container-log-files.png)
+![container-log-files](assets/container-log-files.png)
 
 The bolt log of the ExclamationTopology is `container_1_exclaim1_1.log.0`. Below is a sample of it.
 
-![bolt-log](/img/bolt-log.png)
+![bolt-log](assets/bolt-log.png)
 
 ## Heron UI
 
@@ -133,15 +133,15 @@
 
 Go to the UI at [http://localhost:8889](http://localhost:8889) to see the topology.
 
-![mesos-local-heron-ui](/img/mesos-local-heron-ui.png)
+![mesos-local-heron-ui](assets/mesos-local-heron-ui.png)
 
 To see the metrics, click on the topology.
 
-![mesos-local-heron-ui-more](/img/mesos-local-heron-ui-more.png)
+![mesos-local-heron-ui-more](assets/mesos-local-heron-ui-more.png)
 
 To enter the Mesos Management Console page, click the `job` button.
 
-![mesos-local-heron-ui-to-mesos-console](/img/mesos-local-heron-ui-to-mesos-console.png)
+![mesos-local-heron-ui-to-mesos-console](assets/mesos-local-heron-ui-to-mesos-console.png)
 
 ## Kill Topology
 
diff --git a/website2/docs/schedulers-nomad.md b/website2/docs/schedulers-nomad.md
index 3549376..39e2542 100644
--- a/website2/docs/schedulers-nomad.md
+++ b/website2/docs/schedulers-nomad.md
@@ -4,7 +4,7 @@
 sidebar_label:  Nomad
 ---
 
-Heron supports [Hashicorp](https://hashicorp.com)'s [Nomad](https://nomadproject.io) as a scheduler. You can use Nomad for either small- or large-scale Heron deployments or to run Heron locally in [standalone mode](../standalone).
+Heron supports [Hashicorp](https://hashicorp.com)'s [Nomad](https://nomadproject.io) as a scheduler. You can use Nomad for either small- or large-scale Heron deployments or to run Heron locally in [standalone mode](schedulers-standalone).
 
 > Update: Heron now supports running on Nomad via [raw exec driver](https://www.nomadproject.io/docs/drivers/raw_exec.html) and [docker driver](https://www.nomadproject.io/docs/drivers/docker.html)
 
@@ -22,13 +22,13 @@
 
 When setting up your Nomad cluster, the following are required:
 
-* The [Heron CLI tool](../../../heron-cli) must be installed on each machine used to deploy Heron topologies
+* The [Heron CLI tool](user-manuals-heron-cli) must be installed on each machine used to deploy Heron topologies
 * Python 2.7, Java 7 or 8, and [curl](https://curl.haxx.se/) must be installed on every machine in the cluster
 * A [ZooKeeper cluster](https://zookeeper.apache.org)
 
 ## Configuring Heron settings
 
-Before running Heron via Nomad, you'll need to configure some settings. Once you've [installed Heron](../../../../getting-started), all of the configurations you'll need to modify will be in the `~/.heron/conf/nomad` diredctory.
+Before running Heron via Nomad, you'll need to configure some settings. Once you've [installed Heron](getting-started-local-single-node), all of the configurations you'll need to modify will be in the `~/.heron/conf/nomad` directory.
 
 First, make sure that the `heron.nomad.driver` is set to "raw_exec" in `~/.heron/conf/nomad/scheduler.yaml` e.g.
 
@@ -38,9 +38,9 @@
 
 You'll need to use a topology uploader to deploy topology packages to nodes in your cluster. You can use one of the following uploaders:
 
-* The HTTP uploader in conjunction with Heron's [API server](../../../heron-api-server). The Heron API server acts like a file server to which users can upload topology packages. The API server distributes the packages, along with the Heron core package, to the relevant machines. You can also use the API server to submit your Heron topology to Nomad (described [below](#deploying-with-the-api-server)) <!-- TODO: link to upcoming HTTP uploader documentation -->
-* [Amazon S3](../../uploaders/s3). Please note that the S3 uploader requires an AWS account.
-* [SCP](../../uploaders/scp). Please note that the SCP uploader requires SSH access to nodes in the cluster.
+* The HTTP uploader in conjunction with Heron's [API server](deployment-api-server). The Heron API server acts like a file server to which users can upload topology packages. The API server distributes the packages, along with the Heron core package, to the relevant machines. You can also use the API server to submit your Heron topology to Nomad (described [below](#deploying-with-the-api-server)) <!-- TODO: link to upcoming HTTP uploader documentation -->
+* [Amazon S3](uploaders-amazon-s3). Please note that the S3 uploader requires an AWS account.
+* [SCP](uploaders-scp). Please note that the SCP uploader requires SSH access to nodes in the cluster.
 
 You can modify the `heron.class.uploader` parameter in `~/.heron/conf/nomad/uploader.yaml` to choose an uploader.
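+
+For example, a minimal sketch selecting the HTTP uploader (the class name is an assumption that follows the `org.apache.heron` package naming used elsewhere in these docs):
+
+```yaml
+heron.class.uploader: org.apache.heron.uploader.http.HttpUploader
+```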
 
@@ -73,7 +73,7 @@
 
 You can do this in one of several ways:
 
-* Use the Heron API server to distribute `heron-core.tar.gz` (see [here](../../heron-api-server) for more info)
+* Use the Heron API server to distribute `heron-core.tar.gz` (see [here](deployment-api-server) for more info)
 * Copy `heron-core.tar.gz` onto every node in the cluster
 * Mount a network drive to every machine in the cluster that contains 
 * Upload `heron-core.tar.gz` to an S3 bucket and expose an HTTP endpoint
@@ -93,7 +93,7 @@
 
 ## Submitting Heron topologies to the Nomad cluster
 
-You can submit Heron topologies to a Nomad cluster via the [Heron CLI tool](../../../heron-cli):
+You can submit Heron topologies to a Nomad cluster via the [Heron CLI tool](user-manuals-heron-cli):
 
 ```bash
 $ heron submit nomad \
@@ -113,7 +113,7 @@
 
 ## Deploying with the API server
 
-The advantage of running the [Heron API Server](../../../heron-api-server) is that it can act as a file server to help you distribute topology package files and submit jobs to Nomad, so that you don't need to modify the configuration files mentioned above.  By using Heron’s API Server, you can set configurations such as the URI of ZooKeeper and the Nomad server once and not need to configure each machine from which you want to submit Heron topologies.
+The advantage of running the [Heron API Server](deployment-api-server) is that it can act as a file server to help you distribute topology package files and submit jobs to Nomad, so that you don't need to modify the configuration files mentioned above.  By using Heron’s API Server, you can set configurations such as the URI of ZooKeeper and the Nomad server once and not need to configure each machine from which you want to submit Heron topologies.
 
 ## Running the API server
 
@@ -160,7 +160,7 @@
 
 Make sure to replace the following:
 
-* `<heron_apiserver_executable>` --- The local path to where the [Heron API server](../../../heron-api-server) executable is located (usually `~/.heron/bin/heron-apiserver`)
+* `<heron_apiserver_executable>` --- The local path to where the [Heron API server](deployment-api-server) executable is located (usually `~/.heron/bin/heron-apiserver`)
 * `<zookeeper_uri>` --- The URI for your ZooKeeper cluster
 * `<scheduler_uri>` --- The URI for your Nomad server
 
@@ -174,7 +174,7 @@
 heron.uploader.http.uri: http://localhost:9000/api/v1/file/upload
 ```
 
-The [Heron CLI](../../../heron-cli) will take care of the upload. When the topology is starting up, the topology package will be automatically downloaded from the API server.
+The [Heron CLI](user-manuals-heron-cli) will take care of the upload. When the topology is starting up, the topology package will be automatically downloaded from the API server.
 
 ## Using the API server to distribute the Heron core package
 
@@ -207,7 +207,7 @@
 
 ## Using the API server to submit Heron topologies
 
-Users can submit topologies using the [Heron CLI](../../../heron-cli) by specifying a service URL to the API server. Here's the format of that command:
+Users can submit topologies using the [Heron CLI](user-manuals-heron-cli) by specifying a service URL to the API server. Here's the format of that command:
 
 ```bash
 $ heron submit nomad \
@@ -266,7 +266,7 @@
 
 When setting up your Nomad cluster, the following are required:
 
-* The [Heron CLI tool](../../../heron-cli) must be installed on each machine used to deploy Heron topologies
+* The [Heron CLI tool](user-manuals-heron-cli) must be installed on each machine used to deploy Heron topologies
 * Python 2.7, Java 7 or 8, and [curl](https://curl.haxx.se/) must be installed on every machine in the cluster
 * A [ZooKeeper cluster](https://zookeeper.apache.org)
 * Docker installed and enabled on every machine
@@ -274,7 +274,7 @@
 
 ## Configuring Heron settings
 
-Before running Heron via Nomad, you'll need to configure some settings. Once you've [installed Heron](../../../../getting-started), all of the configurations you'll need to modify will be in the `~/.heron/conf/nomad` diredctory.
+Before running Heron via Nomad, you'll need to configure some settings. Once you've [installed Heron](getting-started-local-single-node), all of the configurations you'll need to modify will be in the `~/.heron/conf/nomad` directory.
 
 First, make sure that the `heron.nomad.driver` is set to "docker" in `~/.heron/conf/nomad/scheduler.yaml` e.g.
 
@@ -290,9 +290,9 @@
 
 You'll need to use a topology uploader to deploy topology packages to nodes in your cluster. You can use one of the following uploaders:
 
-* The HTTP uploader in conjunction with Heron's [API server](../../../heron-api-server). The Heron API server acts like a file server to which users can upload topology packages. The API server distributes the packages, along with the Heron core package, to the relevant machines. You can also use the API server to submit your Heron topology to Nomad (described [below](#deploying-with-the-api-server)) <!-- TODO: link to upcoming HTTP uploader documentation -->
-* [Amazon S3](../../uploaders/s3). Please note that the S3 uploader requires an AWS account.
-* [SCP](../../uploaders/scp). Please note that the SCP uploader requires SSH access to nodes in the cluster.
+* The HTTP uploader in conjunction with Heron's [API server](deployment-api-server). The Heron API server acts like a file server to which users can upload topology packages. The API server distributes the packages, along with the Heron core package, to the relevant machines. You can also use the API server to submit your Heron topology to Nomad (described [below](#deploying-with-the-api-server)) <!-- TODO: link to upcoming HTTP uploader documentation -->
+* [Amazon S3](uploaders-amazon-s3). Please note that the S3 uploader requires an AWS account.
+* [SCP](uploaders-scp). Please note that the SCP uploader requires SSH access to nodes in the cluster.
 
 You can modify the `heron.class.uploader` parameter in `~/.heron/conf/nomad/uploader.yaml` to choose an uploader.
 
@@ -310,7 +310,7 @@
 
 ## Submitting Heron topologies to the Nomad cluster
 
-You can submit Heron topologies to a Nomad cluster via the [Heron CLI tool](../../../heron-cli):
+You can submit Heron topologies to a Nomad cluster via the [Heron CLI tool](user-manuals-heron-cli):
 
 ```bash
 $ heron submit nomad \
@@ -330,7 +330,7 @@
 
 ## Deploying with the API server
 
-The advantage of running the [Heron API Server](../../../heron-api-server) is that it can act as a file server to help you distribute topology package files and submit jobs to Nomad, so that you don't need to modify the configuration files mentioned above.  By using Heron’s API Server, you can set configurations such as the URI of ZooKeeper and the Nomad server once and not need to configure each machine from which you want to submit Heron topologies.
+The advantage of running the [Heron API Server](deployment-api-server) is that it can act as a file server to help you distribute topology package files and submit jobs to Nomad, so that you don't need to modify the configuration files mentioned above.  By using Heron’s API Server, you can set configurations such as the URI of ZooKeeper and the Nomad server once and not need to configure each machine from which you want to submit Heron topologies.
 
 ## Running the API server
 
@@ -377,7 +377,7 @@
 
 Make sure to replace the following:
 
-* `<heron_apiserver_executable>` --- The local path to where the [Heron API server](../../../heron-api-server) executable is located (usually `~/.heron/bin/heron-apiserver`)
+* `<heron_apiserver_executable>` --- The local path to where the [Heron API server](deployment-api-server) executable is located (usually `~/.heron/bin/heron-apiserver`)
 * `<zookeeper_uri>` --- The URI for your ZooKeeper cluster
 * `<scheduler_uri>` --- The URI for your Nomad server
 
diff --git a/website2/docs/schedulers-slurm.md b/website2/docs/schedulers-slurm.md
index c6087b6..86604a4 100644
--- a/website2/docs/schedulers-slurm.md
+++ b/website2/docs/schedulers-slurm.md
@@ -10,12 +10,10 @@
 
 ## How Slurm Deployment Works
 
-Using the Slurm scheduler is similar to deploying Heron on other systems. The Heron
-(../../heron-cli) cli is used to deploy and manage topologies similar to other
+Using the Slurm scheduler is similar to deploying Heron on other systems. The [Heron CLI](user-manuals-heron-cli) is used to deploy and manage topologies, as with other
 schedulers. The main difference is in the configuration.
 
-A set of default configuration files are provided with Heron in the [conf/slurm]
-(https://github.com/apache/incubator-heron/tree/master/heron/config/src/yaml/conf/slurm) directory.
+A set of default configuration files are provided with Heron in the [conf/slurm](https://github.com/apache/incubator-heron/tree/master/heron/config/src/yaml/conf/slurm) directory.
 The default configuration uses the local file system based state manager. It is
 possible that the local file system is mounted using NFS.
 
diff --git a/website2/docs/schedulers-standalone.md b/website2/docs/schedulers-standalone.md
index 5ee01b3..6f0c3a0 100644
--- a/website2/docs/schedulers-standalone.md
+++ b/website2/docs/schedulers-standalone.md
@@ -4,11 +4,11 @@
 sidebar_label:  Heron Multi-node Standalone Cluster
 ---
 
-Heron enables you to easily run a multi-node cluster in **standalone mode**. The difference between standalone mode and [local mode](../local) for Heron is that standalone mode involves running multiple compute nodes---using [Hashicorp](https://www.hashicorp.com/)'s [Nomad](https://www.nomadproject.io/) as a scheduler---rather than just one.
+Heron enables you to easily run a multi-node cluster in **standalone mode**. The difference between standalone mode and [local mode](schedulers-local) for Heron is that standalone mode involves running multiple compute nodes---using [Hashicorp](https://www.hashicorp.com/)'s [Nomad](https://www.nomadproject.io/) as a scheduler---rather than just one.
 
 ## Installation
 
-You can use Heron in standalone mode using the `heron-admin` CLI tool, which can be installed using the instructions [here](../../../../getting-started).
+You can use Heron in standalone mode using the `heron-admin` CLI tool, which can be installed using the instructions [here](getting-started-local-single-node).
 
 ## Requirements
 
@@ -86,7 +86,7 @@
 $ heron-admin standalone info
 ```
 
-This will return a JSON string containing a list of hosts for Heron and ZooKeeper as well as URLs for the [Heron API server](../../../heron-api-server), [Heron UI](../../../heron-ui), and [Heron Tracker](../../../heron-tracker). Here is a cluster info JSON string if all defaults are retained:
+This will return a JSON string containing a list of hosts for Heron and ZooKeeper as well as URLs for the [Heron API server](deployment-api-server), [Heron UI](user-manuals-heron-ui), and [Heron Tracker](user-manuals-heron-tracker-runbook). Here is a cluster info JSON string if all defaults are retained:
 
 ```json
 {
@@ -128,7 +128,7 @@
 
 ## Setting the service URL
 
-Once your standalone cluster is running, there's one final step before you can interact with the cluster: you need to specify the service URL for the [Heron API server](../../../heron-api-server) for the standalone cluster. You can fetch that URL in two different ways:
+Once your standalone cluster is running, there's one final step before you can interact with the cluster: you need to specify the service URL for the [Heron API server](deployment-api-server) for the standalone cluster. You can fetch that URL in two different ways:
 
 ```bash
 # Using the "get" command
@@ -164,7 +164,7 @@
 
 ## Submitting a topology
 
-Once your standalone cluster is up and running and you've set the service URL for the [`heron` CLI tool](../../../heron-cli), you can submit and manage topologies by specifying the `standalone` cluster. Here's an example topology submission command:
+Once your standalone cluster is up and running and you've set the service URL for the [`heron` CLI tool](user-manuals-heron-cli), you can submit and manage topologies by specifying the `standalone` cluster. Here's an example topology submission command:
 
 ```bash
 $ heron submit standalone \
diff --git a/website2/docs/schedulers-yarn.md b/website2/docs/schedulers-yarn.md
index 39662a3..9c1de97 100644
--- a/website2/docs/schedulers-yarn.md
+++ b/website2/docs/schedulers-yarn.md
@@ -4,7 +4,7 @@
 sidebar_label:  YARN Cluster
 ---
 
-In addition to out-of-the-box schedulers for [Aurora](../aurora), Heron can also be deployed on a
+In addition to out-of-the-box schedulers for [Aurora](schedulers-aurora-cluster), Heron can also be deployed on a
 YARN cluster with the YARN scheduler. The YARN scheduler is implemented using the
 [Apache REEF](https://reef.apache.org/) framework.
 
@@ -19,7 +19,7 @@
 ## Topology deployment on a YARN Cluster
 
 Using the YARN scheduler is similar to deploying Heron on other clusters, i.e. using the
-[Heron CLI](/docs/operators/heron-cli/).
+[Heron CLI](user-manuals-heron-cli).
 This document assumes that the Hadoop yarn client is installed and configured.
 
 Following steps are executed when a Heron topology is submitted:
@@ -58,8 +58,7 @@
 
 ### Configure the YARN scheduler
 
-A set of default configuration files are provided with Heron in the [conf/yarn]
-(https://github.com/apache/incubator-heron/tree/master/heron/config/src/yaml/conf/yarn) directory.
+A set of default configuration files are provided with Heron in the [conf/yarn](https://github.com/apache/incubator-heron/tree/master/heron/config/src/yaml/conf/yarn) directory.
 The default configuration uses the local state manager. This will work with single-node local
 YARN installation only. A Zookeeper based state management will be needed for topology
 deployment on a multi-node YARN cluster.
@@ -87,8 +86,7 @@
 >**Tips**
 >
 >1. More details for using the `--extra-launch-classpath` argument in 0.14.3 version. It supports both a single directory which including all `hadoop-lib-jars` and multiple directories separated by colon such as what `hadoop classpath` gives. ***The submit operation will fail if any path is invalid or if any file is missing.***
->2. if you want to submit a topology to a specific YARN queue, you can set the `heron.scheduler.yarn.queue` argument in `--config-property`. For instance, `--config-property heron.scheduler.yarn.queue=test`. This configuration could be found in the [conf/yarn/scheduler]
-(https://github.com/apache/incubator-heron/blob/master/heron/config/src/yaml/conf/yarn/scheduler.yaml) file too. `default` would be the YARN default queue as YARN provided.
+>2. If you want to submit a topology to a specific YARN queue, you can set the `heron.scheduler.yarn.queue` argument in `--config-property`. For instance, `--config-property heron.scheduler.yarn.queue=test`. This configuration can also be found in the [conf/yarn/scheduler](https://github.com/apache/incubator-heron/blob/master/heron/config/src/yaml/conf/yarn/scheduler.yaml) file. `default` is the default queue provided by YARN.
 
 **Sample Output**
 
@@ -150,4 +148,4 @@
  supported yet. As a result AM failure will result in topology failure.
  Issue: [#949](https://github.com/apache/incubator-heron/issues/949)
 1. TMaster and Scheduler are started in separate containers. Increased network latency can result
- in warnings or failures. Issue: [#951] (https://github.com/apache/incubator-heron/issues/951)
+ in warnings or failures. Issue: [#951](https://github.com/apache/incubator-heron/issues/951)