Updated docs to follow Google markdown style guide

* Set line limit to 100 chars
* Removed two spaces after period
* Removed trailing line spaces
* Converted all headings to ATX style
diff --git a/README.md b/README.md
index e434a9b..c2980c2 100644
--- a/README.md
+++ b/README.md
@@ -4,42 +4,44 @@
 
 **Apache Fluo lets users make incremental updates to large data sets stored in Apache Accumulo.**
 
-[Apache Fluo][fluo] is an open source implementation of [Percolator][percolator] (which populates Google's
-search index) for [Apache Accumulo][accumulo]. Fluo makes it possible to update the results of a large-scale
-computation, index, or analytic as new data is discovered. Check out the Fluo [project website][fluo] for
-news and general information.
+[Apache Fluo][fluo] is an open source implementation of [Percolator][percolator] (which populates
+Google's search index) for [Apache Accumulo][accumulo]. Fluo makes it possible to update the results
+of a large-scale computation, index, or analytic as new data is discovered. Check out the Fluo
+[project website][fluo] for news and general information.
 
-### Getting Started
+## Getting Started
 
 There are several ways to run Fluo (listed in order of increasing difficulty):
 
-* [quickstart] - Starts a MiniFluo instance that is configured to run a word count application
-* [fluo-dev] - Automated tool that sets up Fluo and its dependencies on a single machine
-* [Zetten] - Automated tool that launches an AWS cluster and sets up Fluo/Accumulo on it
-* [Install instructions][install] - Manually set up Fluo on a cluster where Accumulo, Hadoop & Zookeeper are running
+*  [quickstart] - Starts a MiniFluo instance that is configured to run a word count application
+*  [fluo-dev] - Automated tool that sets up Fluo and its dependencies on a single machine
+*  [Zetten] - Automated tool that launches an AWS cluster and sets up Fluo/Accumulo on it
+*  [Install instructions][install] - Manually set up Fluo on a cluster where Accumulo, Hadoop &
+   Zookeeper are running
 
 Except for [quickstart], all above will set up a Fluo application that will be idle unless you
 create client & observer code for your application. You can either [create your own
 application][apps] or configure Fluo to run an example application below:
 
-* [phrasecount] - Computes phrase counts for unique documents
-* [fluo-stress] - Computes the number of unique integers by building bitwise trie
-* [webindex] - Creates a web index using Common Crawl data
+*  [phrasecount] - Computes phrase counts for unique documents
+*  [fluo-stress] - Computes the number of unique integers by building a bitwise trie
+*  [webindex] - Creates a web index using Common Crawl data
 
-### Applications
+## Applications
 
 Below are helpful resources for Fluo application developers:
 
-* [Instructions][apps] for creating Fluo applications
-* [Fluo API][api] javadocs
-* [Fluo Recipes][recipes] is a project that provides common code for Fluo application developers implemented 
-  using the Fluo API.
+*  [Instructions][apps] for creating Fluo applications
+*  [Fluo API][api] javadocs
+*  [Fluo Recipes][recipes] is a project that provides common code for Fluo application developers
+   implemented using the Fluo API.
 
-### Implementation
+## Implementation
 
-* [Architecture] - Overview of Fluo's architecture
-* [Contributing] - Documentation for developers who want to contribute to Fluo
-* [Metrics] - Fluo metrics are visible via JMX by default but can be configured to send to Graphite or Ganglia
+*  [Architecture] - Overview of Fluo's architecture
+*  [Contributing] - Documentation for developers who want to contribute to Fluo
+*  [Metrics] - Fluo metrics are visible via JMX by default but can be configured to send to Graphite
+   or Ganglia
 
 [fluo]: https://fluo.apache.org/
 [accumulo]: https://accumulo.apache.org
@@ -65,4 +67,4 @@
 [ml]: https://maven-badges.herokuapp.com/maven-central/org.apache.fluo/fluo-api/
 [ji]: https://javadoc-emblem.rhcloud.com/doc/org.apache.fluo/fluo-api/badge.svg
 [jl]: http://www.javadoc.io/doc/org.apache.fluo/fluo-api
-[logo]: contrib/fluo-logo.png
+[logo]: contrib/fluo-logo.png
\ No newline at end of file
diff --git a/docs/applications.md b/docs/applications.md
index ceae1d6..31559e8 100644
--- a/docs/applications.md
+++ b/docs/applications.md
@@ -1,12 +1,11 @@
-Creating Fluo applications
-==========================
+# Fluo Applications
 
-Once you have Fluo installed and running on your cluster, you can now run Fluo applications which consist of 
-clients and observers.
+Once you have Fluo installed and running on your cluster, you can run Fluo applications, which
+consist of clients and observers.
 
-If you are new to Fluo, consider first running the [phrasecount] application on your Fluo instance. Otherwise,
-you can create your own Fluo client or observer by the following the steps below.
- 
+If you are new to Fluo, consider first running the [phrasecount] application on your Fluo instance.
+Otherwise, you can create your own Fluo client or observer by following the steps below.
+
 For both clients and observers, you will need to include the following in your Maven pom:
 
 ```xml
@@ -23,20 +22,18 @@
 </dependency>
 ```
 
-Fluo provides a classpath command to help users build a runtime classpath.
-This command along with the `hadoop jar` command is useful when writing
-scripts to run Fluo client code.  These command allow the scripts to use the
-versions of Hadoop, Accumulo, and Zookeeper installed on a cluster.
- 
-Creating a Fluo client
-----------------------
+Fluo provides a classpath command to help users build a runtime classpath. This command, along with
+the `hadoop jar` command, is useful when writing scripts to run Fluo client code. These commands
+allow the scripts to use the versions of Hadoop, Accumulo, and Zookeeper installed on a cluster.
 
-To create a [FluoClient], you will need to provide it with a [FluoConfiguration] object that is configured
-to connect to your Fluo instance.
+## Creating a Fluo client
 
-If you have access to the [fluo.properties] file that was used to configure your Fluo instance, you can use
-it to build a [FluoConfiguration] object with all necessary properties which are all properties with the 
-`fluo.client.*` prefix in [fluo.properties]:
+To create a [FluoClient], you will need to provide it with a [FluoConfiguration] object that is
+configured to connect to your Fluo instance.
+
+If you have access to the [fluo.properties] file that was used to configure your Fluo instance, you
+can use it to build a [FluoConfiguration] object with all necessary properties, which are the
+properties with the `fluo.client.*` prefix in [fluo.properties]:
 
 ```java
 FluoConfiguration config = new FluoConfiguration(new File("fluo.properties"));
@@ -51,7 +48,8 @@
 config.setAccumuloInstance("instance");
 ```
 
-Once you have [FluoConfiguration] object, pass it to the `newClient()` method of [FluoFactory] to create a [FluoClient]:
+Once you have a [FluoConfiguration] object, pass it to the `newClient()` method of [FluoFactory] to
+create a [FluoClient]:
 
 ```java
 FluoClient client = FluoFactory.newClient(config);
@@ -59,15 +57,13 @@
 
 It may help to reference the [API javadocs][API] while you are learning the Fluo API.
 
-Running application code
-------------------------
+## Running application code
 
-The `fluo exec <app name> <class> {arguments}` provides an easy way to execute
-application code.  It will execute a class with a main method if a jar
-containing the class is placed in the lib directory of the application.  When
-the class is run, Fluo classes and dependencies will be on the classpath.  The
-`fluo exec` command can inject the applications configuration if the class is
-written in the following way.  Defining the injection point is optional.
+The `fluo exec <app name> <class> {arguments}` command provides an easy way to execute application
+code. It will execute a class with a main method if a jar containing the class is placed in the lib
+directory of the application. When the class is run, Fluo classes and dependencies will be on the
+classpath. The `fluo exec` command can inject the application's configuration if the class is
+written in the following way. Defining the injection point is optional.
 
 ```java
 import javax.inject.Inject;
@@ -86,33 +82,30 @@
 }
 ```
 
-Creating a Fluo observer
-------------------------
+## Creating a Fluo observer
 
 To create an observer, follow these steps:
 
-1. Create a class that extends [AbstractObserver].
-2. Build a jar containing this class and include this jar in the `lib/` directory of your Fluo application.
-3. Configure your Fluo instance to use this observer by modifying the Observer section of [fluo.properties].
-4. Restart your Fluo instance so that your Fluo workers load the new observer.
+1.  Create a class that extends [AbstractObserver].
+2.  Build a jar containing this class and include this jar in the `lib/` directory of your Fluo
+    application.
+3.  Configure your Fluo instance to use this observer by modifying the Observer section of
+    [fluo.properties].
+4.  Restart your Fluo instance so that your Fluo workers load the new observer.
 
-Application Configuration
--------------------------
+## Application Configuration
 
-Each observer can have its own configuration.  This is useful for the case of
-using the same observer code w/ different parameters.  However for the case of
-sharing the same configuration across observers, fluo provides a simple
-mechanism to set and access application specific configuration.  See the
-javadoc on [FluoClient].getAppConfiguration() for more details.
+Each observer can have its own configuration. This is useful when using the same observer code with
+different parameters. However, for the case of sharing the same configuration across observers,
+Fluo provides a simple mechanism to set and access application-specific configuration. See the
+javadoc on [FluoClient].getAppConfiguration() for more details.
 
-Debugging Applications
-======================
+## Debugging Applications
 
-While monitoring [Fluo metrics][metrics] can detect problems (like too many
-transaction collisions) in a Fluo application, [metrics][metrics] may not
-provide enough information to debug the root cause of the problem.  To help
-debug Fluo applications, low-level logging of transactions can be turned on by
-setting the following loggers to TRACE:
+While monitoring [Fluo metrics][metrics] can detect problems (like too many transaction collisions)
+in a Fluo application, [metrics][metrics] may not provide enough information to debug the root cause
+of the problem. To help debug Fluo applications, low-level logging of transactions can be turned on
+by setting the following loggers to TRACE:
 
 | Logger               | Level | Information                                                                                        |
 |----------------------|-------|----------------------------------------------------------------------------------------------------|
@@ -120,8 +113,8 @@
 | `fluo.tx.summary`    | TRACE | Provides a one line summary about each transaction executed                                        |
 | `fluo.tx.collisions` | TRACE | Provides details about what data was involved when a transaction collides with another transaction |
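
If your Fluo processes log through log4j (a sketch; adapt this to whatever logging framework your
deployment uses), the loggers in the table could be enabled with properties like:

```
log4j.logger.fluo.tx=TRACE
log4j.logger.fluo.tx.summary=TRACE
log4j.logger.fluo.tx.collisions=TRACE
```

Because `fluo.tx.summary` and `fluo.tx.collisions` are children of `fluo.tx`, setting only
`fluo.tx` to TRACE enables all three.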
 
-Below is an example log after setting `fluo.tx` to TRACE.   The number
-following `txid: ` is the transactions start timestamp from the Oracle.  
+Below is an example log after setting `fluo.tx` to TRACE. The number following `txid: ` is the
+transaction's start timestamp from the Oracle.
 
 ```
 2015-02-11 18:24:05,341 [fluo.tx ] TRACE: txid: 3 begin() thread: 198
@@ -140,32 +133,29 @@
 
 The log above traces the following sequence of events.
 
- * Transaction T1 has a start timestamp of `3`
- * Thread with id `198` is executing T1, its running code from the class `com.SimpleLoader`
- * T1 reads row `4333` and column `stat count` which does not exist
- * T1 sets row `4333` and column `stat count` to `1`
- * T1 commits successfully and its commit timestamp from the Oracle is `4`.
- * Transaction T2 has a start timestamp of `5` (because its `5` > `4` it can see what T1 wrote). 
- * T2 reads a value of `1` for row `4333` and column `stat count`
- * T2 sets row `4333` and `column `stat count` to `2`
- * T2 commits successfully with a commit timestamp of `6`
+* Transaction T1 has a start timestamp of `3`
+* The thread with id `198` is executing T1; it is running code from the class `com.SimpleLoader`
+* T1 reads row `4333` and column `stat count`, which does not exist
+* T1 sets row `4333` and column `stat count` to `1`
+* T1 commits successfully and its commit timestamp from the Oracle is `4`
+* Transaction T2 has a start timestamp of `5` (because `5` > `4`, it can see what T1 wrote)
+* T2 reads a value of `1` for row `4333` and column `stat count`
+* T2 sets row `4333` and column `stat count` to `2`
+* T2 commits successfully with a commit timestamp of `6`
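
The timestamp rule in this trace can be sketched with a toy multi-versioned store (an illustration
of snapshot reads only, not Fluo's actual implementation; row and column are joined into one key
here): a transaction with start timestamp `s` sees the value with the largest commit timestamp
strictly less than `s`.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Toy multi-versioned store: each (row, column) key maps commit
// timestamps to values.
public class SnapshotStore {

  private final Map<String, TreeMap<Long, String>> store = new HashMap<>();

  // Record a value committed at the given timestamp.
  public void commit(String rowCol, long commitTs, String value) {
    store.computeIfAbsent(rowCol, k -> new TreeMap<>()).put(commitTs, value);
  }

  // Read as of a start timestamp: latest commit strictly before startTs.
  public String get(String rowCol, long startTs) {
    TreeMap<Long, String> versions = store.get(rowCol);
    if (versions == null) {
      return null;
    }
    Map.Entry<Long, String> e = versions.lowerEntry(startTs);
    return e == null ? null : e.getValue();
  }

  public static void main(String[] args) {
    SnapshotStore s = new SnapshotStore();
    // T1 (start ts 3) reads nothing, then its write commits at ts 4.
    System.out.println(s.get("4333/stat count", 3)); // prints: null
    s.commit("4333/stat count", 4, "1");
    // T2 (start ts 5) sees T1's write because 5 > 4.
    System.out.println(s.get("4333/stat count", 5)); // prints: 1
    s.commit("4333/stat count", 6, "2");
  }
}
```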
 
-Below is an example log after only setting `fluo.tx.collisions` to TRACE.
-This setting will only log trace information when a collision occurs.  Unlike
-the previous example, what the transaction read and wrote is not logged.  This
-shows that a transaction with a start timestamp of `106` and a class name of
-`com.SimpleLoader` collided with another transaction on row `r1` and column
-`fam1 qual1`.
+Below is an example log after only setting `fluo.tx.collisions` to TRACE. This setting will only log
+trace information when a collision occurs. Unlike the previous example, what the transaction read
+and wrote is not logged. This shows that a transaction with a start timestamp of `106` and a class
+name of `com.SimpleLoader` collided with another transaction on row `r1` and column `fam1 qual1`.
 
 ```
 2015-02-11 18:17:02,639 [tx.collisions] TRACE: txid: 106 class: com.SimpleLoader
 2015-02-11 18:17:02,639 [tx.collisions] TRACE: txid: 106 collisions: {r1=[fam1 qual1 ]}
 ```
 
-When applications read and write arbitrary binary data, this does not log so
-well.  In order to make the trace logs human readable, non ASCII chars are
-escaped using hex.  The convention used it `\xDD`  where D is a hex digit. Also
-the `\` character is escaped to make the output unambiguous.
+When applications read and write arbitrary binary data, it does not log well. To make the trace
+logs human readable, non-ASCII characters are escaped using hex. The convention used is `\xDD`,
+where each `D` is a hex digit. The `\` character is also escaped to make the output unambiguous.
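
The convention can be sketched as follows (an illustration of the described escaping, not Fluo's
actual code; upper-case hex digits are an assumption here):

```java
// Escape arbitrary bytes for human-readable trace logs.
public class TraceEscaper {

  public static String escape(byte[] data) {
    StringBuilder sb = new StringBuilder();
    for (byte b : data) {
      int c = b & 0xff;
      if (c == '\\') {
        sb.append("\\\\"); // escape the escape character itself
      } else if (c >= 0x20 && c <= 0x7e) {
        sb.append((char) c); // printable ASCII passes through
      } else {
        sb.append(String.format("\\x%02X", c)); // everything else as \xDD
      }
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    byte[] raw = {'a', 'b', 0x00, (byte) 0xff, '\\'};
    System.out.println(escape(raw)); // prints: ab\x00\xFF\\
  }
}
```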
 
 [phrasecount]: https://github.com/fluo-io/phrasecount
 [FluoFactory]: ../modules/api/src/main/java/org/apache/fluo/api/client/FluoFactory.java
@@ -174,4 +164,4 @@
 [AbstractObserver]: ../modules/api/src/main/java/org/apache/fluo/api/observer/AbstractObserver.java
 [fluo.properties]: ../modules/distribution/src/main/config/fluo.properties
 [API]: https://fluo.apache.org/apidocs/
-[metrics]: metrics.md
+[metrics]: metrics.md
\ No newline at end of file
diff --git a/docs/architecture.md b/docs/architecture.md
index e70a0cb..cc36682 100644
--- a/docs/architecture.md
+++ b/docs/architecture.md
@@ -1,40 +1,41 @@
-Fluo Architecture
-=================
+# Fluo Architecture
 
 ![fluo-architecture][1]
 
 ## Fluo Application
 
 A **Fluo application** maintains a large scale computation using a series of small transactional
-updates.  Fluo applications store their data in a **Fluo table** which has a similar structure (row, 
-column, value) to an **Accumulo table** except that a Fluo table has no timestamps.  A Fluo table
-is implemented using an Accumulo table.  While you could scan the Accumulo table used to implement 
+updates. Fluo applications store their data in a **Fluo table** which has a similar structure (row,
+column, value) to an **Accumulo table** except that a Fluo table has no timestamps. A Fluo table
+is implemented using an Accumulo table. While you could scan the Accumulo table used to implement
 a Fluo table using an Accumulo client, you would read extra implementation-related data in addition
-to your data.  Therefore, developers should only interact with the data in a Fluo table by writing 
+to your data. Therefore, developers should only interact with the data in a Fluo table by writing
 Fluo client or observer code:
 
- * **Clients** ingest data or interact with Fluo from external applications (REST services, crawlers, etc).
- * **Observers** are run by Fluo workers and trigger a transaction when a requested column is 
-    modified in the Fluo table.
+* **Clients** ingest data or interact with Fluo from external applications (REST services,
+  crawlers, etc).
+* **Observers** are run by Fluo workers and trigger a transaction when a requested column is
+  modified in the Fluo table.
 
-Multiple Fluo applications can run on a cluster at the same time.  Each Fluo application runs as a 
-Hadoop YARN application and can be stopped, started, and upgraded independently.  Fluo applications 
+Multiple Fluo applications can run on a cluster at the same time. Each Fluo application runs as a
+Hadoop YARN application and can be stopped, started, and upgraded independently. Fluo applications
 consist of an oracle process and a configurable number of worker processes:
 
- * The **Oracle** process allocates timestamps for transactions.  While only one Oracle is required, 
-   Fluo can be configured to run extra Oracles that can take over if the primary Oracle fails.
- * **Worker** processes run user code (called **observers**) that perform transactions.  All workers
-   run the same observers.  The number of worker instances are configured to handle the processing 
+* The **Oracle** process allocates timestamps for transactions. While only one Oracle is required,
+  Fluo can be configured to run extra Oracles that can take over if the primary Oracle fails.
+* **Worker** processes run user code (called **observers**) that perform transactions. All workers
+   run the same observers. The number of worker instances is configured to handle the processing
    workload.
-   
+
 ## Fluo Dependencies
 
 Fluo requires the following software to be running on the cluster:
 
- * **Accumulo** - Fluo stores its data in Accumulo and uses Accumulo's conditional mutations for transactions. 
- * **Hadoop** - Each Fluo application run its oracle and worker processes as Hadoop YARN applications. 
-            HDFS is also required for Accumulo.
- * **Zookeeper** - Fluo stores its metadata and state information in Zookeeper.  Zookeeper is also 
-            required for Accumulo.
-      
-[1]: resources/fluo-architecture.png
+* **Accumulo** - Fluo stores its data in Accumulo and uses Accumulo's conditional mutations for
+  transactions.
+* **Hadoop** - Each Fluo application runs its oracle and worker processes as Hadoop YARN
+  applications. HDFS is also required for Accumulo.
+* **Zookeeper** - Fluo stores its metadata and state information in Zookeeper. Zookeeper is also
+  required for Accumulo.
+
+[1]: resources/fluo-architecture.png
\ No newline at end of file
diff --git a/docs/contributing.md b/docs/contributing.md
index 17af2e6..3146338 100644
--- a/docs/contributing.md
+++ b/docs/contributing.md
@@ -1,27 +1,29 @@
-Contributing to Fluo
-====================
+# Contributing to Fluo
 
-Building Fluo
--------------
+## Building Fluo
 
-If you have [Git], [Maven], and [Java][java] (version 8+) installed, run these commands
-to build Fluo:
+If you have [Git], [Maven], and [Java][java] (version 8+) installed, run these commands to build
+Fluo:
 
     git clone https://github.com/apache/incubator-fluo.git
     cd fluo
     mvn package
 
-Testing Fluo
-------------
+## Testing Fluo
 
 Fluo has a test suite that consists of the following:
 
-* Units tests which are run by `mvn package`
-* Integration tests which are run using `mvn verify`.  These tests start
-a local Fluo instance (called MiniFluo) and run against it.
-* A [Stress test][Stress] application designed to run on a cluster.
+*  Unit tests, which are run by `mvn package`
+*  Integration tests which are run using `mvn verify`. These tests start a local Fluo instance
+   (called MiniFluo) and run against it.
+*  A [Stress test][Stress] application designed to run on a cluster.
+
+## See Also
+
+*  [How to Contribute][contribute] on the Apache Fluo project website
 
 [Git]: http://git-scm.com/
 [java]: http://openjdk.java.net/
 [Maven]: http://maven.apache.org/
 [Stress]: https://github.com/fluo-io/fluo-stress
+[contribute]: https://fluo.apache.org/how-to-contribute/
\ No newline at end of file
diff --git a/docs/grafana.md b/docs/grafana.md
index 5d10b96..b911567 100644
--- a/docs/grafana.md
+++ b/docs/grafana.md
@@ -1,30 +1,29 @@
-
 # Fluo metrics in Grafana/InfluxDB
 
-Fluo is instrumented using [dropwizard metrics][1] which allows Fluo to be configured
-to send metrics to multiple metrics tools (such as Graphite, Ganglia, etc).
+Fluo is instrumented using [dropwizard metrics][1] which allows Fluo to be configured to send
+metrics to multiple metrics tools (such as Graphite, Ganglia, etc).
 
-This document describes how to send Fluo metrics to [InfluxDB], a time series database, and make 
-them viewable in [Grafana], a visualization tool.  If you want general information on metrics, see the 
-[Fluo metrics][2] documentation. 
+This document describes how to send Fluo metrics to [InfluxDB], a time series database, and make
+them viewable in [Grafana], a visualization tool. If you want general information on metrics, see
+the [Fluo metrics][2] documentation.
 
 ## Set up Grafana/InfluxDB using fluo-dev or Zetten
 
-The easiest way to view the metrics coming from Fluo is to use [fluo-dev] or [Zetten] which
-can be configured to setup InfluxDB and Grafana as well have Fluo send data to
-them.  Fluo-dev will also set up a Fluo dashboard in Grafana.
+The easiest way to view the metrics coming from Fluo is to use [fluo-dev] or [Zetten], which can
+be configured to set up InfluxDB and Grafana as well as have Fluo send data to them. Fluo-dev
+will also set up a Fluo dashboard in Grafana.
 
 ## Set up Grafana/InfluxDB on your own
 
-If you are not using [fluo-dev] or [Zetten], you can follow the instructions below to setup InfluxDB 
+If you are not using [fluo-dev] or [Zetten], you can follow the instructions below to set up InfluxDB
 and Grafana on your own.
 
-1.  Follow the standard installation instructions for [InfluxDB] and [Grafana].  As for versions, 
-    the instructions below were written using InfluxDB v0.9.4.2 and Grafana v2.5.0. 
+1.  Follow the standard installation instructions for [InfluxDB] and [Grafana]. As for versions,
+    the instructions below were written using InfluxDB v0.9.4.2 and Grafana v2.5.0.
 
-2.  Add the following to your InfluxDB configuration to configure it accept metrics
-    in Graphite format from Fluo.  The configuration below contains templates that
-    transform the Graphite metrics into a format that is usable in InfluxDB.
+2.  Add the following to your InfluxDB configuration to configure it to accept metrics in Graphite
+    format from Fluo. The configuration below contains templates that transform the Graphite
+    metrics into a format that is usable in InfluxDB.
 
     ```
     [[graphite]]
@@ -44,18 +43,17 @@
       ]
     ```
 
-3. Fluo distributes a file called `fluo_metrics_setup.txt` that contains a list
-   of commands that setup InfluxDB.  These commands will configure an InfluxDB user, 
-   retention policies, and continuous queries that downsample data for the historical
-   dashboard in Grafana.  Run the command below to execute the commands in this file:
+3. Fluo distributes a file called `fluo_metrics_setup.txt` that contains a list of commands that
+   set up InfluxDB. These commands will configure an InfluxDB user, retention policies, and
+   continuous queries that downsample data for the historical dashboard in Grafana. Run the command
+   below to execute the commands in this file:
 
     ```
     $INFLUXDB_HOME/bin/influx -import -path $FLUO_HOME/contrib/influxdb/fluo_metrics_setup.txt
     ```
 
-3. Configure `fluo.properties` in your Fluo app configuration to send Graphite 
-   metrics to InfluxDB.  Below is example configuration. Remember to replace
-   `<INFLUXDB_HOST>` with the actual host.
+3. Configure `fluo.properties` in your Fluo app configuration to send Graphite metrics to InfluxDB.
+   Below is an example configuration. Remember to replace `<INFLUXDB_HOST>` with the actual host.
 
     ```
     fluo.metrics.reporter.graphite.enable=true
@@ -64,13 +62,13 @@
     fluo.metrics.reporter.graphite.frequency=30
     ```
 
-    The reporting frequency of 30 sec is required if you are using the provided
-    Grafana dashboards that are configured in the next step.
+    The reporting frequency of 30 sec is required if you are using the provided Grafana dashboards
+    that are configured in the next step.
 
-4.  Grafana needs to be configured to load dashboard JSON templates from a directory.  
-    Fluo distributes two Grafana dashboard templates in its tarball distribution in the
-    directory `contrib/grafana`. Before restarting Grafana, you should copy the templates
-    from your Fluo installation to the `dashboards/` directory configured below.
+4.  Grafana needs to be configured to load dashboard JSON templates from a directory. Fluo
+    distributes two Grafana dashboard templates in its tarball distribution in the directory
+    `contrib/grafana`. Before restarting Grafana, you should copy the templates from your Fluo
+    installation to the `dashboards/` directory configured below.
 
     ```
     [dashboards.json]
@@ -78,14 +76,14 @@
     path = <GRAFANA_HOME>/dashboards
     ```
 
-5.  If you restart Grafana, you will see the Fluo dashboards configured but all of their charts will 
-    be empty unless you have a Fluo application running and configured to send
-    data to InfluxDB.  When you start sending data, you may need to refresh the dashboard page in 
-    the browser to start viewing metrics.
+5.  If you restart Grafana, you will see the Fluo dashboards configured but all of their charts will
+    be empty unless you have a Fluo application running and configured to send data to InfluxDB.
+    When you start sending data, you may need to refresh the dashboard page in the browser to start
+    viewing metrics.
 
 [1]: https://dropwizard.github.io/metrics/3.1.0/
 [2]: metrics.md
 [fluo-dev]: https://github.com/fluo-io/fluo-dev
 [Zetten]: https://github.com/fluo-io/zetten
 [Grafana]: http://grafana.org/
-[InfluxDB]: https://influxdb.com/
+[InfluxDB]: https://influxdb.com/
\ No newline at end of file
diff --git a/docs/install.md b/docs/install.md
index 9856ff9..c10a454 100644
--- a/docs/install.md
+++ b/docs/install.md
@@ -1,15 +1,13 @@
-Fluo Install Instructions
-=========================
+# Fluo Install Instructions
 
-Install instructions for running Fluo on machine or cluster where Accumulo, Hadoop,
-and Zookeeper are installed and running.  If you want to avoid setting up these
-dependencies, consider using [fluo-dev] or [Zetten].
+Install instructions for running Fluo on a machine or cluster where Accumulo, Hadoop, and Zookeeper
+are installed and running. If you want to avoid setting up these dependencies, consider using
+[fluo-dev] or [Zetten].
 
-Requirements
-------------
+## Requirements
 
-Before you install Fluo, the following software must be installed and running on
-your local machine or cluster:
+Before you install Fluo, the following software must be installed and running on your local machine
+or cluster:
 
 | Software    | Recommended Version | Minimum Version |
 |-------------|---------------------|-----------------|
@@ -18,69 +16,65 @@
 | [Zookeeper] | 3.4.8               |                 |
 | [Java]      | JDK 8               | JDK 8           |
 
-Obtain a distribution
----------------------
+## Obtain a distribution
 
-Before you can install Fluo, you will need to obtain a distribution tarball.  It is
-recommended that you download the [latest release][release].  You can also build
-a distribution from the master branch by following these steps which create a tarball
-in `modules/distribution/target`:
+Before you can install Fluo, you will need to obtain a distribution tarball. It is recommended that
+you download the [latest release][release]. You can also build a distribution from the master
+branch by following these steps, which create a tarball in `modules/distribution/target`:
 
     git clone https://github.com/apache/incubator-fluo.git
     cd fluo/
     mvn package
 
-Install Fluo
-------------
+## Install Fluo
 
 After you obtain a Fluo distribution tarball, follow these steps to install Fluo.
 
-1. Choose a directory with plenty of space and untar the distribution:
+1.  Choose a directory with plenty of space and untar the distribution:
 
         tar -xvzf fluo-1.0.0-incubating-bin.tar.gz
 
-2. Copy the example configuration to the base of your configuration directory to create
-the default configuration for your Fluo install:
+2.  Copy the example configuration to the base of your configuration directory to create the default
+    configuration for your Fluo install:
 
         cp conf/examples/* conf/
 
     The default configuration will be used as the base configuration for each new application.
 
-3. Modify [fluo.properties] for your environment.  However, you should not configure any 
-application settings (like observers). 
+3.  Modify [fluo.properties] for your environment. However, you should not configure any
+    application settings (like observers).
 
-    NOTE - All properties that have a default are set with it.  Uncomment a property if you want 
-to use a value different than the default.  Properties that are unset and uncommented must be
-set by the user.
+    NOTE - All properties that have a default are set with it. Uncomment a property if you want
+    to use a value different from the default. Properties that are unset and uncommented must be
+    set by the user.
 
 4. Fluo needs to build its classpath using jars from the versions of Hadoop, Accumulo, and
 Zookeeper that you are using. Choose one of the two ways below to make these jars available
 to Fluo:
 
     * Set `HADOOP_PREFIX`, `ACCUMULO_HOME`, and `ZOOKEEPER_HOME` in your environment or configure
-    these variables in [fluo-env.sh].  Fluo will look in these locations for jars.
+    these variables in [fluo-env.sh]. Fluo will look in these locations for jars.
     * Run `./lib/fetch.sh ahz` to download Hadoop, Accumulo, and Zookeeper jars to `lib/ahz` and
     configure [fluo-env.sh] to look in this directory. By default, this command will download the
     default versions set in [lib/ahz/pom.xml]. If you are not using the default versions, you can
     override them:
-    
+
             ./lib/fetch.sh ahz -Daccumulo.version=1.7.2 -Dhadoop.version=2.7.2 -Dzookeeper.version=3.4.8
 
-5. Fluo needs more dependencies than what is available from Hadoop, Accumulo, and Zookeeper.
-These extra dependencies need to be downloaded to `lib/` using the command below:
+5. Fluo needs more dependencies than are available from Hadoop, Accumulo, and Zookeeper. These
+   extra dependencies need to be downloaded to `lib/` using the command below:
 
         ./lib/fetch.sh extra
 
 You are now ready to use the Fluo command script.
 
-Fluo command script
--------------------
+## Fluo command script
 
-The Fluo command script is located at `bin/fluo` of your Fluo installation.  All Fluo
-commands are invoked by this script.
+The Fluo command script is located at `bin/fluo` of your Fluo installation. All Fluo commands are
+invoked by this script.
 
-Modify and add the following to your `~/.bashrc` if you want to be able to execute the
-fluo script from any directory:
+Modify and add the following to your `~/.bashrc` if you want to be able to execute the fluo script
+from any directory:
 
     export PATH=/path/to/fluo-1.0.0-incubating/bin:$PATH
 
@@ -93,34 +87,32 @@
 
     ./bin/fluo
 
-Configure a Fluo application
-----------------------------
+## Configure a Fluo application
 
-You are now ready to configure a Fluo application.  Use the command below to create the
-configuration necessary for a new application.  Feel free to pick a different name (other
-than `myapp`) for your application:
+You are now ready to configure a Fluo application. Use the command below to create the
+configuration necessary for a new application. Feel free to pick a different name (other than
+`myapp`) for your application:
 
     fluo new myapp
 
-This command will create a directory for your application at `apps/myapp` of your Fluo
-install which will contain a `conf` and `lib`.
+This command will create a directory for your application at `apps/myapp` of your Fluo install
+which will contain `conf` and `lib` directories.
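
The resulting layout is roughly as follows (a sketch; only the pieces discussed in this document
are shown):

    apps/myapp/
        conf/
            fluo.properties
        lib/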
 
 The `apps/myapp/conf` directory contains a copy of the `fluo.properties` from your default
-configuration.  This should be configured for your application:
+configuration. This should be configured for your application:
 
     vim apps/myapp/fluo.properties
 
-When configuring the observer section in fluo.properties, you can configure your instance
-for the [phrasecount] application if you have not created your own application. See
-the [phrasecount] example for instructions. You can also choose not to configure any
-observers but your workers will be idle when started.
+When configuring the observer section in fluo.properties, you can configure your instance for the
+[phrasecount] application if you have not created your own application. See the [phrasecount]
+example for instructions. You can also choose not to configure any observers, but your workers will
+be idle when started.
 
-The `apps/myapp/lib` directory should contain any observer jars for your application. If 
-you configured [fluo.properties] for observers, copy any jars containing these
-observer classes to this directory.
- 
-Initialize your application
----------------------------
+The `apps/myapp/lib` directory should contain any observer jars for your application. If you
+configured [fluo.properties] for observers, copy any jars containing these observer classes to this
+directory.
+
+## Initialize your application
 
 After your application has been configured, use the command below to initialize it:
 
@@ -128,21 +120,17 @@
 
 This only needs to be called once and stores configuration in Zookeeper.
 
-Start your application
-----------------------
+## Start your application
 
-A Fluo application consists of one oracle process and multiple worker processes.
-Before starting your application, you can configure the number of worker process
-in your [fluo.properties] file.
+A Fluo application consists of one oracle process and multiple worker processes. Before starting
+your application, you can configure the number of worker processes in your [fluo.properties] file.
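
As a sketch, the relevant YARN settings in [fluo.properties] look like the following (the property
names and values here are examples; check the comments in your copy of the file for the exact
keys):

    fluo.yarn.worker.instances=2
    fluo.yarn.worker.max.memory.mb=1024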
 
-When you are ready to start your Fluo application on your YARN cluster, run the
-command below:
+When you are ready to start your Fluo application on your YARN cluster, run the command below:
 
     fluo start myapp
 
-The start command above will work for a single-node or a large cluster.  By
-using YARN, you do not need to deploy the Fluo binaries to every node on your
-cluster or start processes on every node.
+The start command above will work for a single-node or a large cluster. By using YARN, you do not
+need to deploy the Fluo binaries to every node on your cluster or start processes on every node.
 
 You can use the following command to check the status of your instance:
 
@@ -152,15 +140,13 @@
 
     fluo info myapp
 
-You can also use `yarn application -list` to check the status of your Fluo instance
-in YARN.  Logs are viewable within YARN.
+You can also use `yarn application -list` to check the status of your Fluo instance in YARN. Logs
+are viewable within YARN.
 
-When you have data in your fluo instance, you can view it using the command `fluo scan`.
-Pipe the output to `less` using the command `fluo scan | less` if you want to page 
-through the data.
+When you have data in your Fluo instance, you can view it using the command `fluo scan`. Pipe the
+output to `less` using the command `fluo scan | less` if you want to page through the data.
 
-Stop your Fluo application
---------------------------
+## Stop your Fluo application
 
 Use the following command to stop your Fluo application:
 
@@ -170,38 +156,32 @@
 
     fluo kill myapp
 
-Tuning Accumulo
----------------
+## Tuning Accumulo
 
-Fluo will reread the same data frequently when it checks conditions on
-mutations.   When Fluo initializes a table it enables data caching to make
-this more efficient.  However you may need to increase the amount of memory
-available for caching in the tserver by increasing `tserver.cache.data.size`.
-Increasing this may require increasing the maximum tserver java heap size in
-`accumulo-env.sh`.
+Fluo will reread the same data frequently when it checks conditions on mutations. When Fluo
+initializes a table, it enables data caching to make this more efficient. However, you may need to
+increase the amount of memory available for caching in the tserver by increasing
+`tserver.cache.data.size`. Increasing this may require increasing the maximum tserver Java heap
+size in `accumulo-env.sh`.
 
-Fluo will run many client threads, will want to ensure the tablet server
-has enough threads.  Should probably increase the
-`tserver.server.threads.minimum` Accumulo setting.
+Fluo will run many client threads, so you will want to ensure the tablet server has enough
+threads. You should probably increase the `tserver.server.threads.minimum` Accumulo setting.
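
For example, the two settings above could be raised in `accumulo-site.xml` (the values below are
illustrative, not recommendations):

    <property>
      <name>tserver.cache.data.size</name>
      <value>1G</value>
    </property>
    <property>
      <name>tserver.server.threads.minimum</name>
      <value>64</value>
    </property>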
 
-Using at least Accumulo 1.6.1 is recommended because multiple performance bugs
-were fixed.
+Using at least Accumulo 1.6.1 is recommended because multiple performance bugs were fixed.
 
-Tuning YARN
------------
+## Tuning YARN
 
-When running Fluo oracles and workers in YARN, the number of instances, max memory, and number
-of cores for Fluo processes can be configured in [fluo.properties]. If YARN is killing processes
-consider increasing `twill.java.reserved.memory.mb` (which defaults to 200 and is set in yarn-site.xml).
-The `twill.java.reserved.memory.mb` config determines the gap between the YARN memory limit set in
-[fluo.properties] and the java -Xmx setting.  For example, if max memory is 1024 and twill reserved
-memory is 200, the java -Xmx setting will be 1024-200 = 824 MB.
+When running Fluo oracles and workers in YARN, the number of instances, max memory, and number of
+cores for Fluo processes can be configured in [fluo.properties]. If YARN is killing processes,
+consider increasing `twill.java.reserved.memory.mb` (which defaults to 200 and is set in
+yarn-site.xml). The `twill.java.reserved.memory.mb` config determines the gap between the YARN
+memory limit set in [fluo.properties] and the Java -Xmx setting. For example, if max memory is
+1024 and twill reserved memory is 200, the Java -Xmx setting will be 1024 - 200 = 824 MB.
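
The arithmetic above can be sketched as follows (illustrative only, not Fluo code):

```python
# Sketch of the -Xmx calculation described above.
yarn_max_memory_mb = 1024      # YARN memory limit set in fluo.properties
twill_reserved_mb = 200        # twill.java.reserved.memory.mb default
xmx_mb = yarn_max_memory_mb - twill_reserved_mb
print(xmx_mb)  # 824
```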
 
-Run locally without YARN
-------------------------
+## Run locally without YARN
 
-If you do not have YARN set up, you can start the oracle and worker as a local 
-Fluo process using the following commands:
+If you do not have YARN set up, you can start the oracle and worker as a local Fluo process using
+the following commands:
 
     local-fluo start-oracle
     local-fluo start-worker
@@ -211,8 +191,8 @@
     local-fluo stop-worker
     local-fluo stop-oracle
 
-In a distributed environment, you will need to deploy and configure a Fluo 
-distribution on every node in your cluster.
+In a distributed environment, you will need to deploy and configure a Fluo distribution on every
+node in your cluster.
 
 [fluo-dev]: https://github.com/fluo-io/fluo-dev
 [Zetten]: https://github.com/fluo-io/zetten
@@ -224,4 +204,4 @@
 [phrasecount]: https://github.com/fluo-io/phrasecount
 [fluo.properties]: ../modules/distribution/src/main/config/fluo.properties
 [fluo-env.sh]: ../modules/distribution/src/main/config/fluo-env.sh
-[lib/ahz/pom.xml]: ../modules/distribution/src/main/lib/ahz/pom.xml
+[lib/ahz/pom.xml]: ../modules/distribution/src/main/lib/ahz/pom.xml
\ No newline at end of file
diff --git a/docs/metrics.md b/docs/metrics.md
index eb2a1e4..ff3cdc3 100644
--- a/docs/metrics.md
+++ b/docs/metrics.md
@@ -1,56 +1,50 @@
-Fluo Metrics
-============
+# Fluo Metrics
 
-Fluo core is instrumented using [dropwizard metrics][1].  This allows fluo
-users to easily gather information about Fluo by configuring different
-reporters.  While dropwizard can be configured to report Fluo metrics to many
-different tools, below are some tools that have been used with Fluo.
+Fluo core is instrumented using [dropwizard metrics][1]. This allows Fluo users to easily gather
+information about Fluo by configuring different reporters. While dropwizard can be configured to
+report Fluo metrics to many different tools, below are some tools that have been used with Fluo.
 
-1. [Grafana/InfluxDB][3] - Fluo has [documentation][3] for sending metrics to
-   InfluxDB and viewing them in Grafana. The [fluo-dev] tool can also set up
-   these tools for you and configure Fluo to send to them.
+1.  [Grafana/InfluxDB][3] - Fluo has [documentation][3] for sending metrics to InfluxDB and viewing
+    them in Grafana. The [fluo-dev] tool can also set up these tools for you and configure Fluo to
+    send to them.
 
-2. JMX - Fluo can be configured to reports metrics via JMX which can be viewed
-   in jconsole or jvisualvm.
- 
-3. CSV - Fluo can be configured to output metrics as CSV to a specified directory.
+2.  JMX - Fluo can be configured to report metrics via JMX, which can be viewed in jconsole or
+    jvisualvm.
 
-Configuring Reporters
----------------------
+3.  CSV - Fluo can be configured to output metrics as CSV to a specified directory.
 
-Inorder to configure metrics reporters, look at the metrics section in an
-applications `fluo.properties` file.  This sections has a lot of commented out
-options for configuring reporters.
+## Configuring Reporters
+
+In order to configure metrics reporters, look at the metrics section in an application's
+`fluo.properties` file. This section has many commented-out options for configuring reporters.
 
     fluo.metrics.reporter.console.enable=false
     fluo.metrics.reporter.console.frequency=30
 
 The frequency is in seconds for all reporters.
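
For example, enabling the JMX reporter is a matter of flipping the corresponding flag (the
property name below is an assumption based on the console reporter pattern above; check the
comments in `fluo.properties` for the exact key):

    fluo.metrics.reporter.jmx.enable=true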
 
-Metrics reported by Fluo
-------------------------
+## Metrics reported by Fluo
 
 All metrics reported by Fluo have the prefix `fluo.<APP>.<PID>.` which is denoted by `<prefix>` in
-the table below.  In the prefix, `<APP>` represents the Fluo application name and `<PID>` is the 
-process ID of the Fluo oracle or worker that is reporting the metric.  When running in yarn, this 
-id is of the format `worker-<instance id>` or `oracle-<instance id>`.  When not running from yarn, 
-this id consist of a hostname and a base36 long that is unique across all fluo processes. 
+the table below. In the prefix, `<APP>` represents the Fluo application name and `<PID>` is the
+process ID of the Fluo oracle or worker that is reporting the metric. When running in YARN, this id
+is of the format `worker-<instance id>` or `oracle-<instance id>`. When not running in YARN, this
+id consists of a hostname and a base36 long that is unique across all Fluo processes.
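
The naming scheme above can be sketched in a few lines (illustrative only; `MyObserver` is a
hypothetical observer class name):

```python
# Build a full metric name from the pieces described above.
app = "myapp"        # Fluo application name (<APP>)
pid = "worker-1"     # process id when running in YARN (<PID>)
prefix = "fluo.{}.{}.".format(app, pid)
metric = prefix + "tx.collisions.MyObserver"  # <prefix>.tx.collisions.<cn>
print(metric)  # fluo.myapp.worker-1.tx.collisions.MyObserver
```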
 
-Some of the metrics reported have the class name as the suffix.  This classname
-is the observer or load task that executed the transactions.   This should
-allow things like transaction collisions to be tracked per class.  In the
-table below this is denoted with `<cn>`.
+Some of the metrics reported have the class name as the suffix. This class name is that of the
+observer or load task that executed the transaction. This should allow things like transaction
+collisions to be tracked per class. In the table below this is denoted with `<cn>`.
 
 |Metric                                 | Type           | Description                         |
 |---------------------------------------|----------------|-------------------------------------|
-|\<prefix\>.tx.lock_wait_time.\<cn\>    | [Timer][T]     | *WHEN:* After each transaction. *COND:* &gt; 0 *WHAT:* Time transaction spent waiting on locks held by other transactions.   |
-|\<prefix\>.tx.execution_time.\<cn\>    | [Timer][T]     | *WHEN:* After each transaction. *WHAT:* Time transaction took to execute.  Updated for failed and successful transactions.  This does not include commit time, only the time from start until commit is called. |
-|\<prefix\>.tx.with_collision.\<cn\>    | [Meter][M]     | *WHEN:* After each transaction. *WHAT:* Rate of transactions with collisions.  |
-|\<prefix\>.tx.collisions.\<cn\>        | [Meter][M]     | *WHEN:* After each transaction. *WHAT:* Rate of collisions.  |
+|\<prefix\>.tx.lock_wait_time.\<cn\>    | [Timer][T]     | *WHEN:* After each transaction. *COND:* &gt; 0 *WHAT:* Time transaction spent waiting on locks held by other transactions.  |
+|\<prefix\>.tx.execution_time.\<cn\>    | [Timer][T]     | *WHEN:* After each transaction. *WHAT:* Time transaction took to execute. Updated for failed and successful transactions. This does not include commit time, only the time from start until commit is called. |
+|\<prefix\>.tx.with_collision.\<cn\>    | [Meter][M]     | *WHEN:* After each transaction. *WHAT:* Rate of transactions with collisions. |
+|\<prefix\>.tx.collisions.\<cn\>        | [Meter][M]     | *WHEN:* After each transaction. *WHAT:* Rate of collisions. |
 |\<prefix\>.tx.entries_set.\<cn\>       | [Meter][H]     | *WHEN:* After each transaction. *WHAT:* Rate of row/columns set by transaction |
-|\<prefix\>.tx.entries_read.\<cn\>      | [Meter][H]     | *WHEN:* After each transaction. *WHAT:* Rate of row/columns read by transaction that existed.  There is currently no count of all reads (including non-existent data) |
-|\<prefix\>.tx.locks_timedout.\<cn\>    | [Meter][M]     | *WHEN:* After each transaction. *WHAT:* Rate of timedout locks rolled back by transaction.  These are locks that are held for very long periods by another transaction that appears to be alive based on zookeeper.  |
-|\<prefix\>.tx.locks_dead.\<cn\>        | [Meter][M]     | *WHEN:* After each transaction. *WHAT:* Rate of dead locks rolled by a transaction.  These are locks held by a process that appears to be dead according to zookeeper.  |
+|\<prefix\>.tx.entries_read.\<cn\>      | [Meter][H]     | *WHEN:* After each transaction. *WHAT:* Rate of row/columns read by transaction that existed. There is currently no count of all reads (including non-existent data) |
+|\<prefix\>.tx.locks_timedout.\<cn\>    | [Meter][M]     | *WHEN:* After each transaction. *WHAT:* Rate of timedout locks rolled back by transaction. These are locks that are held for very long periods by another transaction that appears to be alive based on zookeeper. |
+|\<prefix\>.tx.locks_dead.\<cn\>        | [Meter][M]     | *WHEN:* After each transaction. *WHAT:* Rate of dead locks rolled by a transaction. These are locks held by a process that appears to be dead according to zookeeper. |
 |\<prefix\>.tx.status_\<status\>.\<cn\> | [Meter][M]     | *WHEN:* After each transaction. *WHAT:* Rate of different ways a transaction can terminate |
 |\<prefix\>.oracle.response_time        | [Timer][T]     | *WHEN:* For each request for stamps to the server. *WHAT:* Time RPC call to oracle took |
 |\<prefix\>.oracle.client_stamps        | [Histogram][H] | *WHEN:* For each request for stamps to the server. *WHAT:* The number of stamps requested. |
@@ -58,16 +52,14 @@
 |\<prefix\>.worker.notifications_queued | [Gauge][G]     | *WHAT:* The current number of notifications queued for processing. |
 |\<prefix\>.transactor.committing       | [Gauge][G]     | *WHAT:* The current number of transactions that are working their way through the commit steps. |
 
-The table above outlines when a particular metric is updated and whats updated.
-The use of *COND* indicates that the metric is not always updated.   For
-example `i.f.<pid>.tx.lockWait.<cn>` is only updated for transactions that had a non
-zero lock wait time.  
+The table above outlines when a particular metric is updated and what is updated. The use of *COND*
+indicates that the metric is not always updated. For example, `<prefix>.tx.lock_wait_time.<cn>` is
+only updated for transactions that had a nonzero lock wait time.
 
-Histograms and Timers have a counter.  In the case of a histogram, the counter
-is the number of times the metric was updated and not a sum of the updates.
-For example if a request for 5 timestamps was made to the oracle followed by a
-request for 3 timestamps, then the count for `i.f.<pid>.oracle.server.stamps` would
-be 2 and the mean would be (5+3)/2.
+Histograms and Timers have a counter. In the case of a histogram, the counter is the number of
+times the metric was updated, not a sum of the updates. For example, if a request for 5 timestamps
+was made to the oracle followed by a request for 3 timestamps, then the count for
+`<prefix>.oracle.server.stamps` would be 2 and the mean would be (5+3)/2.
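
The counting behavior can be sketched with a toy histogram (illustrative only; the real
implementation is dropwizard's `Histogram`):

```python
# Toy histogram mimicking the counter semantics described above.
class ToyHistogram:
    def __init__(self):
        self.count = 0   # number of updates, not a sum of values
        self.total = 0

    def update(self, value):
        self.count += 1
        self.total += value

    def mean(self):
        return self.total / self.count

h = ToyHistogram()
h.update(5)   # oracle request for 5 timestamps
h.update(3)   # oracle request for 3 timestamps
print(h.count)   # 2
print(h.mean())  # 4.0
```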
 
 [1]: https://dropwizard.github.io/metrics/3.1.0/
 [3]: grafana.md
@@ -76,4 +68,4 @@
 [H]: https://dropwizard.github.io/metrics/3.1.0/getting-started/#histograms
 [G]: https://dropwizard.github.io/metrics/3.1.0/getting-started/#gauges
 [M]: https://dropwizard.github.io/metrics/3.1.0/getting-started/#meters
-[fluo-dev]: https://github.com/fluo-io/fluo-dev
+[fluo-dev]: https://github.com/fluo-io/fluo-dev
\ No newline at end of file