Added titles and harmonized docs to improve usability and SEO (#6731) (#6735)

* added titles and harmonized docs

* manually fixed some titles
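
The harmonization applied across the pages below follows a single pattern: add a quoted `title` to the Jekyll front matter and replace the old setext or bold-style heading with a matching ATX `#` heading. A minimal sketch of the resulting page layout (the page name here is illustrative, not one of the files changed below):

```
---
layout: doc_page
title: "Some Druid Page"
---
# Some Druid Page

Page body starts here.
```
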
diff --git a/docs/content/comparisons/druid-vs-elasticsearch.md b/docs/content/comparisons/druid-vs-elasticsearch.md
index d0782bd..015200e 100644
--- a/docs/content/comparisons/druid-vs-elasticsearch.md
+++ b/docs/content/comparisons/druid-vs-elasticsearch.md
@@ -19,10 +19,9 @@
 
 ---
 layout: doc_page
+title: "Druid vs Elasticsearch"
 ---
-
-Druid vs Elasticsearch
-======================
+# Druid vs Elasticsearch
 
 We are not experts on search systems; if anything is incorrect about our portrayal, please let us know on the mailing list or via some other means.
 
diff --git a/docs/content/comparisons/druid-vs-key-value.md b/docs/content/comparisons/druid-vs-key-value.md
index 4e91101..d8ccd50 100644
--- a/docs/content/comparisons/druid-vs-key-value.md
+++ b/docs/content/comparisons/druid-vs-key-value.md
@@ -19,10 +19,9 @@
 
 ---
 layout: doc_page
+title: "Druid vs. Key/Value Stores (HBase/Cassandra/OpenTSDB)"
 ---
-
-Druid vs. Key/Value Stores (HBase/Cassandra/OpenTSDB)
-====================================================
+# Druid vs. Key/Value Stores (HBase/Cassandra/OpenTSDB)
 
 Druid is highly optimized for scans and aggregations, and it supports arbitrarily deep drill-downs into data sets. This same functionality 
 is supported in key/value stores in two ways:
diff --git a/docs/content/comparisons/druid-vs-kudu.md b/docs/content/comparisons/druid-vs-kudu.md
index 8d00ae5..7f8fc73 100644
--- a/docs/content/comparisons/druid-vs-kudu.md
+++ b/docs/content/comparisons/druid-vs-kudu.md
@@ -19,10 +19,9 @@
 
 ---
 layout: doc_page
+title: "Druid vs Kudu"
 ---
-
-Druid vs Kudu
-=============
+# Druid vs Kudu
 
 Kudu's storage format enables single-row updates, whereas updates to existing Druid segments require recreating the segment, so theoretically  
 the process for updating old values should be higher latency in Druid. However, the requirements in Kudu for maintaining extra head space to store 
diff --git a/docs/content/comparisons/druid-vs-redshift.md b/docs/content/comparisons/druid-vs-redshift.md
index 6595141..103ecc3 100644
--- a/docs/content/comparisons/druid-vs-redshift.md
+++ b/docs/content/comparisons/druid-vs-redshift.md
@@ -19,10 +19,9 @@
 
 ---
 layout: doc_page
+title: "Druid vs Redshift"
 ---
-Druid vs Redshift
-=================
-
+# Druid vs Redshift
 
 ### How does Druid compare to Redshift?
 
diff --git a/docs/content/comparisons/druid-vs-spark.md b/docs/content/comparisons/druid-vs-spark.md
index 9723bef..07f16fa 100644
--- a/docs/content/comparisons/druid-vs-spark.md
+++ b/docs/content/comparisons/druid-vs-spark.md
@@ -19,10 +19,9 @@
 
 ---
 layout: doc_page
+title: "Druid vs Spark"
 ---
-
-Druid vs Spark
-==============
+# Druid vs Spark
 
 Druid and Spark are complementary solutions as Druid can be used to accelerate OLAP queries in Spark.
 
diff --git a/docs/content/comparisons/druid-vs-sql-on-hadoop.md b/docs/content/comparisons/druid-vs-sql-on-hadoop.md
index 3bf2c3f..f867c24 100644
--- a/docs/content/comparisons/druid-vs-sql-on-hadoop.md
+++ b/docs/content/comparisons/druid-vs-sql-on-hadoop.md
@@ -19,17 +19,16 @@
 
 ---
 layout: doc_page
+title: "Druid vs SQL-on-Hadoop"
 ---
+# Druid vs SQL-on-Hadoop (Impala/Drill/Spark SQL/Presto)
 
-Druid vs SQL-on-Hadoop (Impala/Drill/Spark SQL/Presto)
-===========================================================
-
-SQL-on-Hadoop engines provide an 
-execution engine for various data formats and data stores, and 
+SQL-on-Hadoop engines provide an
+execution engine for various data formats and data stores, and
 many can be made to push computations down to Druid, while providing a SQL interface to Druid.
 
-For a direct comparison between the technologies and when to only use one or the other, things basically comes down to your 
-product requirements and what the systems were designed to do.  
+For a direct comparison between the technologies and when to use only one or the other, it basically comes down to your
+product requirements and what the systems were designed to do.
 
 Druid was designed to
 
@@ -37,7 +36,7 @@
 1. ingest data in real-time
 1. handle slice-n-dice style ad-hoc queries
 
-SQL-on-Hadoop engines generally sidestep Map/Reduce, instead querying data directly from HDFS or, in some cases, other storage systems. 
+SQL-on-Hadoop engines generally sidestep Map/Reduce, instead querying data directly from HDFS or, in some cases, other storage systems.
 Some of these engines (including Impala and Presto) can be colocated with HDFS data nodes and coordinate with them to achieve data locality for queries.
 What does this mean?  We can talk about it in terms of three general areas
 
@@ -47,37 +46,37 @@
 
 ### Queries
 
-Druid segments stores data in a custom column format. Segments are scanned directly as part of queries and each Druid server 
-calculates a set of results that are eventually merged at the Broker level. This means the data that is transferred between servers 
+Druid segments store data in a custom column format. Segments are scanned directly as part of queries, and each Druid server
+calculates a set of results that are eventually merged at the Broker level. This means the data transferred between servers
 consists of queries and results, and all computation is done internally as part of the Druid servers.
 
-Most SQL-on-Hadoop engines are responsible for query planning and execution for underlying storage layers and storage formats. 
-They are processes that stay on even if there is no query running (eliminating the JVM startup costs from Hadoop MapReduce).  
-Some (Impala/Presto) SQL-on-Hadoop engines have daemon processes that can be run where the data is stored, virtually eliminating network transfer costs. There is still 
-some latency overhead (e.g. serde time) associated with pulling data from the underlying storage layer into the computation layer. We are unaware of exactly 
+Most SQL-on-Hadoop engines are responsible for query planning and execution for underlying storage layers and storage formats.
+They are processes that stay on even if there is no query running (eliminating the JVM startup costs from Hadoop MapReduce).
+Some (Impala/Presto) SQL-on-Hadoop engines have daemon processes that can be run where the data is stored, virtually eliminating network transfer costs. There is still
+some latency overhead (e.g. serde time) associated with pulling data from the underlying storage layer into the computation layer. We are unaware of exactly
 how much of a performance impact this makes.
 
 ### Data Ingestion
 
-Druid is built to allow for real-time ingestion of data.  You can ingest data and query it immediately upon ingestion, 
+Druid is built to allow for real-time ingestion of data.  You can ingest data and query it immediately upon ingestion;
 the time until an event is reflected in the data is dominated by how long it takes to deliver the event to Druid.
 
-SQL-on-Hadoop, being based on data in HDFS or some other backing store, are limited in their data ingestion rates by the 
-rate at which that backing store can make data available.  Generally, the backing store is the biggest bottleneck for 
+SQL-on-Hadoop engines, being based on data in HDFS or some other backing store, are limited in their data ingestion rates by the
+rate at which that backing store can make data available.  Generally, the backing store is the biggest bottleneck for
 how quickly data can become available.
 
 ### Query Flexibility
 
-Druid's query language is fairly low level and maps to how Druid operates internally. Although Druid can be combined with a high level query 
-planner such as [Plywood](https://github.com/implydata/plywood) to support most SQL queries and analytic SQL queries (minus joins among large tables), 
+Druid's query language is fairly low level and maps to how Druid operates internally. Although Druid can be combined with a high level query
+planner such as [Plywood](https://github.com/implydata/plywood) to support most SQL queries and analytic SQL queries (minus joins among large tables),
 base Druid is less flexible than SQL-on-Hadoop solutions for generic processing.
 
 SQL-on-Hadoop engines support SQL-style queries with full joins.
 
 ## Druid vs Parquet
 
-Parquet is a column storage format that is designed to work with SQL-on-Hadoop engines. Parquet doesn't have a query execution engine, and instead 
+Parquet is a column storage format that is designed to work with SQL-on-Hadoop engines. Parquet doesn't have a query execution engine, and instead
 relies on external sources to pull data out of it.
 
-Druid's storage format is highly optimized for linear scans. Although Druid has support for nested data, Parquet's storage format is much 
+Druid's storage format is highly optimized for linear scans. Although Druid has support for nested data, Parquet's storage format is much
 more hierarchical, and is more designed for binary chunking. In theory, this should lead to faster scans in Druid.
diff --git a/docs/content/configuration/index.md b/docs/content/configuration/index.md
index 6e2e916..7588ae9 100644
--- a/docs/content/configuration/index.md
+++ b/docs/content/configuration/index.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Configuration Reference"
 ---
-
 # Configuration Reference
 
 This page documents all of the configuration properties for each Druid service type.
diff --git a/docs/content/configuration/logging.md b/docs/content/configuration/logging.md
index 640eea4..10a6769 100644
--- a/docs/content/configuration/logging.md
+++ b/docs/content/configuration/logging.md
@@ -19,9 +19,9 @@
 
 ---
 layout: doc_page
+title: "Logging"
 ---
-Logging
-==========================
+# Logging
 
 Druid nodes will emit logs that are useful for debugging to the console. Druid nodes also emit periodic metrics about their state. For more about metrics, see [Configuration](../configuration/index.html#enabling-metrics). Metric logs are printed to the console by default, and can be disabled with `-Ddruid.emitter.logging.logLevel=debug`.
 
diff --git a/docs/content/configuration/realtime.md b/docs/content/configuration/realtime.md
index 7189491..d5e3ba8 100644
--- a/docs/content/configuration/realtime.md
+++ b/docs/content/configuration/realtime.md
@@ -19,10 +19,10 @@
 
 ---
 layout: doc_page
+title: "Realtime Node Configuration"
 ---
+# Realtime Node Configuration
 
-Realtime Node Configuration
-==============================
 For general Realtime Node information, see [here](../design/realtime.html).
 
 Runtime Configuration
diff --git a/docs/content/dependencies/cassandra-deep-storage.md b/docs/content/dependencies/cassandra-deep-storage.md
index 3b0e3f1..3ba1791 100644
--- a/docs/content/dependencies/cassandra-deep-storage.md
+++ b/docs/content/dependencies/cassandra-deep-storage.md
@@ -19,15 +19,18 @@
 
 ---
 layout: doc_page
+title: "Cassandra Deep Storage"
 ---
+# Cassandra Deep Storage
 
 ## Introduction
+
 Druid can use Cassandra as a deep storage mechanism. Segments and their metadata are stored in Cassandra in two tables:
-`index_storage` and `descriptor_storage`.  Underneath the hood, the Cassandra integration leverages Astyanax.  The 
+`index_storage` and `descriptor_storage`.  Under the hood, the Cassandra integration leverages Astyanax.  The
 index storage table is a [Chunked Object](https://github.com/Netflix/astyanax/wiki/Chunked-Object-Store) repository. It contains
 compressed segments for distribution to historical nodes.  Since segments can be large, the Chunked Object storage allows the integration to multi-thread
-the write to Cassandra, and spreads the data across all the nodes in a cluster.  The descriptor storage table is a normal C* table that 
-stores the segment metadatak.  
+the write to Cassandra, and spreads the data across all the nodes in a cluster.  The descriptor storage table is a normal C* table that
+stores the segment metadata.
 
 ## Schema
 Below are the create statements for each:
diff --git a/docs/content/dependencies/deep-storage.md b/docs/content/dependencies/deep-storage.md
index 02a97d3..b75d1be 100644
--- a/docs/content/dependencies/deep-storage.md
+++ b/docs/content/dependencies/deep-storage.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Deep Storage"
 ---
-
 # Deep Storage
 
 Deep storage is where segments are stored.  It is a storage mechanism that Druid does not provide.  This deep storage infrastructure defines the level of durability of your data; as long as Druid nodes can see this storage infrastructure and get at the segments stored on it, you will not lose data no matter how many Druid nodes you lose.  If segments disappear from this storage layer, then you will lose whatever data those segments represented.
diff --git a/docs/content/dependencies/metadata-storage.md b/docs/content/dependencies/metadata-storage.md
index 767551f..d50fb05 100644
--- a/docs/content/dependencies/metadata-storage.md
+++ b/docs/content/dependencies/metadata-storage.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Metadata Storage"
 ---
-
 # Metadata Storage
 
 The Metadata Storage is an external dependency of Druid. Druid uses it to store
diff --git a/docs/content/dependencies/zookeeper.md b/docs/content/dependencies/zookeeper.md
index e9d6094..c1b6bb1 100644
--- a/docs/content/dependencies/zookeeper.md
+++ b/docs/content/dependencies/zookeeper.md
@@ -19,8 +19,10 @@
 
 ---
 layout: doc_page
+title: "ZooKeeper"
 ---
 # ZooKeeper
+
 Druid uses [ZooKeeper](http://zookeeper.apache.org/) (ZK) for management of current cluster state. The operations that happen over ZK are
 
 1.  [Coordinator](../design/coordinator.html) leader election
diff --git a/docs/content/design/auth.md b/docs/content/design/auth.md
index 92f0c62..62406b8 100644
--- a/docs/content/design/auth.md
+++ b/docs/content/design/auth.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Authentication and Authorization"
 ---
-
 # Authentication and Authorization
 
 |Property|Type|Description|Default|Required|
diff --git a/docs/content/design/broker.md b/docs/content/design/broker.md
index 529da80..7a35cd4 100644
--- a/docs/content/design/broker.md
+++ b/docs/content/design/broker.md
@@ -19,9 +19,9 @@
 
 ---
 layout: doc_page
+title: "Broker"
 ---
-Broker
-======
+# Broker
 
 ### Configuration
 
diff --git a/docs/content/design/coordinator.md b/docs/content/design/coordinator.md
index 4a16e93..e759357 100644
--- a/docs/content/design/coordinator.md
+++ b/docs/content/design/coordinator.md
@@ -19,9 +19,9 @@
 
 ---
 layout: doc_page
+title: "Coordinator Node"
 ---
-Coordinator Node
-================
+# Coordinator Node
 
 ### Configuration
 
diff --git a/docs/content/design/historical.md b/docs/content/design/historical.md
index 9398e52..06f8f20 100644
--- a/docs/content/design/historical.md
+++ b/docs/content/design/historical.md
@@ -19,9 +19,9 @@
 
 ---
 layout: doc_page
+title: "Historical Node"
 ---
-Historical Node
-===============
+# Historical Node
 
 ### Configuration
 
diff --git a/docs/content/design/index.md b/docs/content/design/index.md
index 7cb96b8..4bcb3bb 100644
--- a/docs/content/design/index.md
+++ b/docs/content/design/index.md
@@ -19,6 +19,7 @@
 
 ---
 layout: doc_page
+title: "Design"
 ---
 
 # What is Druid?<a id="what-is-druid"></a>
@@ -159,7 +160,7 @@
     - Bitmap compression for bitmap indexes
     - Type-aware compression for all columns
 
-Periodically, segments are committed and published. At this point, they are written to [deep storage](#deep-storage), 
+Periodically, segments are committed and published. At this point, they are written to [deep storage](#deep-storage),
 become immutable, and move from MiddleManagers to the Historical processes (see [Architecture](#architecture) above
 for details). An entry about the segment is also written to the [metadata store](#metadata-storage). This entry is a
 self-describing bit of metadata about the segment, including things like the schema of the segment, its size, and its
diff --git a/docs/content/design/indexing-service.md b/docs/content/design/indexing-service.md
index 4a8edef..b23f9c6 100644
--- a/docs/content/design/indexing-service.md
+++ b/docs/content/design/indexing-service.md
@@ -19,9 +19,9 @@
 
 ---
 layout: doc_page
+title: "Indexing Service"
 ---
-Indexing Service
-================
+# Indexing Service
 
 The indexing service is a highly-available, distributed service that runs indexing related tasks. 
 
diff --git a/docs/content/design/middlemanager.md b/docs/content/design/middlemanager.md
index 9779cdb..b3e5d84 100644
--- a/docs/content/design/middlemanager.md
+++ b/docs/content/design/middlemanager.md
@@ -19,10 +19,9 @@
 
 ---
 layout: doc_page
+title: "MiddleManager Node"
 ---
-
-Middle Manager Node
-------------------
+# MiddleManager Node
 
 ### Configuration
 
diff --git a/docs/content/design/overlord.md b/docs/content/design/overlord.md
index eaad38e..92c394b 100644
--- a/docs/content/design/overlord.md
+++ b/docs/content/design/overlord.md
@@ -19,10 +19,9 @@
 
 ---
 layout: doc_page
+title: "Overlord Node"
 ---
-
-Overlord Node
--------------
+# Overlord Node
 
 ### Configuration
 
diff --git a/docs/content/design/peons.md b/docs/content/design/peons.md
index c33a3d4..16fdff1 100644
--- a/docs/content/design/peons.md
+++ b/docs/content/design/peons.md
@@ -19,10 +19,9 @@
 
 ---
 layout: doc_page
+title: "Peons"
 ---
-
-Peons
------
+# Peons
 
 ### Configuration
 
diff --git a/docs/content/design/plumber.md b/docs/content/design/plumber.md
index ffbdaba..e3c1cfa 100644
--- a/docs/content/design/plumber.md
+++ b/docs/content/design/plumber.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Druid Plumbers"
 ---
-
 # Druid Plumbers
 
 The plumber handles generated segments both while they are being generated and when they are "done". This is also technically a pluggable interface and there are multiple implementations. However, plumbers handle numerous complex details, and therefore an advanced understanding of Druid is recommended before implementing your own.
diff --git a/docs/content/design/realtime.md b/docs/content/design/realtime.md
index 9f83fca..3a15c1d 100644
--- a/docs/content/design/realtime.md
+++ b/docs/content/design/realtime.md
@@ -19,10 +19,9 @@
 
 ---
 layout: doc_page
+title: "Real-time Node"
 ---
-
-Real-time Node
-==============
+# Real-time Node
 
 <div class="note info">
 NOTE: Realtime nodes are deprecated. Please use the <a href="../development/extensions-core/kafka-ingestion.html">Kafka Indexing Service</a> for stream pull use cases instead. 
diff --git a/docs/content/design/segments.md b/docs/content/design/segments.md
index 50f45bd..d1b8e65 100644
--- a/docs/content/design/segments.md
+++ b/docs/content/design/segments.md
@@ -19,9 +19,9 @@
 
 ---
 layout: doc_page
+title: "Segments"
 ---
-Segments
-========
+# Segments
 
 Druid stores its index in *segment files*, which are partitioned by
 time. In a basic setup, one segment file is created for each time
diff --git a/docs/content/development/build.md b/docs/content/development/build.md
index 9b9f6be..f0bd6e8 100644
--- a/docs/content/development/build.md
+++ b/docs/content/development/build.md
@@ -19,9 +19,9 @@
 
 ---
 layout: doc_page
+title: "Build from Source"
 ---
-
-### Build from Source
+# Build from Source
 
 You can build Druid directly from source. Please note that these instructions are for building the latest stable version of Druid.
 For building the latest code in master, follow the instructions [here](https://github.com/apache/incubator-druid/blob/master/docs/content/development/build.md).
diff --git a/docs/content/development/experimental.md b/docs/content/development/experimental.md
index fecaff1..760ea0d 100644
--- a/docs/content/development/experimental.md
+++ b/docs/content/development/experimental.md
@@ -19,9 +19,9 @@
 
 ---
 layout: doc_page
+title: "Experimental Features"
 ---
-
-# About Experimental Features
+# Experimental Features
 
 Experimental features are features we have developed but have not fully tested in a production environment. If you choose to try them out, there will likely be edge cases that we have not covered. We would love feedback on any of these features, whether they are bug reports, suggestions for improvement, or letting us know they work as intended.
 
diff --git a/docs/content/development/extensions-contrib/ambari-metrics-emitter.md b/docs/content/development/extensions-contrib/ambari-metrics-emitter.md
index fde2279..6357ca3 100644
--- a/docs/content/development/extensions-contrib/ambari-metrics-emitter.md
+++ b/docs/content/development/extensions-contrib/ambari-metrics-emitter.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Ambari Metrics Emitter"
 ---
-
 # Ambari Metrics Emitter
 
 To use this extension, make sure to [include](../../operations/including-extensions.html) `ambari-metrics-emitter` extension.
diff --git a/docs/content/development/extensions-contrib/azure.md b/docs/content/development/extensions-contrib/azure.md
index 035fe27..bea6b71 100644
--- a/docs/content/development/extensions-contrib/azure.md
+++ b/docs/content/development/extensions-contrib/azure.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Microsoft Azure"
 ---
-
 # Microsoft Azure
 
 To use this extension, make sure to [include](../../operations/including-extensions.html) `druid-azure-extensions` extension.
diff --git a/docs/content/development/extensions-contrib/cassandra.md b/docs/content/development/extensions-contrib/cassandra.md
index 7a70f69..b1e3c9e 100644
--- a/docs/content/development/extensions-contrib/cassandra.md
+++ b/docs/content/development/extensions-contrib/cassandra.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Apache Cassandra"
 ---
-
 # Apache Cassandra
 
 To use this extension, make sure to [include](../../operations/including-extensions.html) `druid-cassandra-storage` extension.
diff --git a/docs/content/development/extensions-contrib/cloudfiles.md b/docs/content/development/extensions-contrib/cloudfiles.md
index ad7acee..363507d 100644
--- a/docs/content/development/extensions-contrib/cloudfiles.md
+++ b/docs/content/development/extensions-contrib/cloudfiles.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Rackspace Cloud Files"
 ---
-
 # Rackspace Cloud Files
 
 ## Deep Storage
diff --git a/docs/content/development/extensions-contrib/distinctcount.md b/docs/content/development/extensions-contrib/distinctcount.md
index 0bc4d3f..77e6c39 100644
--- a/docs/content/development/extensions-contrib/distinctcount.md
+++ b/docs/content/development/extensions-contrib/distinctcount.md
@@ -19,9 +19,9 @@
 
 ---
 layout: doc_page
+title: "DistinctCount Aggregator"
 ---
-
-# DistinctCount aggregator
+# DistinctCount Aggregator
 
 To use this extension, make sure to [include](../../operations/including-extensions.html) the `druid-distinctcount` extension.
 
diff --git a/docs/content/development/extensions-contrib/google.md b/docs/content/development/extensions-contrib/google.md
index 4a1c26c..4d587ec 100644
--- a/docs/content/development/extensions-contrib/google.md
+++ b/docs/content/development/extensions-contrib/google.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Google Cloud Storage"
 ---
-
 # Google Cloud Storage
 
 To use this extension, make sure to [include](../../operations/including-extensions.html) `druid-google-extensions` extension.
diff --git a/docs/content/development/extensions-contrib/graphite.md b/docs/content/development/extensions-contrib/graphite.md
index a70910d..a50706a 100644
--- a/docs/content/development/extensions-contrib/graphite.md
+++ b/docs/content/development/extensions-contrib/graphite.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Graphite Emitter"
 ---
-
 # Graphite Emitter
 
 To use this extension, make sure to [include](../../operations/including-extensions.html) `graphite-emitter` extension.
diff --git a/docs/content/development/extensions-contrib/influx.md b/docs/content/development/extensions-contrib/influx.md
index b2f61f4..3446b48 100644
--- a/docs/content/development/extensions-contrib/influx.md
+++ b/docs/content/development/extensions-contrib/influx.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "InfluxDB Line Protocol Parser"
 ---
-
 # InfluxDB Line Protocol Parser
 
 To use this extension, make sure to [include](../../operations/including-extensions.html) `druid-influx-extensions`.
diff --git a/docs/content/development/extensions-contrib/kafka-emitter.md b/docs/content/development/extensions-contrib/kafka-emitter.md
index a2df861..2ad9429 100644
--- a/docs/content/development/extensions-contrib/kafka-emitter.md
+++ b/docs/content/development/extensions-contrib/kafka-emitter.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Kafka Emitter"
 ---
-
 # Kafka Emitter
 
 To use this extension, make sure to [include](../../operations/including-extensions.html) `kafka-emitter` extension.
diff --git a/docs/content/development/extensions-contrib/kafka-simple.md b/docs/content/development/extensions-contrib/kafka-simple.md
index bb811ee..1aeeea8 100644
--- a/docs/content/development/extensions-contrib/kafka-simple.md
+++ b/docs/content/development/extensions-contrib/kafka-simple.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Kafka Simple Consumer"
 ---
-
 # Kafka Simple Consumer
 
 To use this extension, make sure to [include](../../operations/including-extensions.html) `druid-kafka-eight-simpleConsumer` extension.
diff --git a/docs/content/development/extensions-contrib/materialized-view.md b/docs/content/development/extensions-contrib/materialized-view.md
index 8c92480..67fa154 100644
--- a/docs/content/development/extensions-contrib/materialized-view.md
+++ b/docs/content/development/extensions-contrib/materialized-view.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Materialized View"
 ---
-
 # Materialized View
 
 To use this feature, make sure to only load materialized-view-selection on the broker and load materialized-view-maintenance on the overlord. In addition, this feature currently requires a Hadoop cluster.
diff --git a/docs/content/development/extensions-contrib/opentsdb-emitter.md b/docs/content/development/extensions-contrib/opentsdb-emitter.md
index 27e9069..17a3f63 100644
--- a/docs/content/development/extensions-contrib/opentsdb-emitter.md
+++ b/docs/content/development/extensions-contrib/opentsdb-emitter.md
@@ -19,9 +19,9 @@
 
 ---
 layout: doc_page
+title: "OpenTSDB Emitter"
 ---
-
-# Opentsdb Emitter
+# OpenTSDB Emitter
 
 To use this extension, make sure to [include](../../operations/including-extensions.html) `opentsdb-emitter` extension.
 
@@ -57,5 +57,5 @@
     "type"
 ]
 ```
- 
+
 For most use-cases, the default configuration is sufficient.
diff --git a/docs/content/development/extensions-contrib/orc.md b/docs/content/development/extensions-contrib/orc.md
index 8d65d0b..3674ec7 100644
--- a/docs/content/development/extensions-contrib/orc.md
+++ b/docs/content/development/extensions-contrib/orc.md
@@ -19,15 +19,15 @@
 
 ---
 layout: doc_page
+title: "ORC"
 ---
-
-# Orc
+# ORC
 
 To use this extension, make sure to [include](../../operations/including-extensions.html) `druid-orc-extensions`.
 
-This extension enables Druid to ingest and understand the Apache Orc data format offline.
+This extension enables Druid to ingest and understand the Apache ORC data format offline.
 
-## Orc Hadoop Parser
+## ORC Hadoop Parser
 
 This is for batch ingestion using the HadoopDruidIndexer. The inputFormat of inputSpec in ioConfig must be set to `"org.apache.hadoop.hive.ql.io.orc.OrcNewInputFormat"`.
 
@@ -35,7 +35,7 @@
 |----------|-------------|----------------------------------------------------------------------------------------|---------|
 |type      | String      | This should say `orc`                                                                  | yes|
 |parseSpec | JSON Object | Specifies the timestamp and dimensions of the data. Any parse spec that extends ParseSpec is possible but only their TimestampSpec and DimensionsSpec are used. | yes|
-|typeString| String      | String representation of Orc struct type info. If not specified, auto constructed from parseSpec but all metric columns are dropped | no|
+|typeString| String      | String representation of ORC struct type info. If not specified, auto constructed from parseSpec but all metric columns are dropped | no|
 |mapFieldNameFormat| String | String format for resolving the flatten map fields. Default is `<PARENT>_<CHILD>`. | no |
 
 For example of `typeString`, a string column col1 and an array-of-string column col2 are represented by `"struct<col1:string,col2:array<string>>"`.
diff --git a/docs/content/development/extensions-contrib/rabbitmq.md b/docs/content/development/extensions-contrib/rabbitmq.md
index 0554963..a1bce50 100644
--- a/docs/content/development/extensions-contrib/rabbitmq.md
+++ b/docs/content/development/extensions-contrib/rabbitmq.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "RabbitMQ"
 ---
-
 # RabbitMQ
 
 To use this extension, make sure to [include](../../operations/including-extensions.html) `druid-rabbitmq` extension.
diff --git a/docs/content/development/extensions-contrib/redis-cache.md b/docs/content/development/extensions-contrib/redis-cache.md
index c3c6c6c..a446b4f 100644
--- a/docs/content/development/extensions-contrib/redis-cache.md
+++ b/docs/content/development/extensions-contrib/redis-cache.md
@@ -19,10 +19,9 @@
 
 ---
 layout: doc_page
+title: "Druid Redis Cache"
 ---
-
-Druid Redis Cache
---------------------
+# Druid Redis Cache
 
 A cache implementation for Druid based on [Redis](https://github.com/antirez/redis).
 
diff --git a/docs/content/development/extensions-contrib/rocketmq.md b/docs/content/development/extensions-contrib/rocketmq.md
index 3ec025b..c9c2e00 100644
--- a/docs/content/development/extensions-contrib/rocketmq.md
+++ b/docs/content/development/extensions-contrib/rocketmq.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "RocketMQ"
 ---
-
 # RocketMQ
 
 To use this extension, make sure to [include](../../operations/including-extensions.html) `druid-rocketmq` extension.
diff --git a/docs/content/development/extensions-contrib/sqlserver.md b/docs/content/development/extensions-contrib/sqlserver.md
index 78873d0..99a9fac 100644
--- a/docs/content/development/extensions-contrib/sqlserver.md
+++ b/docs/content/development/extensions-contrib/sqlserver.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Microsoft SQLServer"
 ---
-
 # Microsoft SQLServer
 
 Make sure to [include](../../operations/including-extensions.html) `sqlserver-metadata-storage` as an extension.
diff --git a/docs/content/development/extensions-contrib/statsd.md b/docs/content/development/extensions-contrib/statsd.md
index aa89af9..e68fd7a 100644
--- a/docs/content/development/extensions-contrib/statsd.md
+++ b/docs/content/development/extensions-contrib/statsd.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "StatsD Emitter"
 ---
-
 # StatsD Emitter
 
 To use this extension, make sure to [include](../../operations/including-extensions.html) `statsd-emitter` extension.
diff --git a/docs/content/development/extensions-contrib/thrift.md b/docs/content/development/extensions-contrib/thrift.md
index 3a1d197..284879b 100644
--- a/docs/content/development/extensions-contrib/thrift.md
+++ b/docs/content/development/extensions-contrib/thrift.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Thrift"
 ---
-
 # Thrift
 
 To use this extension, make sure to [include](../../operations/including-extensions.html) `druid-thrift-extensions`.
diff --git a/docs/content/development/extensions-contrib/time-min-max.md b/docs/content/development/extensions-contrib/time-min-max.md
index b4eff51..6782042 100644
--- a/docs/content/development/extensions-contrib/time-min-max.md
+++ b/docs/content/development/extensions-contrib/time-min-max.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Timestamp Min/Max aggregators"
 ---
-
 # Timestamp Min/Max aggregators
 
 To use this extension, make sure to [include](../../operations/including-extensions.html) `druid-time-min-max`.
diff --git a/docs/content/development/extensions-core/approximate-histograms.md b/docs/content/development/extensions-core/approximate-histograms.md
index 895ca19..ae96d15 100644
--- a/docs/content/development/extensions-core/approximate-histograms.md
+++ b/docs/content/development/extensions-core/approximate-histograms.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Approximate Histogram aggregator"
 ---
-
 # Approximate Histogram aggregator
 
 Make sure to [include](../../operations/including-extensions.html) `druid-histogram` as an extension.
diff --git a/docs/content/development/extensions-core/avro.md b/docs/content/development/extensions-core/avro.md
index 9e50cbe..c8ba667 100644
--- a/docs/content/development/extensions-core/avro.md
+++ b/docs/content/development/extensions-core/avro.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Avro"
 ---
-
 # Avro
 
 This extension enables Druid to ingest and understand the Apache Avro data format. Make sure to [include](../../operations/including-extensions.html) `druid-avro-extensions` as an extension.
diff --git a/docs/content/development/extensions-core/bloom-filter.md b/docs/content/development/extensions-core/bloom-filter.md
index 69a5904..1c861e4 100644
--- a/docs/content/development/extensions-core/bloom-filter.md
+++ b/docs/content/development/extensions-core/bloom-filter.md
@@ -19,25 +19,26 @@
 
 ---
 layout: doc_page
+title: "Bloom Filter"
 ---
-
-# Druid Bloom Filter
+# Bloom Filter
 
 Make sure to [include](../../operations/including-extensions.html) `druid-bloom-filter` as an extension.
 
-BloomFilter is a probabilistic data structure for set membership check. 
-Following are some characterstics of BloomFilter 
+BloomFilter is a probabilistic data structure for set membership checks.
+Following are some characteristics of BloomFilter:
 - BloomFilters are highly space efficient when compared to using a HashSet.
 - Because of the probabilistic nature of the bloom filter, false positives (element not present in the bloom filter but test() says true) are possible
-- false negatives are not possible (if element is present then test() will never say false). 
-- The false positive probability is configurable (default: 5%) depending on which storage requirement may increase or decrease. 
+- False negatives are not possible (if an element is present, then test() will never say false).
+- The false positive probability is configurable (default: 5%); depending on it, the storage requirement may increase or decrease.
 - The lower the false positive probability, the greater the space requirement.
 - Bloom filters are sensitive to the number of elements that will be inserted into the bloom filter.
 - During the creation of the bloom filter, the expected number of entries must be specified. If the number of insertions exceeds the specified initial number of entries, then the false positive probability will increase accordingly.
 
 Internally, this implementation of bloom filter uses Murmur3 fast non-cryptographic hash algorithm.
 
-### Json Representation of Bloom Filter
+### JSON Representation of Bloom Filter
+
 ```json
 {
   "type" : "bloom",
@@ -60,5 +61,5 @@
  - 1 byte for the number of hash functions.
 - 1 big endian int (that is how OutputStream works) for the number of longs in the bitset
  - big endian longs in the BloomKFilter bitset
-     
+
 Note: `org.apache.hive.common.util.BloomKFilter` provides a serialize method which can be used to serialize bloom filters to outputStream.
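
As a usage sketch (not part of this change), a filter of this type typically appears inside a query's filter clause roughly as follows; the dimension name and the serialized value are placeholders, and the field names follow the JSON representation documented on this page:

```json
"filter": {
  "type": "bloom",
  "dimension": "user_id",
  "bloomKFilter": "<base64-serialized BloomKFilter>"
}
```
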
diff --git a/docs/content/development/extensions-core/datasketches-extension.md b/docs/content/development/extensions-core/datasketches-extension.md
index 781cf1a..aec599c 100644
--- a/docs/content/development/extensions-core/datasketches-extension.md
+++ b/docs/content/development/extensions-core/datasketches-extension.md
@@ -19,9 +19,9 @@
 
 ---
 layout: doc_page
+title: "DataSketches extension"
 ---
-
-## DataSketches extension
+# DataSketches extension
 
 Druid aggregators based on [datasketches](http://datasketches.github.io/) library. Sketches are data structures implementing approximate streaming mergeable algorithms. Sketches can be ingested from the outside of Druid or built from raw data at ingestion time. Sketches can be stored in Druid segments as additive metrics.
 
diff --git a/docs/content/development/extensions-core/datasketches-hll.md b/docs/content/development/extensions-core/datasketches-hll.md
index 0af0a31..783af1f 100644
--- a/docs/content/development/extensions-core/datasketches-hll.md
+++ b/docs/content/development/extensions-core/datasketches-hll.md
@@ -19,9 +19,9 @@
 
 ---
 layout: doc_page
+title: "DataSketches HLL Sketch module"
 ---
-
-## DataSketches HLL Sketch module
+# DataSketches HLL Sketch module
 
 This module provides Druid aggregators for distinct counting based on HLL sketch from [datasketches](http://datasketches.github.io/) library. At ingestion time, this aggregator creates the HLL sketch objects to be stored in Druid segments. At query time, sketches are read and merged together. In the end, by default, you receive the estimate of the number of distinct values presented to the sketch. Also, you can use post aggregator to produce a union of sketch columns in the same row. 
 You can use the HLL sketch aggregator on columns of any identifiers. It will return estimated cardinality of the column.
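
As a usage sketch (not part of this change), an HLL sketch aggregator can be declared at ingestion time roughly as follows; the metric and column names are placeholders:

```json
{
  "type": "HLLSketchBuild",
  "name": "unique_users",
  "fieldName": "user_id"
}
```
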
diff --git a/docs/content/development/extensions-core/datasketches-quantiles.md b/docs/content/development/extensions-core/datasketches-quantiles.md
index 83bd927..4b5fe83 100644
--- a/docs/content/development/extensions-core/datasketches-quantiles.md
+++ b/docs/content/development/extensions-core/datasketches-quantiles.md
@@ -19,9 +19,9 @@
 
 ---
 layout: doc_page
+title: "DataSketches Quantiles Sketch module"
 ---
-
-## DataSketches Quantiles Sketch module
+# DataSketches Quantiles Sketch module
 
 This module provides Druid aggregators based on the numeric quantiles DoublesSketch from the [datasketches](http://datasketches.github.io/) library. Quantiles sketch is a mergeable streaming algorithm to estimate the distribution of values, and approximately answer queries about the rank of a value, probability mass function of the distribution (PMF) or histogram, cumulative distribution function (CDF), and quantiles (median, min, max, 95th percentile and such). See [Quantiles Sketch Overview](https://datasketches.github.io/docs/Quantiles/QuantilesOverview.html).
 
diff --git a/docs/content/development/extensions-core/datasketches-theta.md b/docs/content/development/extensions-core/datasketches-theta.md
index 46893dc..8eca141 100644
--- a/docs/content/development/extensions-core/datasketches-theta.md
+++ b/docs/content/development/extensions-core/datasketches-theta.md
@@ -19,9 +19,9 @@
 
 ---
 layout: doc_page
+title: "DataSketches Theta Sketch module"
 ---
-
-## DataSketches Theta Sketch module
+# DataSketches Theta Sketch module
 
 This module provides Druid aggregators based on Theta sketch from [datasketches](http://datasketches.github.io/) library. Note that sketch algorithms are approximate; see details in the "Accuracy" section of the datasketches doc. 
 At ingestion time, this aggregator creates the Theta sketch objects which get stored in Druid segments. Logically speaking, a Theta sketch object can be thought of as a Set data structure. At query time, sketches are read and aggregated (set unioned) together. In the end, by default, you receive the estimate of the number of unique entries in the sketch object. Also, you can use post aggregators to do union, intersection or difference on sketch columns in the same row. 
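
As a usage sketch (not part of this change), a Theta sketch aggregator is declared at ingestion time roughly as follows; the metric and column names here are placeholders:

```json
{
  "type": "thetaSketch",
  "name": "distinct_user_ids",
  "fieldName": "user_id"
}
```
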
diff --git a/docs/content/development/extensions-core/datasketches-tuple.md b/docs/content/development/extensions-core/datasketches-tuple.md
index f92567e..4cfa5a9 100644
--- a/docs/content/development/extensions-core/datasketches-tuple.md
+++ b/docs/content/development/extensions-core/datasketches-tuple.md
@@ -19,9 +19,9 @@
 
 ---
 layout: doc_page
+title: "DataSketches Tuple Sketch module"
 ---
-
-## DataSketches Tuple Sketch module
+# DataSketches Tuple Sketch module
 
 This module provides Druid aggregators based on Tuple sketch from [datasketches](http://datasketches.github.io/) library. ArrayOfDoublesSketch sketches extend the functionality of the count-distinct Theta sketches by adding arrays of double values associated with unique keys.
 
diff --git a/docs/content/development/extensions-core/druid-basic-security.md b/docs/content/development/extensions-core/druid-basic-security.md
index 59d74c1..6b80862 100644
--- a/docs/content/development/extensions-core/druid-basic-security.md
+++ b/docs/content/development/extensions-core/druid-basic-security.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Basic Security"
 ---
-
 # Druid Basic Security
 
 This extension adds:
@@ -58,7 +58,7 @@
 druid.auth.authenticator.MyBasicAuthenticator.authorizerName=MyBasicAuthorizer
 ```
 
-To use the Basic authenticator, add an authenticator with type `basic` to the authenticatorChain. 
+To use the Basic authenticator, add an authenticator with type `basic` to the authenticatorChain.
 
 Configuration of the named authenticator is assigned through properties with the form:
 
@@ -208,14 +208,14 @@
 Content: List of JSON Resource-Action objects, e.g.:
 ```
 [
-{ 
+{
   "resource": {
     "name": "wiki.*",
     "type": "DATASOURCE"
   },
   "action": "READ"
 },
-{ 
+{
   "resource": {
     "name": "wikiticker",
     "type": "DATASOURCE"
@@ -225,7 +225,7 @@
 ]
 ```
 
-The "name" field for resources in the permission definitions are regexes used to match resource names during authorization checks. 
+The "name" field for resources in the permission definitions are regexes used to match resource names during authorization checks.
 
 Please see [Defining permissions](#defining-permissions) for more details.
 
@@ -238,7 +238,7 @@
 ### Authenticator
 If `druid.auth.authenticator.<authenticator-name>.initialAdminPassword` is set, a default admin user named "admin" will be created, with the specified initial password. If this configuration is omitted, the "admin" user will not be created.
 
-If `druid.auth.authenticator.<authenticator-name>.initialInternalClientPassword` is set, a default internal system user named "druid_system" will be created, with the specified initial password. If this configuration is omitted, the "druid_system" user will not be created. 
+If `druid.auth.authenticator.<authenticator-name>.initialInternalClientPassword` is set, a default internal system user named "druid_system" will be created, with the specified initial password. If this configuration is omitted, the "druid_system" user will not be created.
 
 
 ### Authorizer
diff --git a/docs/content/development/extensions-core/druid-kerberos.md b/docs/content/development/extensions-core/druid-kerberos.md
index 71ea60a..c74ab06 100644
--- a/docs/content/development/extensions-core/druid-kerberos.md
+++ b/docs/content/development/extensions-core/druid-kerberos.md
@@ -19,12 +19,12 @@
 
 ---
 layout: doc_page
+title: "Kerberos"
 ---
-
-# Druid-Kerberos
+# Kerberos
 
 Druid Extension to enable Authentication for Druid Nodes using Kerberos.
-This extension adds an Authenticator which is used to protect HTTP Endpoints using the simple and protected GSSAPI negotiation mechanism [SPNEGO](https://en.wikipedia.org/wiki/SPNEGO). 
+This extension adds an Authenticator which is used to protect HTTP Endpoints using the simple and protected GSSAPI negotiation mechanism [SPNEGO](https://en.wikipedia.org/wiki/SPNEGO).
 Make sure to [include](../../operations/including-extensions.html) `druid-kerberos` as an extension.
 
 
@@ -57,23 +57,23 @@
 |`druid.auth.authenticator.kerberos.cookieSignatureSecret`|`secretString`| Secret used to sign authentication cookies. It is advisable to explicitly set it if you have multiple druid nodes running on the same machine with different ports, as the Cookie Specification does not guarantee isolation by port.|<Random value>|No|
 |`druid.auth.authenticator.kerberos.authorizerName`|Depends on available authorizers|Authorizer that requests should be directed to|Empty|Yes|
 
-As a note, it is required that the SPNego principal in use by the druid nodes must start with HTTP (This specified by [RFC-4559](https://tools.ietf.org/html/rfc4559)) and must be of the form "HTTP/_HOST@REALM". 
+As a note, the SPNego principal in use by the druid nodes must start with HTTP (this is specified by [RFC-4559](https://tools.ietf.org/html/rfc4559)) and must be of the form "HTTP/_HOST@REALM".
 The special string _HOST will be replaced automatically with the value of the config `druid.host`.
 
 ### Auth to Local Syntax
 `druid.auth.authenticator.kerberos.authToLocal` allows you to set a general rules for mapping principal names to local user names.
 The syntax for mapping rules is `RULE:\[n:string](regexp)s/pattern/replacement/g`. The integer n indicates how many components the target principal should have. If this matches, then a string will be formed from string, substituting the realm of the principal for $0 and the n‘th component of the principal for $n. e.g. if the principal was druid/admin then `\[2:$2$1suffix]` would result in the string `admindruidsuffix`.
 If this string matches regexp, then the s//\[g] substitution command will be run over the string. The optional g will cause the substitution to be global over the string, instead of replacing only the first match in the string.
-If required, multiple rules can be be joined by newline character and specified as a String. 
+If required, multiple rules can be joined by a newline character and specified as a String.
 
 ### Increasing HTTP Header size for large SPNEGO negotiate header
 In an Active Directory environment, the SPNEGO token in the Authorization header includes PAC (Privilege Attribute Certificate) information,
 which includes all security groups for the user. In some cases, when the user belongs to many security groups, the header can grow beyond what druid can handle by default.
 In such cases, the max request header size that druid can handle can be increased by setting `druid.server.http.maxRequestHeaderSize` (default 8Kb) and `druid.router.http.maxRequestBufferSize` (default 8Kb).
 
-## Configuring Kerberos Escalated Client 
+## Configuring Kerberos Escalated Client
 
-Druid internal nodes communicate with each other using an escalated http Client. A Kerberos enabled escalated HTTP Client can be configured by following properties -  
+Druid internal nodes communicate with each other using an escalated HTTP client. A Kerberos-enabled escalated HTTP client can be configured with the following properties -
 
 
 |Property|Example Values|Description|Default|required|
@@ -83,15 +83,15 @@
 |`druid.escalator.internalClientKeytab`|`/etc/security/keytabs/druid.keytab`|Path to keytab file used for internal node communication|n/a|Yes|
 |`druid.escalator.authorizerName`|`MyBasicAuthorizer`|Authorizer that requests should be directed to.|n/a|Yes|
 
-## Accessing Druid HTTP end points when kerberos security is enabled 
-1. To access druid HTTP endpoints via curl user will need to first login using `kinit` command as follows -  
+## Accessing Druid HTTP end points when kerberos security is enabled
+1. To access druid HTTP endpoints via curl, the user will first need to log in using the `kinit` command as follows -
 
     ```
     kinit -k -t <path_to_keytab_file> user@REALM.COM
     ```
 
 2. Once the login is successful, verify it using the `klist` command
-3. Now you can access druid HTTP endpoints using curl command as follows - 
+3. Now you can access druid HTTP endpoints using the curl command as follows -
 
     ```
     curl --negotiate -u:anyUser -b ~/cookies.txt -c ~/cookies.txt -X POST -H'Content-Type: application/json' <HTTP_END_POINT>
@@ -105,13 +105,13 @@
     Note: The above command will authenticate the user the first time using the SPNego negotiate mechanism and store the authentication cookie in a file. For subsequent requests, the cookie will be used for authentication.
 
 ## Accessing coordinator or overlord console from web browser
-To access Coordinator/Overlord console from browser you will need to configure your browser for SPNego authentication as follows - 
+To access the Coordinator/Overlord console from a browser, you will need to configure your browser for SPNego authentication as follows -
 
 1. Safari - No configurations required.
-2. Firefox - Open firefox and follow these steps - 
+2. Firefox - Open firefox and follow these steps -
     1. Go to `about:config` and search for `network.negotiate-auth.trusted-uris`.
     2. Double-click and add the following values: `"http://druid-coordinator-hostname:ui-port"` and `"http://druid-overlord-hostname:port"`
+3. Google Chrome - From the command line, run the following commands -
+3. Google Chrome - From the command line run following commands -
     1. `google-chrome --auth-server-whitelist="druid-coordinator-hostname" --auth-negotiate-delegate-whitelist="druid-coordinator-hostname"`
     2. `google-chrome --auth-server-whitelist="druid-overlord-hostname" --auth-negotiate-delegate-whitelist="druid-overlord-hostname"`
 4. Internet Explorer -
@@ -119,4 +119,4 @@
     2. Allow negotiation for the UI website.
 
 ## Sending Queries programmatically
-Many HTTP client libraries, such as Apache Commons [HttpComponents](https://hc.apache.org/), already have support for performing SPNEGO authentication. You can use any of the available HTTP client library to communicate with druid cluster. 
+Many HTTP client libraries, such as Apache Commons [HttpComponents](https://hc.apache.org/), already have support for performing SPNEGO authentication. You can use any of the available HTTP client libraries to communicate with the druid cluster.
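
For reference (not part of this change), the escalated-client properties in the table above are typically set together in runtime.properties. This is a sketch with placeholder values; it assumes the escalator type provided by this extension is `kerberos`, and the `internalClientPrincipal` line is likewise an assumption based on the full property table:

```
druid.escalator.type=kerberos
druid.escalator.internalClientPrincipal=druid@EXAMPLE.COM
druid.escalator.internalClientKeytab=/etc/security/keytabs/druid.keytab
druid.escalator.authorizerName=MyBasicAuthorizer
```
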
diff --git a/docs/content/development/extensions-core/druid-lookups.md b/docs/content/development/extensions-core/druid-lookups.md
index 101acf5..473109d 100644
--- a/docs/content/development/extensions-core/druid-lookups.md
+++ b/docs/content/development/extensions-core/druid-lookups.md
@@ -19,6 +19,7 @@
 
 ---
 layout: doc_page
+title: "Cached Lookup Module"
 ---
 # Cached Lookup Module
 
diff --git a/docs/content/development/extensions-core/examples.md b/docs/content/development/extensions-core/examples.md
index 7199d52..02b22e6 100644
--- a/docs/content/development/extensions-core/examples.md
+++ b/docs/content/development/extensions-core/examples.md
@@ -19,9 +19,9 @@
 
 ---
 layout: doc_page
+title: "Extension Examples"
 ---
-
-# Druid examples
+# Extension Examples
 
 ## TwitterSpritzerFirehose
 
diff --git a/docs/content/development/extensions-core/hdfs.md b/docs/content/development/extensions-core/hdfs.md
index da6127f..e2fe62c 100644
--- a/docs/content/development/extensions-core/hdfs.md
+++ b/docs/content/development/extensions-core/hdfs.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "HDFS"
 ---
-
 # HDFS
 
 Make sure to [include](../../operations/including-extensions.html) `druid-hdfs-storage` as an extension.
diff --git a/docs/content/development/extensions-core/kafka-eight-firehose.md b/docs/content/development/extensions-core/kafka-eight-firehose.md
index c32e725..2ab4122 100644
--- a/docs/content/development/extensions-core/kafka-eight-firehose.md
+++ b/docs/content/development/extensions-core/kafka-eight-firehose.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Kafka Eight Firehose"
 ---
-
 # Kafka Eight Firehose
 
 Make sure to [include](../../operations/including-extensions.html) `druid-kafka-eight` as an extension.
diff --git a/docs/content/development/extensions-core/kafka-extraction-namespace.md b/docs/content/development/extensions-core/kafka-extraction-namespace.md
index 93437ed..6d9ea16 100644
--- a/docs/content/development/extensions-core/kafka-extraction-namespace.md
+++ b/docs/content/development/extensions-core/kafka-extraction-namespace.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Kafka Lookups"
 ---
-
 # Kafka Lookups
 
 <div class="note caution">
diff --git a/docs/content/development/extensions-core/kafka-ingestion.md b/docs/content/development/extensions-core/kafka-ingestion.md
index de3fa39..f8b80c0 100644
--- a/docs/content/development/extensions-core/kafka-ingestion.md
+++ b/docs/content/development/extensions-core/kafka-ingestion.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Kafka Indexing Service"
 ---
-
 # Kafka Indexing Service
 
 The Kafka indexing service enables the configuration of *supervisors* on the Overlord, which facilitate ingestion from
diff --git a/docs/content/development/extensions-core/lookups-cached-global.md b/docs/content/development/extensions-core/lookups-cached-global.md
index ef7b2ad..c5a89cb 100644
--- a/docs/content/development/extensions-core/lookups-cached-global.md
+++ b/docs/content/development/extensions-core/lookups-cached-global.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Globally Cached Lookups"
 ---
-
 # Globally Cached Lookups
 
 <div class="note caution">
diff --git a/docs/content/development/extensions-core/mysql.md b/docs/content/development/extensions-core/mysql.md
index a91b4e1..67bc5cf 100644
--- a/docs/content/development/extensions-core/mysql.md
+++ b/docs/content/development/extensions-core/mysql.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "MySQL Metadata Store"
 ---
-
 # MySQL Metadata Store
 
 Make sure to [include](../../operations/including-extensions.html) `mysql-metadata-storage` as an extension.
diff --git a/docs/content/development/extensions-core/postgresql.md b/docs/content/development/extensions-core/postgresql.md
index 0a12126..cc54cdf 100644
--- a/docs/content/development/extensions-core/postgresql.md
+++ b/docs/content/development/extensions-core/postgresql.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "PostgreSQL Metadata Store"
 ---
-
 # PostgreSQL Metadata Store
 
 Make sure to [include](../../operations/including-extensions.html) `postgresql-metadata-storage` as an extension.
diff --git a/docs/content/development/extensions-core/protobuf.md b/docs/content/development/extensions-core/protobuf.md
index 000e72a..b8f3125 100644
--- a/docs/content/development/extensions-core/protobuf.md
+++ b/docs/content/development/extensions-core/protobuf.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Protobuf"
 ---
-
 # Protobuf
 
 This extension enables Druid to ingest and understand the Protobuf data format. Make sure to [include](../../operations/including-extensions.html) `druid-protobuf-extensions` as an extension.
diff --git a/docs/content/development/extensions-core/s3.md b/docs/content/development/extensions-core/s3.md
index dcd81cd..df8d745 100644
--- a/docs/content/development/extensions-core/s3.md
+++ b/docs/content/development/extensions-core/s3.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "S3-compatible"
 ---
-
 # S3-compatible
 
 Make sure to [include](../../operations/including-extensions.html) `druid-s3-extensions` as an extension.
diff --git a/docs/content/development/extensions-core/simple-client-sslcontext.md b/docs/content/development/extensions-core/simple-client-sslcontext.md
index 5992bdb..19976bf 100644
--- a/docs/content/development/extensions-core/simple-client-sslcontext.md
+++ b/docs/content/development/extensions-core/simple-client-sslcontext.md
@@ -19,9 +19,9 @@
 
 ---
 layout: doc_page
+title: "Simple SSLContext Provider Module"
 ---
-
-## Simple SSLContext Provider Module
+# Simple SSLContext Provider Module
 
 This module contains a simple implementation of [SSLContext](http://docs.oracle.com/javase/8/docs/api/javax/net/ssl/SSLContext.html)
 that will be injected to be used with HttpClient that Druid nodes use internally to communicate with each other. To learn more about
diff --git a/docs/content/development/extensions-core/stats.md b/docs/content/development/extensions-core/stats.md
index 0e66d31..31117c7 100644
--- a/docs/content/development/extensions-core/stats.md
+++ b/docs/content/development/extensions-core/stats.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Stats aggregator"
 ---
-
 # Stats aggregator
 
 Includes stat-related aggregators, including variance and standard deviations, etc. Make sure to [include](../../operations/including-extensions.html) `druid-stats` as an extension.
diff --git a/docs/content/development/extensions-core/test-stats.md b/docs/content/development/extensions-core/test-stats.md
index 2e61641..9e175ae 100644
--- a/docs/content/development/extensions-core/test-stats.md
+++ b/docs/content/development/extensions-core/test-stats.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Test Stats Aggregators"
 ---
-
 # Test Stats Aggregators
 
 Incorporates test statistics related aggregators, including z-score and p-value. Please refer to [https://www.paypal-engineering.com/2017/06/29/democratizing-experimentation-data-for-product-innovations/](https://www.paypal-engineering.com/2017/06/29/democratizing-experimentation-data-for-product-innovations/) for math background and details.
diff --git a/docs/content/development/extensions.md b/docs/content/development/extensions.md
index e5f833e..64e3f07 100644
--- a/docs/content/development/extensions.md
+++ b/docs/content/development/extensions.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Druid extensions"
 ---
-
 # Druid extensions
 
 Druid implements an extension system that allows for adding functionality at runtime. Extensions
diff --git a/docs/content/development/geo.md b/docs/content/development/geo.md
index ac8db41..7f9befa 100644
--- a/docs/content/development/geo.md
+++ b/docs/content/development/geo.md
@@ -19,8 +19,10 @@
 
 ---
 layout: doc_page
+title: "Geographic Queries"
 ---
 # Geographic Queries
+
 Druid supports filtering on spatially indexed columns based on an origin and a bound.
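
As a rough sketch, assuming a spatial dimension named `coordinates` has been configured, such a filter might look like:

```json
{
  "type": "spatial",
  "dimension": "coordinates",
  "bound": {
    "type": "radius",
    "coords": [37.7749, -122.4194],
    "radius": 0.5
  }
}
```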
 
 # Spatial Indexing
diff --git a/docs/content/development/integrating-druid-with-other-technologies.md b/docs/content/development/integrating-druid-with-other-technologies.md
index 5862bfa..16c6bde 100644
--- a/docs/content/development/integrating-druid-with-other-technologies.md
+++ b/docs/content/development/integrating-druid-with-other-technologies.md
@@ -19,6 +19,7 @@
 
 ---
 layout: doc_page
+title: "Integrating Druid With Other Technologies"
 ---
 # Integrating Druid With Other Technologies
 
diff --git a/docs/content/development/javascript.md b/docs/content/development/javascript.md
index 53c93f4..a90a08c 100644
--- a/docs/content/development/javascript.md
+++ b/docs/content/development/javascript.md
@@ -19,6 +19,7 @@
 
 ---
 layout: doc_page
+title: "JavaScript Programming Guide"
 ---
 # JavaScript Programming Guide
 
diff --git a/docs/content/development/modules.md b/docs/content/development/modules.md
index 4800ff1..901b1b5 100644
--- a/docs/content/development/modules.md
+++ b/docs/content/development/modules.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Extending Druid With Custom Modules"
 ---
-
 # Extending Druid With Custom Modules
 
 Druid uses a module system that allows for the addition of extensions at runtime.
diff --git a/docs/content/development/overview.md b/docs/content/development/overview.md
index 361049d..d900b4e 100644
--- a/docs/content/development/overview.md
+++ b/docs/content/development/overview.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Developing on Druid"
 ---
-
 # Developing on Druid
 
 Druid's codebase consists of several major components. For developers interested in learning the code, this document provides 
diff --git a/docs/content/development/router.md b/docs/content/development/router.md
index 283226a..ff480e8 100644
--- a/docs/content/development/router.md
+++ b/docs/content/development/router.md
@@ -19,10 +19,9 @@
 
 ---
 layout: doc_page
+title: "Router Node"
 ---
-
-Router Node
-===========
+# Router Node
 
 You should only ever need the router node if you have a Druid cluster well into the terabyte range. The router node can be used to route queries to different broker nodes. By default, the broker routes queries based on how [Rules](../operations/rule-configuration.html) are set up. For example, if 1 month of recent data is loaded into a `hot` cluster, queries that fall within the recent month can be routed to a dedicated set of brokers. Queries outside this range are routed to another set of brokers. This setup provides query isolation such that queries for more important data are not impacted by queries for less important data.
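
For example, the `hot` tier mentioned above would typically be fed by a load rule along these lines; the period and replicant counts are illustrative only:

```json
{
  "type": "loadByPeriod",
  "period": "P1M",
  "tieredReplicants": {
    "hot": 1,
    "_default_tier": 1
  }
}
```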
 
diff --git a/docs/content/development/versioning.md b/docs/content/development/versioning.md
index dfd04a0..4b1577f 100644
--- a/docs/content/development/versioning.md
+++ b/docs/content/development/versioning.md
@@ -19,8 +19,10 @@
 
 ---
 layout: doc_page
+title: "Versioning Druid"
 ---
 # Versioning Druid
+
 This page discusses how we do versioning and provides information on our stable releases.
 
 Versioning Strategy
diff --git a/docs/content/ingestion/batch-ingestion.md b/docs/content/ingestion/batch-ingestion.md
index dfa9007..db394c6 100644
--- a/docs/content/ingestion/batch-ingestion.md
+++ b/docs/content/ingestion/batch-ingestion.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Batch Data Ingestion"
 ---
-
 # Batch Data Ingestion
 
 Druid can load data from static files through a variety of methods described here.
diff --git a/docs/content/ingestion/command-line-hadoop-indexer.md b/docs/content/ingestion/command-line-hadoop-indexer.md
index 162499a..3068783 100644
--- a/docs/content/ingestion/command-line-hadoop-indexer.md
+++ b/docs/content/ingestion/command-line-hadoop-indexer.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Command Line Hadoop Indexer"
 ---
-
 # Command Line Hadoop Indexer
 
 To run:
diff --git a/docs/content/ingestion/compaction.md b/docs/content/ingestion/compaction.md
index 2c46e09..956c347 100644
--- a/docs/content/ingestion/compaction.md
+++ b/docs/content/ingestion/compaction.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Compaction Task"
 ---
-
 # Compaction Task
 
 Compaction tasks merge all segments of the given interval. The syntax is:
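
A minimal sketch of such a task spec, with a placeholder datasource and interval, might look like:

```json
{
  "type": "compact",
  "dataSource": "wikipedia",
  "interval": "2017-01-01/2018-01-01"
}
```
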
diff --git a/docs/content/ingestion/data-formats.md b/docs/content/ingestion/data-formats.md
index bdb7fb1..bfd7962 100644
--- a/docs/content/ingestion/data-formats.md
+++ b/docs/content/ingestion/data-formats.md
@@ -19,9 +19,9 @@
 
 ---
 layout: doc_page
+title: "Data Formats for Ingestion"
 ---
-Data Formats for Ingestion
-==========================
+# Data Formats for Ingestion
 
 Druid can ingest denormalized data in JSON, CSV, a delimited form such as TSV, or any custom format. While most examples in the documentation use data in JSON format, it is not difficult to configure Druid to ingest any other delimited data.
 We welcome any contributions to new formats.
diff --git a/docs/content/ingestion/delete-data.md b/docs/content/ingestion/delete-data.md
index cd0c2a0..6f5e966 100644
--- a/docs/content/ingestion/delete-data.md
+++ b/docs/content/ingestion/delete-data.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Deleting Data"
 ---
-
 # Deleting Data
 
 Permanent deletion of a Druid segment has two steps:
diff --git a/docs/content/ingestion/faq.md b/docs/content/ingestion/faq.md
index a5bbe6d..9ed403e 100644
--- a/docs/content/ingestion/faq.md
+++ b/docs/content/ingestion/faq.md
@@ -19,9 +19,9 @@
 
 ---
 layout: doc_page
+title: "My Data isn't being loaded"
 ---
-
-## My Data isn't being loaded
+# My Data isn't being loaded
 
 ### Realtime Ingestion
 
diff --git a/docs/content/ingestion/firehose.md b/docs/content/ingestion/firehose.md
index c11a73f..8aab739 100644
--- a/docs/content/ingestion/firehose.md
+++ b/docs/content/ingestion/firehose.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Druid Firehoses"
 ---
-
 # Druid Firehoses
 
 Firehoses are used in [native batch ingestion tasks](../ingestion/native_tasks.html), stream push tasks automatically created by [Tranquility](../ingestion/stream-push.html), and the [stream-pull (deprecated)](../ingestion/stream-pull.html) ingestion model.
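
As an illustration, a native batch task might read local files with a firehose along these lines; the directory and file filter are placeholders:

```json
{
  "type": "local",
  "baseDir": "examples/indexing/",
  "filter": "*.json"
}
```
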
diff --git a/docs/content/ingestion/flatten-json.md b/docs/content/ingestion/flatten-json.md
index d9f31b3..bcaf6c8 100644
--- a/docs/content/ingestion/flatten-json.md
+++ b/docs/content/ingestion/flatten-json.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "JSON Flatten Spec"
 ---
-
 # JSON Flatten Spec
 
 | Field | Type | Description | Required |
diff --git a/docs/content/ingestion/hadoop.md b/docs/content/ingestion/hadoop.md
index e32fd75..8eda60c 100644
--- a/docs/content/ingestion/hadoop.md
+++ b/docs/content/ingestion/hadoop.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Hadoop-based Batch Ingestion"
 ---
-
 # Hadoop-based Batch Ingestion
 
 Hadoop-based batch ingestion in Druid is supported via a Hadoop-ingestion task. These tasks can be posted to a running
diff --git a/docs/content/ingestion/index.md b/docs/content/ingestion/index.md
index 528699a..c072176 100644
--- a/docs/content/ingestion/index.md
+++ b/docs/content/ingestion/index.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Ingestion"
 ---
-
 # Ingestion
 
 ## Overview
diff --git a/docs/content/ingestion/ingestion-spec.md b/docs/content/ingestion/ingestion-spec.md
index d463d57..82888ce 100644
--- a/docs/content/ingestion/ingestion-spec.md
+++ b/docs/content/ingestion/ingestion-spec.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Ingestion Spec"
 ---
-
 # Ingestion Spec
 
 A Druid ingestion spec consists of 3 components:
diff --git a/docs/content/ingestion/locking-and-priority.md b/docs/content/ingestion/locking-and-priority.md
index d343e97..d2a8579 100644
--- a/docs/content/ingestion/locking-and-priority.md
+++ b/docs/content/ingestion/locking-and-priority.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Task Locking & Priority"
 ---
-
 # Task Locking & Priority
 
 ## Locking
diff --git a/docs/content/ingestion/misc-tasks.md b/docs/content/ingestion/misc-tasks.md
index e309bbb..fe119dc 100644
--- a/docs/content/ingestion/misc-tasks.md
+++ b/docs/content/ingestion/misc-tasks.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Miscellaneous Tasks"
 ---
-
 # Miscellaneous Tasks
 
 ## Noop Task
diff --git a/docs/content/ingestion/native_tasks.md b/docs/content/ingestion/native_tasks.md
index 34279b9..2742994 100644
--- a/docs/content/ingestion/native_tasks.md
+++ b/docs/content/ingestion/native_tasks.md
@@ -19,6 +19,7 @@
 
 ---
 layout: doc_page
+title: "Native Index Tasks"
 ---
 # Native Index Tasks
 
diff --git a/docs/content/ingestion/reports.md b/docs/content/ingestion/reports.md
index 20b1836..2f30317 100644
--- a/docs/content/ingestion/reports.md
+++ b/docs/content/ingestion/reports.md
@@ -19,6 +19,7 @@
 
 ---
 layout: doc_page
+title: "Ingestion Reports"
 ---
 # Ingestion Reports
 
diff --git a/docs/content/ingestion/schema-changes.md b/docs/content/ingestion/schema-changes.md
index a8d72a0..5f091f1 100644
--- a/docs/content/ingestion/schema-changes.md
+++ b/docs/content/ingestion/schema-changes.md
@@ -19,6 +19,7 @@
 
 ---
 layout: doc_page
+title: "Schema Changes"
 ---
 # Schema Changes
 
diff --git a/docs/content/ingestion/schema-design.md b/docs/content/ingestion/schema-design.md
index f47aeb8..21be5bf 100644
--- a/docs/content/ingestion/schema-design.md
+++ b/docs/content/ingestion/schema-design.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Schema Design"
 ---
-
 # Schema Design
 
 This page is meant to assist users in designing a schema for data to be ingested in Druid. Druid intakes denormalized data 
diff --git a/docs/content/ingestion/stream-ingestion.md b/docs/content/ingestion/stream-ingestion.md
index 292e074..dd22218 100644
--- a/docs/content/ingestion/stream-ingestion.md
+++ b/docs/content/ingestion/stream-ingestion.md
@@ -19,22 +19,22 @@
 
 ---
 layout: doc_page
+title: "Loading Streams"
 ---
+# Loading Streams
 
-# Loading streams
-
-Streams can be ingested in Druid using either [Tranquility](https://github.com/druid-io/tranquility) (a Druid-aware 
+Streams can be ingested in Druid using either [Tranquility](https://github.com/druid-io/tranquility) (a Druid-aware
 client) or the [Kafka Indexing Service](../development/extensions-core/kafka-ingestion.html).
 
 ## Tranquility (Stream Push)
 
-If you have a program that generates a stream, then you can push that stream directly into Druid in 
-real-time. With this approach, Tranquility is embedded in your data-producing application. 
-Tranquility comes with bindings for the 
-Storm and Samza stream processors. It also has a direct API that can be used from any JVM-based 
+If you have a program that generates a stream, then you can push that stream directly into Druid in
+real-time. With this approach, Tranquility is embedded in your data-producing application.
+Tranquility comes with bindings for the
+Storm and Samza stream processors. It also has a direct API that can be used from any JVM-based
 program, such as Spark Streaming or a Kafka consumer.
 
-Tranquility handles partitioning, replication, service discovery, and schema rollover for you, 
+Tranquility handles partitioning, replication, service discovery, and schema rollover for you,
 seamlessly and without downtime. You only have to define your Druid schema.
 
 For examples and more information, please see the [Tranquility README](https://github.com/druid-io/tranquility).
diff --git a/docs/content/ingestion/stream-pull.md b/docs/content/ingestion/stream-pull.md
index db3a76a..0c53944 100644
--- a/docs/content/ingestion/stream-pull.md
+++ b/docs/content/ingestion/stream-pull.md
@@ -19,14 +19,14 @@
 
 ---
 layout: doc_page
+title: "Stream Pull Ingestion"
 ---
 
 <div class="note info">
-NOTE: Realtime nodes are deprecated. Please use the <a href="../development/extensions-core/kafka-ingestion.html">Kafka Indexing Service</a> for stream pull use cases instead. 
+NOTE: Realtime nodes are deprecated. Please use the <a href="../development/extensions-core/kafka-ingestion.html">Kafka Indexing Service</a> for stream pull use cases instead.
 </div>
 
-Stream Pull Ingestion
-=====================
+# Stream Pull Ingestion
 
 If you have an external service that you want to pull data from, you have two options. The simplest
 option is to set up a "copying" service that reads from the data source and writes to Druid using
@@ -34,7 +34,7 @@
 
 Another option is *stream pull*. With this approach, a Druid Realtime Node ingests data from a
 [Firehose](../ingestion/firehose.html) connected to the data you want to
-read. The Druid quickstart and tutorials do not include information about how to set up standalone realtime nodes, but 
+read. The Druid quickstart and tutorials do not include information about how to set up standalone realtime nodes, but
 they can be used in place of Tranquility server and the indexing service. Please note that Realtime nodes have different properties and roles than the indexing service.
 
 ## Realtime Node Ingestion
@@ -182,7 +182,7 @@
 |dedupColumn|String|The column used to determine whether a row is already present in this segment; if so, the row is thrown away. For String columns, the long hashcode of the column value is used instead of the value itself to reduce heap cost, so there is a very small chance that a row which has not been ingested before is thrown away.|no (default == null)|
 |indexSpec|Object|Tune how data is indexed. See below for more information.|no|
 
-Before enabling thread priority settings, users are highly encouraged to read the [original pull request](https://github.com/apache/incubator-druid/pull/984) and other documentation about proper use of `-XX:+UseThreadPriorities`. 
+Before enabling thread priority settings, users are highly encouraged to read the [original pull request](https://github.com/apache/incubator-druid/pull/984) and other documentation about proper use of `-XX:+UseThreadPriorities`.
 
 #### Rejection Policy
 
@@ -248,7 +248,7 @@
         "partitionNum": 0
     }
 ```
-            
+
 
 ##### Numbered
 
@@ -263,7 +263,7 @@
         "partitions": 2
     }
 ```
-     
+
 
 ##### Scale and Redundancy
 
@@ -277,7 +277,7 @@
         "partitionNum": 0
     }
 ```
-            
+
 and RealTimeNode2 has:
 
 ```json
@@ -323,48 +323,48 @@
 
 Standalone realtime nodes use the Kafka high level consumer, which imposes a few restrictions.
 
-Druid replicates segment such that logically equivalent data segments are concurrently hosted on N nodes. If N–1 nodes go down, 
-the data will still be available for querying. On real-time nodes, this process depends on maintaining logically equivalent 
-data segments on each of the N nodes, which is not possible with standard Kafka consumer groups if your Kafka topic requires more than one consumer 
+Druid replicates segments such that logically equivalent data segments are concurrently hosted on N nodes. If N–1 nodes go down,
+the data will still be available for querying. On real-time nodes, this process depends on maintaining logically equivalent
+data segments on each of the N nodes, which is not possible with standard Kafka consumer groups if your Kafka topic requires more than one consumer
 (because consumers in different consumer groups will split up the data differently).
 
-For example, let's say your topic is split across Kafka partitions 1, 2, & 3 and you have 2 real-time nodes with linear shard specs 1 & 2. 
-Both of the real-time nodes are in the same consumer group. Real-time node 1 may consume data from partitions 1 & 3, and real-time node 2 may consume data from partition 2. 
+For example, let's say your topic is split across Kafka partitions 1, 2, & 3 and you have 2 real-time nodes with linear shard specs 1 & 2.
+Both of the real-time nodes are in the same consumer group. Real-time node 1 may consume data from partitions 1 & 3, and real-time node 2 may consume data from partition 2.
 Querying for your data through the broker will yield correct results.
 
-The problem arises if you want to replicate your data by creating real-time nodes 3 & 4. These new real-time nodes also 
-have linear shard specs 1 & 2, and they will consume data from Kafka using a different consumer group. In this case, 
-real-time node 3 may consume data from partitions 1 & 2, and real-time node 4 may consume data from partition 2. 
-From Druid's perspective, the segments hosted by real-time nodes 1 and 3 are the same, and the data hosted by real-time nodes 
-2 and 4 are the same, although they are reading from different Kafka partitions. Querying for the data will yield inconsistent 
+The problem arises if you want to replicate your data by creating real-time nodes 3 & 4. These new real-time nodes also
+have linear shard specs 1 & 2, and they will consume data from Kafka using a different consumer group. In this case,
+real-time node 3 may consume data from partitions 1 & 2, and real-time node 4 may consume data from partition 2.
+From Druid's perspective, the segments hosted by real-time nodes 1 and 3 are the same, and the data hosted by real-time nodes
+2 and 4 are the same, although they are reading from different Kafka partitions. Querying for the data will yield inconsistent
 results.
 
-Is this always a problem? No. If your data is small enough to fit on a single Kafka partition, you can replicate without issues. 
+Is this always a problem? No. If your data is small enough to fit on a single Kafka partition, you can replicate without issues.
 Otherwise, you can run real-time nodes without replication.
 
 Please note that Druid will skip over any event that fails its checksum and is corrupt.
 
 ### Locking
 
-Using stream pull ingestion with Realtime nodes together batch ingestion may introduce data override issues. For example, if you 
-are generating hourly segments for the current day, and run a daily batch job for the current day's data, the segments created by 
-the batch job will have a more recent version than most of the segments generated by realtime ingestion. If your batch job is indexing 
-data that isn't yet complete for the day, the daily segment created by the batch job can override recent segments created by 
+Using stream pull ingestion with Realtime nodes together with batch ingestion may introduce data override issues. For example, if you
+are generating hourly segments for the current day, and run a daily batch job for the current day's data, the segments created by
+the batch job will have a more recent version than most of the segments generated by realtime ingestion. If your batch job is indexing
+data that isn't yet complete for the day, the daily segment created by the batch job can override recent segments created by
 realtime nodes. A portion of data will appear to be lost in this case.
 
 ### Schema changes
 
-Standalone realtime nodes require stopping a node to update a schema, and starting it up again for the schema to take effect. 
+Standalone realtime nodes require stopping a node to update a schema, and starting it up again for the schema to take effect.
 This can be difficult to manage at scale, especially with multiple partitions.
 
 ### Log management
 
-Each standalone realtime node has its own set of logs. Diagnosing errors across many partitions across many servers may be 
+Each standalone realtime node has its own set of logs. Diagnosing errors across many partitions across many servers may be
 difficult to manage and track at scale.
 
 ## Deployment Notes
 
 Stream ingestion may generate a large number of small segments because it's difficult to optimize the segment size at
-ingestion time. The number of segments will increase over time, and this might cause the query performance issue. 
+ingestion time. The number of segments will increase over time, and this might cause query performance issues.
 
 Details on how to optimize the segment size can be found on [Segment size optimization](../operations/segment-optimization.html).
diff --git a/docs/content/ingestion/stream-push.md b/docs/content/ingestion/stream-push.md
index 7a5a9de..c6e79ad 100644
--- a/docs/content/ingestion/stream-push.md
+++ b/docs/content/ingestion/stream-push.md
@@ -19,9 +19,9 @@
 
 ---
 layout: doc_page
+title: "Stream Push"
 ---
-
-## Stream Push
+# Stream Push
 
 Druid can connect to any streaming data source through
 [Tranquility](https://github.com/druid-io/tranquility/blob/master/README.md), a package for pushing
diff --git a/docs/content/ingestion/tasks.md b/docs/content/ingestion/tasks.md
index 27f446a..44ffc3b 100644
--- a/docs/content/ingestion/tasks.md
+++ b/docs/content/ingestion/tasks.md
@@ -19,6 +19,7 @@
 
 ---
 layout: doc_page
+title: "Tasks Overview"
 ---
 # Tasks Overview
 
diff --git a/docs/content/ingestion/transform-spec.md b/docs/content/ingestion/transform-spec.md
index 84c00d3..4e7c66e 100644
--- a/docs/content/ingestion/transform-spec.md
+++ b/docs/content/ingestion/transform-spec.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Transform Specs"
 ---
-
 # Transform Specs
 
 Transform specs allow Druid to filter and transform input data during ingestion. 
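
A small sketch of a transform spec, assuming the input has a `country` column, might look like:

```json
{
  "transforms": [
    {
      "type": "expression",
      "name": "countryUpper",
      "expression": "upper(country)"
    }
  ],
  "filter": {
    "type": "selector",
    "dimension": "country",
    "value": "US"
  }
}
```
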
diff --git a/docs/content/ingestion/update-existing-data.md b/docs/content/ingestion/update-existing-data.md
index 3fdf557..da8ab31 100644
--- a/docs/content/ingestion/update-existing-data.md
+++ b/docs/content/ingestion/update-existing-data.md
@@ -19,6 +19,7 @@
 
 ---
 layout: doc_page
+title: "Updating Existing Data"
 ---
 # Updating Existing Data
 
diff --git a/docs/content/misc/math-expr.md b/docs/content/misc/math-expr.md
index 321f8fb..7a3beeb 100644
--- a/docs/content/misc/math-expr.md
+++ b/docs/content/misc/math-expr.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Druid Expressions"
 ---
-
 # Druid Expressions
 
 <div class="note info">
diff --git a/docs/content/misc/papers-and-talks.md b/docs/content/misc/papers-and-talks.md
index a265ef1..f97c3d3 100644
--- a/docs/content/misc/papers-and-talks.md
+++ b/docs/content/misc/papers-and-talks.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Papers"
 ---
-
 # Papers
 
 * [Druid: A Real-time Analytical Data Store](http://static.druid.io/docs/druid.pdf) - Discusses the Druid architecture in detail.
diff --git a/docs/content/operations/alerts.md b/docs/content/operations/alerts.md
index 7faa296..239c330 100644
--- a/docs/content/operations/alerts.md
+++ b/docs/content/operations/alerts.md
@@ -19,6 +19,7 @@
 
 ---
 layout: doc_page
+title: "Druid Alerts"
 ---
 # Druid Alerts
 
diff --git a/docs/content/operations/api-reference.md b/docs/content/operations/api-reference.md
index 101af95..030a748 100644
--- a/docs/content/operations/api-reference.md
+++ b/docs/content/operations/api-reference.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "API Reference"
 ---
-
 # API Reference
 
 This page documents all of the API endpoints for each Druid service type.
diff --git a/docs/content/operations/dump-segment.md b/docs/content/operations/dump-segment.md
index b881e11..1f93dfd 100644
--- a/docs/content/operations/dump-segment.md
+++ b/docs/content/operations/dump-segment.md
@@ -19,6 +19,7 @@
 
 ---
 layout: doc_page
+title: "DumpSegment tool"
 ---
 # DumpSegment tool
 
diff --git a/docs/content/operations/http-compression.md b/docs/content/operations/http-compression.md
index 4bbcd50..5ba9c0d 100644
--- a/docs/content/operations/http-compression.md
+++ b/docs/content/operations/http-compression.md
@@ -19,6 +19,7 @@
 
 ---
 layout: doc_page
+title: "HTTP Compression"
 ---
 # HTTP Compression
 
diff --git a/docs/content/operations/including-extensions.md b/docs/content/operations/including-extensions.md
index 2de6b7f..d8cb69e 100644
--- a/docs/content/operations/including-extensions.md
+++ b/docs/content/operations/including-extensions.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Loading extensions"
 ---
-
 # Loading extensions
 
 ## Loading core extensions
diff --git a/docs/content/operations/insert-segment-to-db.md b/docs/content/operations/insert-segment-to-db.md
index 3c4306e..8f9aed6 100644
--- a/docs/content/operations/insert-segment-to-db.md
+++ b/docs/content/operations/insert-segment-to-db.md
@@ -19,6 +19,7 @@
 
 ---
 layout: doc_page
+title: "insert-segment-to-db Tool"
 ---
 # insert-segment-to-db Tool
 
diff --git a/docs/content/operations/metrics.md b/docs/content/operations/metrics.md
index 4729e2e..0a4b010 100644
--- a/docs/content/operations/metrics.md
+++ b/docs/content/operations/metrics.md
@@ -19,6 +19,7 @@
 
 ---
 layout: doc_page
+title: "Druid Metrics"
 ---
 # Druid Metrics
 
diff --git a/docs/content/operations/other-hadoop.md b/docs/content/operations/other-hadoop.md
index f496a9b..594d167 100644
--- a/docs/content/operations/other-hadoop.md
+++ b/docs/content/operations/other-hadoop.md
@@ -19,6 +19,7 @@
 
 ---
 layout: doc_page
+title: "Working with different versions of Hadoop"
 ---
 # Working with different versions of Hadoop
 
diff --git a/docs/content/operations/password-provider.md b/docs/content/operations/password-provider.md
index 9a89990..7ed0e5a 100644
--- a/docs/content/operations/password-provider.md
+++ b/docs/content/operations/password-provider.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Password Provider"
 ---
-
 # Password Provider
 
 Druid needs passwords to access various secured systems, such as the metadata store and the key store containing server certificates.
diff --git a/docs/content/operations/performance-faq.md b/docs/content/operations/performance-faq.md
index b18ec1a..6b703ab 100644
--- a/docs/content/operations/performance-faq.md
+++ b/docs/content/operations/performance-faq.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Performance FAQ"
 ---
-
 # Performance FAQ
 
 ## I can't match your benchmarked results
diff --git a/docs/content/operations/pull-deps.md b/docs/content/operations/pull-deps.md
index d9abf57..6721f7f 100644
--- a/docs/content/operations/pull-deps.md
+++ b/docs/content/operations/pull-deps.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "pull-deps Tool"
 ---
-
 # pull-deps Tool
 
 `pull-deps` is a tool that can pull down dependencies to the local repository and lay dependencies out into the extension directory as needed.
diff --git a/docs/content/operations/recommendations.md b/docs/content/operations/recommendations.md
index aa365e0..2672ea3 100644
--- a/docs/content/operations/recommendations.md
+++ b/docs/content/operations/recommendations.md
@@ -19,10 +19,9 @@
 
 ---
 layout: doc_page
+title: "Recommendations"
 ---
-
-Recommendations
-===============
+# Recommendations
 
 # Some General guidelines
 
diff --git a/docs/content/operations/reset-cluster.md b/docs/content/operations/reset-cluster.md
index b16baa7..f33667e 100644
--- a/docs/content/operations/reset-cluster.md
+++ b/docs/content/operations/reset-cluster.md
@@ -19,6 +19,7 @@
 
 ---
 layout: doc_page
+title: "ResetCluster tool"
 ---
 # ResetCluster tool
 
diff --git a/docs/content/operations/rolling-updates.md b/docs/content/operations/rolling-updates.md
index 72acfe0..df94842 100644
--- a/docs/content/operations/rolling-updates.md
+++ b/docs/content/operations/rolling-updates.md
@@ -19,10 +19,9 @@
 
 ---
 layout: doc_page
+title: "Rolling Updates"
 ---
-
-Rolling Updates
-===============
+# Rolling Updates
 
 For rolling Druid cluster updates with no downtime, we recommend updating Druid nodes in the
 following order:
diff --git a/docs/content/operations/rule-configuration.md b/docs/content/operations/rule-configuration.md
index c4b0c71..c3dc4b2 100644
--- a/docs/content/operations/rule-configuration.md
+++ b/docs/content/operations/rule-configuration.md
@@ -19,6 +19,7 @@
 
 ---
 layout: doc_page
+title: "Retaining or Automatically Dropping Data"
 ---
 # Retaining or Automatically Dropping Data
 
diff --git a/docs/content/operations/segment-optimization.md b/docs/content/operations/segment-optimization.md
index a3d8aa6..e539d9d 100644
--- a/docs/content/operations/segment-optimization.md
+++ b/docs/content/operations/segment-optimization.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Segment size optimization"
 ---
-
 # Segment size optimization
 
 In Druid, it's important to optimize the segment size because
diff --git a/docs/content/operations/tls-support.md b/docs/content/operations/tls-support.md
index be9c451..a98036e 100644
--- a/docs/content/operations/tls-support.md
+++ b/docs/content/operations/tls-support.md
@@ -19,10 +19,9 @@
 
 ---
 layout: doc_page
+title: "TLS Support"
 ---
-
-TLS Support
-===============
+# TLS Support
 
 # General Configuration
 
diff --git a/docs/content/operations/use_sbt_to_build_fat_jar.md b/docs/content/operations/use_sbt_to_build_fat_jar.md
index eeae5af..79f5a0f 100644
--- a/docs/content/operations/use_sbt_to_build_fat_jar.md
+++ b/docs/content/operations/use_sbt_to_build_fat_jar.md
@@ -19,10 +19,10 @@
 
 ---
 layout: doc_page
+title: "Content for build.sbt"
 ---
+# Content for build.sbt
 
-Content for build.sbt
----------------------
 ```scala
 libraryDependencies ++= Seq(
   "com.amazonaws" % "aws-java-sdk" % "1.9.23" exclude("common-logging", "common-logging"),
diff --git a/docs/content/querying/aggregations.md b/docs/content/querying/aggregations.md
index 8f65140..ddd804e 100644
--- a/docs/content/querying/aggregations.md
+++ b/docs/content/querying/aggregations.md
@@ -19,6 +19,7 @@
 
 ---
 layout: doc_page
+title: "Aggregations"
 ---
 # Aggregations
 
diff --git a/docs/content/querying/caching.md b/docs/content/querying/caching.md
index f5da776..68b7daa 100644
--- a/docs/content/querying/caching.md
+++ b/docs/content/querying/caching.md
@@ -19,6 +19,7 @@
 
 ---
 layout: doc_page
+title: "Query Caching"
 ---
 # Query Caching
 
diff --git a/docs/content/querying/datasource.md b/docs/content/querying/datasource.md
index a966fba..7dee075 100644
--- a/docs/content/querying/datasource.md
+++ b/docs/content/querying/datasource.md
@@ -19,9 +19,9 @@
 
 ---
 layout: doc_page
+title: "Datasources"
 ---
-
-## Datasources
+# Datasources
 
 A data source is the Druid equivalent of a database table. However, a query can also masquerade as a data source, providing subquery-like functionality. Query data sources are currently supported only by [GroupBy](../querying/groupbyquery.html) queries.
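
For reference, a plain table data source is specified roughly as follows (the name is a placeholder); a query data source instead wraps an inner groupBy query under a `query` field:

```json
{
  "type": "table",
  "name": "wikipedia"
}
```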
 
diff --git a/docs/content/querying/datasourcemetadataquery.md b/docs/content/querying/datasourcemetadataquery.md
index 2f10222..f7d2da1 100644
--- a/docs/content/querying/datasourcemetadataquery.md
+++ b/docs/content/querying/datasourcemetadataquery.md
@@ -19,8 +19,10 @@
 
 ---
 layout: doc_page
+title: "Data Source Metadata Queries"
 ---
 # Data Source Metadata Queries
+
 Data Source Metadata queries return metadata information for a dataSource.  These queries return information about:
 
 * The timestamp of latest ingested event for the dataSource. This is the ingested event without any consideration of rollup.
diff --git a/docs/content/querying/dimensionspecs.md b/docs/content/querying/dimensionspecs.md
index a35b029..823ea07 100644
--- a/docs/content/querying/dimensionspecs.md
+++ b/docs/content/querying/dimensionspecs.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Transforming Dimension Values"
 ---
-
 # Transforming Dimension Values
 
 The following JSON fields can be used in a query to operate on dimension values.
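
The simplest case is a `default` dimension spec that just renames a dimension on output, for example (names are illustrative):

```json
{
  "type": "default",
  "dimension": "page",
  "outputName": "page_name"
}
```
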
diff --git a/docs/content/querying/filters.md b/docs/content/querying/filters.md
index 596f337..9012df8 100644
--- a/docs/content/querying/filters.md
+++ b/docs/content/querying/filters.md
@@ -19,8 +19,10 @@
 
 ---
 layout: doc_page
+title: "Query Filters"
 ---
 # Query Filters
+
 A filter is a JSON object indicating which rows of data should be included in the computation for a query. It’s essentially the equivalent of the WHERE clause in SQL. Druid supports the following types of filters.
 
 ### Selector filter
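
As a quick sketch, a selector filter matching a single dimension value might look like this (dimension and value are placeholders):

```json
{
  "type": "selector",
  "dimension": "country",
  "value": "United States"
}
```
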
diff --git a/docs/content/querying/granularities.md b/docs/content/querying/granularities.md
index 677a476..c8a1a47 100644
--- a/docs/content/querying/granularities.md
+++ b/docs/content/querying/granularities.md
@@ -19,9 +19,10 @@
 
 ---
 layout: doc_page
+title: "Aggregation Granularity"
 ---
-
 # Aggregation Granularity
+
 The granularity field determines how data gets bucketed across the time dimension, or how it gets aggregated by hour, day, minute, etc.
 
 It can be specified either as a string for simple granularities or as an object for arbitrary granularities.
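
For instance, the string form is simply `"granularity": "day"`, while an equivalent period granularity object (time zone shown only as an example) might look like:

```json
{
  "type": "period",
  "period": "P1D",
  "timeZone": "America/Los_Angeles"
}
```
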
diff --git a/docs/content/querying/groupbyquery.md b/docs/content/querying/groupbyquery.md
index 4a90908..e4e39a3 100644
--- a/docs/content/querying/groupbyquery.md
+++ b/docs/content/querying/groupbyquery.md
@@ -19,6 +19,7 @@
 
 ---
 layout: doc_page
+title: "groupBy Queries"
 ---
 # groupBy Queries
 
diff --git a/docs/content/querying/having.md b/docs/content/querying/having.md
index da37d15..aba8acf 100644
--- a/docs/content/querying/having.md
+++ b/docs/content/querying/having.md
@@ -19,8 +19,10 @@
 
 ---
 layout: doc_page
+title: "Filter groupBy Query Results"
 ---
 # Filter groupBy Query Results
+
 A having clause is a JSON object identifying which rows from a groupBy query should be returned, by specifying conditions on aggregated values.
 
 It is essentially the equivalent of the HAVING clause in SQL.
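
As a sketch, a having clause that keeps only rows whose aggregated `num_rows` exceeds 100 (the name and threshold are placeholders) might look like:

```json
{
  "type": "greaterThan",
  "aggregation": "num_rows",
  "value": 100
}
```
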
diff --git a/docs/content/querying/joins.md b/docs/content/querying/joins.md
index 6286c56..1c8c5fd 100644
--- a/docs/content/querying/joins.md
+++ b/docs/content/querying/joins.md
@@ -19,6 +19,7 @@
 
 ---
 layout: doc_page
+title: "Joins"
 ---
 # Joins
 
diff --git a/docs/content/querying/limitspec.md b/docs/content/querying/limitspec.md
index cfd715d..cc57ab3 100644
--- a/docs/content/querying/limitspec.md
+++ b/docs/content/querying/limitspec.md
@@ -19,8 +19,10 @@
 
 ---
 layout: doc_page
+title: "Sort groupBy Query Results"
 ---
 # Sort groupBy Query Results
+
 The limitSpec field provides the functionality to sort and limit the set of results from a groupBy query. If you group by a single dimension and are ordering by a single metric, we highly recommend using [TopN Queries](../querying/topnquery.html) instead. The performance will be substantially better. Available options are:
 
 ### DefaultLimitSpec
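
A sketch of a default limitSpec that returns the top 10 rows ordered by an aggregated column (the column name is a placeholder):

```json
{
  "type": "default",
  "limit": 10,
  "columns": [
    {
      "dimension": "num_rows",
      "direction": "descending"
    }
  ]
}
```
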
diff --git a/docs/content/querying/lookups.md b/docs/content/querying/lookups.md
index c5bafab..b86501c 100644
--- a/docs/content/querying/lookups.md
+++ b/docs/content/querying/lookups.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Lookups"
 ---
-
 # Lookups
 
 <div class="note caution">
diff --git a/docs/content/querying/multi-value-dimensions.md b/docs/content/querying/multi-value-dimensions.md
index ef20032..532538e 100644
--- a/docs/content/querying/multi-value-dimensions.md
+++ b/docs/content/querying/multi-value-dimensions.md
@@ -19,6 +19,7 @@
 
 ---
 layout: doc_page
+title: "Multi-value dimensions"
 ---
 # Multi-value dimensions
 
diff --git a/docs/content/querying/multitenancy.md b/docs/content/querying/multitenancy.md
index 4e2b345..7ab468e 100644
--- a/docs/content/querying/multitenancy.md
+++ b/docs/content/querying/multitenancy.md
@@ -19,6 +19,7 @@
 
 ---
 layout: doc_page
+title: "Multitenancy Considerations"
 ---
 # Multitenancy Considerations
 
diff --git a/docs/content/querying/post-aggregations.md b/docs/content/querying/post-aggregations.md
index 9b7d776..15f8d80 100644
--- a/docs/content/querying/post-aggregations.md
+++ b/docs/content/querying/post-aggregations.md
@@ -19,8 +19,10 @@
 
 ---
 layout: doc_page
+title: "Post-Aggregations"
 ---
 # Post-Aggregations
+
 Post-aggregations are specifications of processing that should happen on aggregated values as they come out of Druid. If you include a post aggregation as part of a query, make sure to include all aggregators the post-aggregator requires.
 
 There are several post-aggregators available.
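
For example, an arithmetic post-aggregator that divides two aggregated fields (field names are placeholders) might look like:

```json
{
  "type": "arithmetic",
  "name": "average",
  "fn": "/",
  "fields": [
    { "type": "fieldAccess", "fieldName": "total" },
    { "type": "fieldAccess", "fieldName": "rows" }
  ]
}
```
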
diff --git a/docs/content/querying/query-context.md b/docs/content/querying/query-context.md
index 135b82b..81d1e09 100644
--- a/docs/content/querying/query-context.md
+++ b/docs/content/querying/query-context.md
@@ -19,10 +19,9 @@
 
 ---
 layout: doc_page
+title: "Query Context"
 ---
-
-Query Context
-=============
+# Query Context
 
 The query context is used for various query configuration parameters. The following parameters apply to all queries.
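
For orientation, a context object attached to a query might look roughly like this; the values are illustrative:

```json
{
  "timeout": 60000,
  "priority": 1,
  "useCache": true
}
```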
 
diff --git a/docs/content/querying/querying.md b/docs/content/querying/querying.md
index 1fdb49d..af59ba6 100644
--- a/docs/content/querying/querying.md
+++ b/docs/content/querying/querying.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Querying"
 ---
-
 # Querying
 
 Queries are made using an HTTP REST style request to queryable nodes ([Broker](../design/broker.html),
diff --git a/docs/content/querying/scan-query.md b/docs/content/querying/scan-query.md
index 4636aa4..3571727 100644
--- a/docs/content/querying/scan-query.md
+++ b/docs/content/querying/scan-query.md
@@ -19,9 +19,10 @@
 
 ---
 layout: doc_page
+title: "Scan query"
 ---
-
 # Scan query
+
 Scan query returns raw Druid rows in streaming mode.
 
 ```json
diff --git a/docs/content/querying/searchquery.md b/docs/content/querying/searchquery.md
index d621f86..e26746b 100644
--- a/docs/content/querying/searchquery.md
+++ b/docs/content/querying/searchquery.md
@@ -19,8 +19,10 @@
 
 ---
 layout: doc_page
+title: "Search Queries"
 ---
 # Search Queries
+
 A search query returns dimension values that match the search specification.
 
 ```json
diff --git a/docs/content/querying/searchqueryspec.md b/docs/content/querying/searchqueryspec.md
index 0a22448..6e15757 100644
--- a/docs/content/querying/searchqueryspec.md
+++ b/docs/content/querying/searchqueryspec.md
@@ -19,8 +19,10 @@
 
 ---
 layout: doc_page
+title: "Refining Search Queries"
 ---
 # Refining Search Queries
+
 Search query specs define how a "match" is defined between a search value and a dimension value. The available search query specs are:
 
 InsensitiveContainsSearchQuerySpec
diff --git a/docs/content/querying/segmentmetadataquery.md b/docs/content/querying/segmentmetadataquery.md
index e979f58..ae3477f 100644
--- a/docs/content/querying/segmentmetadataquery.md
+++ b/docs/content/querying/segmentmetadataquery.md
@@ -19,8 +19,10 @@
 
 ---
 layout: doc_page
+title: "Segment Metadata Queries"
 ---
 # Segment Metadata Queries
+
 Segment metadata queries return per-segment information about:
 
 * Cardinality of all columns in the segment
diff --git a/docs/content/querying/select-query.md b/docs/content/querying/select-query.md
index ac44e49..7454ba5 100644
--- a/docs/content/querying/select-query.md
+++ b/docs/content/querying/select-query.md
@@ -19,6 +19,7 @@
 
 ---
 layout: doc_page
+title: "Select Queries"
 ---
 # Select Queries
 
diff --git a/docs/content/querying/sorting-orders.md b/docs/content/querying/sorting-orders.md
index a50617e..4ba336e 100644
--- a/docs/content/querying/sorting-orders.md
+++ b/docs/content/querying/sorting-orders.md
@@ -19,8 +19,10 @@
 
 ---
 layout: doc_page
+title: "Sorting Orders"
 ---
 # Sorting Orders
+
 These sorting orders are used by the [TopNMetricSpec](./topnmetricspec.html), [SearchQuery](./searchquery.html), GroupByQuery's [LimitSpec](./limitspec.html), and [BoundFilter](./filters.html#bound-filter).
 
 ## Lexicographic
diff --git a/docs/content/querying/sql.md b/docs/content/querying/sql.md
index e1fb2e4..5fa4319 100644
--- a/docs/content/querying/sql.md
+++ b/docs/content/querying/sql.md
@@ -19,6 +19,7 @@
 
 ---
 layout: doc_page
+title: "SQL"
 ---
 # SQL
 
diff --git a/docs/content/querying/timeboundaryquery.md b/docs/content/querying/timeboundaryquery.md
index 5aa9581..971f733 100644
--- a/docs/content/querying/timeboundaryquery.md
+++ b/docs/content/querying/timeboundaryquery.md
@@ -19,8 +19,10 @@
 
 ---
 layout: doc_page
+title: "Time Boundary Queries"
 ---
 # Time Boundary Queries
+
 Time boundary queries return the earliest and latest data points of a data set. The grammar is:
 
 ```json
diff --git a/docs/content/querying/timeseriesquery.md b/docs/content/querying/timeseriesquery.md
index d716329..1af6815 100644
--- a/docs/content/querying/timeseriesquery.md
+++ b/docs/content/querying/timeseriesquery.md
@@ -19,9 +19,9 @@
 
 ---
 layout: doc_page
+title: "Timeseries queries"
 ---
-Timeseries queries
-==================
+# Timeseries queries
 
 These types of queries take a timeseries query object and return an array of JSON objects where each object represents a value asked for by the timeseries query.
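
A minimal timeseries query, with placeholder datasource, interval, and aggregator names, might look like:

```json
{
  "queryType": "timeseries",
  "dataSource": "sample_datasource",
  "granularity": "day",
  "aggregations": [
    { "type": "longSum", "name": "total", "fieldName": "count" }
  ],
  "intervals": [ "2018-01-01/2018-02-01" ]
}
```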
 
diff --git a/docs/content/querying/topnmetricspec.md b/docs/content/querying/topnmetricspec.md
index 4cf035c..f3b195b 100644
--- a/docs/content/querying/topnmetricspec.md
+++ b/docs/content/querying/topnmetricspec.md
@@ -19,9 +19,9 @@
 
 ---
 layout: doc_page
+title: "TopNMetricSpec"
 ---
-TopNMetricSpec
-==================
+# TopNMetricSpec
 
 The topN metric spec specifies how topN values should be sorted.
 
diff --git a/docs/content/querying/topnquery.md b/docs/content/querying/topnquery.md
index bca0bdc..b9adf80 100644
--- a/docs/content/querying/topnquery.md
+++ b/docs/content/querying/topnquery.md
@@ -19,9 +19,9 @@
 
 ---
 layout: doc_page
+title: "TopN queries"
 ---
-TopN queries
-==================
+# TopN queries
 
 TopN queries return a sorted set of results for the values in a given dimension according to some criteria. Conceptually, they can be thought of as an approximate [GroupByQuery](../querying/groupbyquery.html) over a single dimension with an [Ordering](../querying/limitspec.html) spec. TopNs are much faster and resource efficient than GroupBys for this use case. These types of queries take a topN query object and return an array of JSON objects where each object represents a value asked for by the topN query.
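
As a sketch, a topN query returning the five most-edited pages (all names and intervals are placeholders) might look like:

```json
{
  "queryType": "topN",
  "dataSource": "sample_datasource",
  "dimension": "page",
  "metric": "edits",
  "threshold": 5,
  "granularity": "all",
  "aggregations": [
    { "type": "longSum", "name": "edits", "fieldName": "count" }
  ],
  "intervals": [ "2018-01-01/2018-02-01" ]
}
```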
 
diff --git a/docs/content/querying/virtual-columns.md b/docs/content/querying/virtual-columns.md
index 30a94e4..1a9779c 100644
--- a/docs/content/querying/virtual-columns.md
+++ b/docs/content/querying/virtual-columns.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Virtual Columns"
 ---
-
 # Virtual Columns
 
 Virtual columns are queryable column "views" created from a set of columns during a query. 
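
For example, an expression virtual column deriving a bucketed value from an existing numeric column (column and output names are placeholders) might look like:

```json
{
  "type": "expression",
  "name": "delta_bucket",
  "expression": "floor(delta / 100)",
  "outputType": "LONG"
}
```
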
diff --git a/docs/content/tutorials/cluster.md b/docs/content/tutorials/cluster.md
index 65d7f17..f9b2cee 100644
--- a/docs/content/tutorials/cluster.md
+++ b/docs/content/tutorials/cluster.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Clustering"
 ---
-
 # Clustering
 
 Druid is designed to be deployed as a scalable, fault-tolerant cluster.
diff --git a/docs/content/tutorials/index.md b/docs/content/tutorials/index.md
index e68aaa7..1863365 100644
--- a/docs/content/tutorials/index.md
+++ b/docs/content/tutorials/index.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Quickstart"
 ---
-
 # Druid Quickstart
 
 In this quickstart, we will download Druid and set it up on a single machine. The cluster will be ready to load data
@@ -106,7 +106,7 @@
 
 All persistent state such as the cluster metadata store and segments for the services will be kept in the `var` directory under the apache-druid-#{DRUIDVERSION} package root. Logs for the services are located at `var/sv`.
 
-Later on, if you'd like to stop the services, CTRL-C to exit the `bin/supervise` script, which will terminate the Druid processes. 
+Later on, if you'd like to stop the services, CTRL-C to exit the `bin/supervise` script, which will terminate the Druid processes.
 
 ### Resetting cluster state
 
@@ -153,7 +153,7 @@
   * regionIsoCode
   * regionName
   * user
- 
+
 ```json
 {
   "timestamp":"2015-09-12T20:03:45.018Z",
diff --git a/docs/content/tutorials/tutorial-batch-hadoop.md b/docs/content/tutorials/tutorial-batch-hadoop.md
index 921972f..0afd25b 100644
--- a/docs/content/tutorials/tutorial-batch-hadoop.md
+++ b/docs/content/tutorials/tutorial-batch-hadoop.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Tutorial: Load batch data using Hadoop"
 ---
-
 # Tutorial: Load batch data using Hadoop
 
 This tutorial shows you how to load data files into Druid using a remote Hadoop cluster.
diff --git a/docs/content/tutorials/tutorial-batch.md b/docs/content/tutorials/tutorial-batch.md
index d7842ba..1cf8b2f 100644
--- a/docs/content/tutorials/tutorial-batch.md
+++ b/docs/content/tutorials/tutorial-batch.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Tutorial: Loading a file"
 ---
-
 # Tutorial: Loading a file
 
 ## Getting started
diff --git a/docs/content/tutorials/tutorial-compaction.md b/docs/content/tutorials/tutorial-compaction.md
index 697ce6f..3080eb8 100644
--- a/docs/content/tutorials/tutorial-compaction.md
+++ b/docs/content/tutorials/tutorial-compaction.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Tutorial: Compacting segments"
 ---
-
 # Tutorial: Compacting segments
 
 This tutorial demonstrates how to compact existing segments into fewer but larger segments.
diff --git a/docs/content/tutorials/tutorial-delete-data.md b/docs/content/tutorials/tutorial-delete-data.md
index 877abb9..49b9d14 100644
--- a/docs/content/tutorials/tutorial-delete-data.md
+++ b/docs/content/tutorials/tutorial-delete-data.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Tutorial: Deleting data"
 ---
-
 # Tutorial: Deleting data
 
 This tutorial demonstrates how to delete existing data.
diff --git a/docs/content/tutorials/tutorial-ingestion-spec.md b/docs/content/tutorials/tutorial-ingestion-spec.md
index 1c3d34c..34ee3ab 100644
--- a/docs/content/tutorials/tutorial-ingestion-spec.md
+++ b/docs/content/tutorials/tutorial-ingestion-spec.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Tutorial: Writing an ingestion spec"
 ---
-
 # Tutorial: Writing an ingestion spec
 
 This tutorial will guide the reader through the process of defining an ingestion spec, pointing out key considerations and guidelines.
diff --git a/docs/content/tutorials/tutorial-kafka.md b/docs/content/tutorials/tutorial-kafka.md
index 2795331..2cfc05f 100644
--- a/docs/content/tutorials/tutorial-kafka.md
+++ b/docs/content/tutorials/tutorial-kafka.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Tutorial: Load streaming data from Kafka"
 ---
-
 # Tutorial: Load streaming data from Kafka
 
 ## Getting started
diff --git a/docs/content/tutorials/tutorial-query.md b/docs/content/tutorials/tutorial-query.md
index 23de38c..fbf75e0 100644
--- a/docs/content/tutorials/tutorial-query.md
+++ b/docs/content/tutorials/tutorial-query.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Tutorial: Querying data"
 ---
-
 # Tutorial: Querying data
 
 This tutorial will demonstrate how to query data in Druid, with examples for Druid's native query format and Druid SQL.
diff --git a/docs/content/tutorials/tutorial-retention.md b/docs/content/tutorials/tutorial-retention.md
index b5acc41..8c894f3 100644
--- a/docs/content/tutorials/tutorial-retention.md
+++ b/docs/content/tutorials/tutorial-retention.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Tutorial: Configuring data retention"
 ---
-
 # Tutorial: Configuring data retention
 
 This tutorial demonstrates how to configure retention rules on a datasource to set the time intervals of data that will be retained or dropped.
diff --git a/docs/content/tutorials/tutorial-rollup.md b/docs/content/tutorials/tutorial-rollup.md
index f945162..dd57085 100644
--- a/docs/content/tutorials/tutorial-rollup.md
+++ b/docs/content/tutorials/tutorial-rollup.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Tutorial: Roll-up"
 ---
-
 # Tutorial: Roll-up
 
 Druid can summarize raw data at ingestion time using a process we refer to as "roll-up". Roll-up is a first-level aggregation operation over a selected set of columns that reduces the size of stored segments.
diff --git a/docs/content/tutorials/tutorial-tranquility.md b/docs/content/tutorials/tutorial-tranquility.md
index 2b14c3a..21fbe5a 100644
--- a/docs/content/tutorials/tutorial-tranquility.md
+++ b/docs/content/tutorials/tutorial-tranquility.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Tutorial: Load streaming data with HTTP push"
 ---
-
 # Tutorial: Load streaming data with HTTP push
 
 ## Getting started
diff --git a/docs/content/tutorials/tutorial-transform-spec.md b/docs/content/tutorials/tutorial-transform-spec.md
index 677206a..f1e13c4 100644
--- a/docs/content/tutorials/tutorial-transform-spec.md
+++ b/docs/content/tutorials/tutorial-transform-spec.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Tutorial: Transforming input data"
 ---
-
 # Tutorial: Transforming input data
 
 This tutorial will demonstrate how to use transform specs to filter and transform input data during ingestion.
diff --git a/docs/content/tutorials/tutorial-update-data.md b/docs/content/tutorials/tutorial-update-data.md
index 4a2b725..f24e559 100644
--- a/docs/content/tutorials/tutorial-update-data.md
+++ b/docs/content/tutorials/tutorial-update-data.md
@@ -19,8 +19,8 @@
 
 ---
 layout: doc_page
+title: "Tutorial: Updating existing data"
 ---
-
 # Tutorial: Updating existing data
 
 This tutorial demonstrates how to update existing data, showing both overwrites and appends.