Rebuild site
diff --git a/content/Powered-By.html b/content/Powered-By.html
index cf2df24..c8709a6 100644
--- a/content/Powered-By.html
+++ b/content/Powered-By.html
@@ -142,7 +142,7 @@
 
 <p class="post-meta"></p>
 
-<p>Want to be added to this page? Send an email <a href="mailto:dev@storm.apache.org">here</a>.</p>
+<div class="documentation-content"><p>Want to be added to this page? Send an email <a href="mailto:dev@storm.apache.org">here</a>.</p>
 
 <table class="table table-striped">
 
@@ -1177,7 +1177,7 @@
 
 
 </table>
-
+</div>
 
 
 	          </div>
diff --git a/content/contribute/BYLAWS.html b/content/contribute/BYLAWS.html
index 242e290..2a0b584 100644
--- a/content/contribute/BYLAWS.html
+++ b/content/contribute/BYLAWS.html
@@ -142,7 +142,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="roles-and-responsibilities">Roles and Responsibilities</h2>
+<div class="documentation-content"><h2 id="roles-and-responsibilities">Roles and Responsibilities</h2>
 
 <p>Apache projects define a set of roles with associated rights and responsibilities. These roles govern what tasks an individual may perform within the project. The roles are defined in the following sections:</p>
 
@@ -356,7 +356,7 @@
 <td><a href="mailto:dev@storm.apache.org">dev@storm.apache.org</a></td>
 </tr>
 </tbody></table>
-
+</div>
 
 
 	          </div>
diff --git a/content/contribute/Contributing-to-Storm.html b/content/contribute/Contributing-to-Storm.html
index ff4c290..da28f96 100644
--- a/content/contribute/Contributing-to-Storm.html
+++ b/content/contribute/Contributing-to-Storm.html
@@ -142,7 +142,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="getting-started-with-contributing">Getting started with contributing</h3>
+<div class="documentation-content"><h3 id="getting-started-with-contributing">Getting started with contributing</h3>
 
 <p>Some of the issues on the <a href="https://issues.apache.org/jira/browse/STORM">issue tracker</a> are marked with the &quot;Newbie&quot; label. If you&#39;re interesting in contributing to Storm but don&#39;t know where to begin, these are good issues to start with. These issues are a great way to get your feet wet with learning the codebase because they require learning about only an isolated portion of the codebase and are a relatively small amount of work.</p>
 
@@ -167,7 +167,7 @@
 <h3 id="contributing-documentation">Contributing documentation</h3>
 
 <p>Documentation contributions are very welcome! The best way to send contributions is as emails through the mailing list.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/contribute/People.html b/content/contribute/People.html
index cddc6e3..c52cc58 100644
--- a/content/contribute/People.html
+++ b/content/contribute/People.html
@@ -142,7 +142,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="project-management">Project Management</h2>
+<div class="documentation-content"><h2 id="project-management">Project Management</h2>
 
 <table class="table table-striped table-bordered table-responsive">
   <thead>
@@ -434,7 +434,7 @@
   </tr>
 
 </table>
-
+</div>
 
 
 	          </div>
diff --git a/content/css/style.css b/content/css/style.css
index 0302cb7..57ea25c 100644
--- a/content/css/style.css
+++ b/content/css/style.css
@@ -532,7 +532,19 @@
 	border-top: none;
 }
 
+.documentation-content table tr {
+	background-color: #fff;
+	border-top: 1px solid #c6cbd1;
+}
 
+.documentation-content table th, .documentation-content table td {
+	padding: 6px 13px;
+	border: 1px solid #dfe2e5;
+}
+
+.documentation-content table tr:nth-child(2n) {
+	background-color: #f6f8fa;
+}
 
 
 
diff --git a/content/feed.xml b/content/feed.xml
index 19d5acd..9fdda0e 100644
--- a/content/feed.xml
+++ b/content/feed.xml
@@ -5,8 +5,8 @@
     <description></description>
     <link>http://storm.apache.org/</link>
     <atom:link href="http://storm.apache.org/feed.xml" rel="self" type="application/rss+xml"/>
-    <pubDate>Sat, 12 May 2018 17:59:26 +0200</pubDate>
-    <lastBuildDate>Sat, 12 May 2018 17:59:26 +0200</lastBuildDate>
+    <pubDate>Tue, 15 May 2018 17:16:07 +0200</pubDate>
+    <lastBuildDate>Tue, 15 May 2018 17:16:07 +0200</lastBuildDate>
     <generator>Jekyll v3.6.2</generator>
     
       <item>
diff --git a/content/news.html b/content/news.html
index 15765f1..a45df5f 100644
--- a/content/news.html
+++ b/content/news.html
@@ -142,7 +142,7 @@
 
 <p class="post-meta"></p>
 
-<div class="row">
+<div class="documentation-content"><div class="row">
     <div class="col-md-3">
         <ul class="news" id="news-list">
             
@@ -234,7 +234,7 @@
              
         </div>
     </div>
-</div>
+</div></div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Acking-framework-implementation.html b/content/releases/1.0.6/Acking-framework-implementation.html
index b57c18b..59c87d8 100644
--- a/content/releases/1.0.6/Acking-framework-implementation.html
+++ b/content/releases/1.0.6/Acking-framework-implementation.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p><a href="https://github.com/apache/incubator-storm/blob/46c3ba7/storm-core/src/clj/backtype/storm/daemon/acker.clj#L28">Storm&#39;s acker</a> tracks completion of each tupletree with a checksum hash: each time a tuple is sent, its value is XORed into the checksum, and each time a tuple is acked its value is XORed in again. If all tuples have been successfully acked, the checksum will be zero (the odds that the checksum will be zero otherwise are vanishingly small).</p>
+<div class="documentation-content"><p><a href="https://github.com/apache/incubator-storm/blob/46c3ba7/storm-core/src/clj/backtype/storm/daemon/acker.clj#L28">Storm&#39;s acker</a> tracks completion of each tuple tree with a checksum hash: each time a tuple is sent, its value is XORed into the checksum, and each time a tuple is acked its value is XORed in again. If all tuples have been successfully acked, the checksum will be zero (the odds that the checksum will be zero otherwise are vanishingly small).</p>
 
 <p>You can read a bit more about the <a href="Guaranteeing-message-processing.html#what-is-storms-reliability-api">reliability mechanism</a> elsewhere on the wiki -- this explains the internal details.</p>
 
@@ -180,7 +180,7 @@
 <p>Internally, it holds several HashMaps (&#39;buckets&#39;) of its own, each holding a cohort of records that will expire at the same time.  Let&#39;s call the longest-lived bucket death row, and the most recent the nursery. Whenever a value is <code>.put()</code> to the RotatingMap, it is relocated to the nursery -- and removed from any other bucket it might have been in (effectively resetting its death clock).</p>
 
 <p>Whenever its owner calls <code>.rotate()</code>, the RotatingMap advances each cohort one step further towards expiration. (Typically, Storm objects call rotate on every receipt of a system tick stream tuple.) If there are any key-value pairs in the former death row bucket, the RotatingMap invokes a callback (given in the constructor) for each key-value pair, letting its owner take appropriate action (eg, failing a tuple.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Clojure-DSL.html b/content/releases/1.0.6/Clojure-DSL.html
index e917f7c..75368b6 100644
--- a/content/releases/1.0.6/Clojure-DSL.html
+++ b/content/releases/1.0.6/Clojure-DSL.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm comes with a Clojure DSL for defining spouts, bolts, and topologies. The Clojure DSL has access to everything the Java API exposes, so if you&#39;re a Clojure user you can code Storm topologies without touching Java at all. The Clojure DSL is defined in the source in the <a href="http://github.com/apache/storm/blob/v1.0.6/storm-core/src/clj/org/apache/storm/clojure.clj">org.apache.storm.clojure</a> namespace.</p>
+<div class="documentation-content"><p>Storm comes with a Clojure DSL for defining spouts, bolts, and topologies. The Clojure DSL has access to everything the Java API exposes, so if you&#39;re a Clojure user you can code Storm topologies without touching Java at all. The Clojure DSL is defined in the source in the <a href="http://github.com/apache/storm/blob/v1.0.6/storm-core/src/clj/org/apache/storm/clojure.clj">org.apache.storm.clojure</a> namespace.</p>
 
 <p>This page outlines all the pieces of the Clojure DSL, including:</p>
 
@@ -371,7 +371,7 @@
 <h3 id="testing-topologies">Testing topologies</h3>
 
 <p><a href="http://www.pixelmachine.org/2011/12/17/Testing-Storm-Topologies.html">This blog post</a> and its <a href="http://www.pixelmachine.org/2011/12/21/Testing-Storm-Topologies-Part-2.html">follow-up</a> give a good overview of Storm&#39;s powerful built-in facilities for testing topologies in Clojure.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Command-line-client.html b/content/releases/1.0.6/Command-line-client.html
index 20ca10c..16bd00f 100644
--- a/content/releases/1.0.6/Command-line-client.html
+++ b/content/releases/1.0.6/Command-line-client.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page describes all the commands that are possible with the &quot;storm&quot; command line client. To learn how to set up your &quot;storm&quot; client to talk to a remote cluster, follow the instructions in <a href="Setting-up-development-environment.html">Setting up development environment</a>.</p>
+<div class="documentation-content"><p>This page describes all the commands that are possible with the &quot;storm&quot; command line client. To learn how to set up your &quot;storm&quot; client to talk to a remote cluster, follow the instructions in <a href="Setting-up-development-environment.html">Setting up development environment</a>.</p>
 
 <p>These commands are:</p>
 
@@ -411,7 +411,7 @@
 <p>Syntax: <code>storm help [command]</code></p>
 
 <p>Print one help message or list of available commands</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Common-patterns.html b/content/releases/1.0.6/Common-patterns.html
index ed5e979..20aea41 100644
--- a/content/releases/1.0.6/Common-patterns.html
+++ b/content/releases/1.0.6/Common-patterns.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page lists a variety of common patterns in Storm topologies.</p>
+<div class="documentation-content"><p>This page lists a variety of common patterns in Storm topologies.</p>
 
 <ol>
 <li>Streaming joins</li>
@@ -225,7 +225,7 @@
 <p><code>KeyedFairBolt</code> also wraps the bolt containing your logic and makes sure your topology processes multiple DRPC invocations at the same time, instead of doing them serially one at a time.</p>
 
 <p>See <a href="Distributed-RPC.html">Distributed RPC</a> for more details.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Concepts.html b/content/releases/1.0.6/Concepts.html
index 14aae54..7969af2 100644
--- a/content/releases/1.0.6/Concepts.html
+++ b/content/releases/1.0.6/Concepts.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page lists the main concepts of Storm and links to resources where you can find more information. The concepts discussed are:</p>
+<div class="documentation-content"><p>This page lists the main concepts of Storm and links to resources where you can find more information. The concepts discussed are:</p>
 
 <ol>
 <li>Topologies</li>
@@ -268,7 +268,7 @@
 <ul>
 <li><a href="javadocs/org/apache/storm/Config.html#TOPOLOGY_WORKERS">Config.TOPOLOGY_WORKERS</a>: this config sets the number of workers to allocate for executing the topology</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Configuration.html b/content/releases/1.0.6/Configuration.html
index 42cb905..8044370 100644
--- a/content/releases/1.0.6/Configuration.html
+++ b/content/releases/1.0.6/Configuration.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm has a variety of configurations for tweaking the behavior of nimbus, supervisors, and running topologies. Some configurations are system configurations and cannot be modified on a topology by topology basis, whereas other configurations can be modified per topology. </p>
+<div class="documentation-content"><p>Storm has a variety of configurations for tweaking the behavior of nimbus, supervisors, and running topologies. Some configurations are system configurations and cannot be modified on a topology by topology basis, whereas other configurations can be modified per topology. </p>
 
 <p>Every configuration has a default value defined in <a href="http://github.com/apache/storm/blob/v1.0.6/conf/defaults.yaml">defaults.yaml</a> in the Storm codebase. You can override these configurations by defining a storm.yaml in the classpath of Nimbus and the supervisors. Finally, you can define a topology-specific configuration that you submit along with your topology when using <a href="javadocs/org/apache/storm/StormSubmitter.html">StormSubmitter</a>. However, the topology-specific configuration can only override configs prefixed with &quot;TOPOLOGY&quot;.</p>
 
@@ -175,7 +175,7 @@
 <li><a href="Running-topologies-on-a-production-cluster.html">Running topologies on a production cluster</a>: lists useful configurations when running topologies on a cluster</li>
 <li><a href="Local-mode.html">Local mode</a>: lists useful configurations when using local mode</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Contributing-to-Storm.html b/content/releases/1.0.6/Contributing-to-Storm.html
index af3b3bc..887c574 100644
--- a/content/releases/1.0.6/Contributing-to-Storm.html
+++ b/content/releases/1.0.6/Contributing-to-Storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="getting-started-with-contributing">Getting started with contributing</h3>
+<div class="documentation-content"><h3 id="getting-started-with-contributing">Getting started with contributing</h3>
 
 <p>Some of the issues on the <a href="https://issues.apache.org/jira/browse/STORM">issue tracker</a> are marked with the <a href="https://issues.apache.org/jira/browse/STORM-2891?jql=project%20%3D%20STORM%20AND%20status%20%3D%20Open%20AND%20labels%20in%20(newbie%2C%20%22newbie%2B%2B%22)">&quot;Newbie&quot;</a> label. If you&#39;re interested in contributing to Storm but don&#39;t know where to begin, these are good issues to start with. These issues are a great way to get your feet wet with learning the codebase because they require learning about only an isolated portion of the codebase and are a relatively small amount of work.</p>
 
@@ -172,7 +172,7 @@
 <h3 id="contributing-documentation">Contributing documentation</h3>
 
 <p>Documentation contributions are very welcome! The best way to send contributions is as emails through the mailing list.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Creating-a-new-Storm-project.html b/content/releases/1.0.6/Creating-a-new-Storm-project.html
index f919cfa..251e502 100644
--- a/content/releases/1.0.6/Creating-a-new-Storm-project.html
+++ b/content/releases/1.0.6/Creating-a-new-Storm-project.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page outlines how to set up a Storm project for development. The steps are:</p>
+<div class="documentation-content"><p>This page outlines how to set up a Storm project for development. The steps are:</p>
 
 <ol>
 <li>Add Storm jars to classpath</li>
@@ -166,7 +166,7 @@
 <p>For more information on writing topologies in other languages, see <a href="Using-non-JVM-languages-with-Storm.html">Using non-JVM languages with Storm</a>.</p>
 
 <p>To test that everything is working in Eclipse, you should now be able to <code>Run</code> the <code>WordCountTopology.java</code> file. You will see messages being emitted at the console for 10 seconds.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/DSLs-and-multilang-adapters.html b/content/releases/1.0.6/DSLs-and-multilang-adapters.html
index d05295d..107b114 100644
--- a/content/releases/1.0.6/DSLs-and-multilang-adapters.html
+++ b/content/releases/1.0.6/DSLs-and-multilang-adapters.html
@@ -144,14 +144,14 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="https://github.com/velvia/ScalaStorm">Scala DSL</a></li>
 <li><a href="https://github.com/colinsurprenant/redstorm">JRuby DSL</a></li>
 <li><a href="Clojure-DSL.html">Clojure DSL</a></li>
 <li><a href="https://github.com/tomdz/storm-esper">Storm/Esper integration</a>: Streaming SQL on top of Storm</li>
 <li><a href="https://github.com/dan-blanchard/io-storm">io-storm</a>: Perl multilang adapter</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Daemon-Fault-Tolerance.html b/content/releases/1.0.6/Daemon-Fault-Tolerance.html
index 2aac7ee..c5c36af 100644
--- a/content/releases/1.0.6/Daemon-Fault-Tolerance.html
+++ b/content/releases/1.0.6/Daemon-Fault-Tolerance.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm has several different daemon processes.  Nimbus that schedules workers, supervisors that launch and kill workers, the log viewer that gives access to logs, and the UI that shows the status of a cluster.</p>
+<div class="documentation-content"><p>Storm has several different daemon processes: Nimbus, which schedules workers; supervisors, which launch and kill workers; the log viewer, which gives access to logs; and the UI, which shows the status of a cluster.</p>
 
 <h2 id="what-happens-when-a-worker-dies">What happens when a worker dies?</h2>
 
@@ -169,7 +169,7 @@
 <h2 id="how-does-storm-guarantee-data-processing">How does Storm guarantee data processing?</h2>
 
 <p>Storm provides mechanisms to guarantee data processing even if nodes die or messages are lost. See <a href="Guaranteeing-message-processing.html">Guaranteeing message processing</a> for the details.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Defining-a-non-jvm-language-dsl-for-storm.html b/content/releases/1.0.6/Defining-a-non-jvm-language-dsl-for-storm.html
index b8dfa1a..713d544 100644
--- a/content/releases/1.0.6/Defining-a-non-jvm-language-dsl-for-storm.html
+++ b/content/releases/1.0.6/Defining-a-non-jvm-language-dsl-for-storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The right place to start to learn how to make a non-JVM DSL for Storm is <a href="http://github.com/apache/storm/blob/v1.0.6/storm-core/src/storm.thrift">storm-core/src/storm.thrift</a>. Since Storm topologies are just Thrift structures, and Nimbus is a Thrift daemon, you can create and submit topologies in any language.</p>
+<div class="documentation-content"><p>The right place to start to learn how to make a non-JVM DSL for Storm is <a href="http://github.com/apache/storm/blob/v1.0.6/storm-core/src/storm.thrift">storm-core/src/storm.thrift</a>. Since Storm topologies are just Thrift structures, and Nimbus is a Thrift daemon, you can create and submit topologies in any language.</p>
 
 <p>When you create the Thrift structs for spouts and bolts, the code for the spout or bolt is specified in the ComponentObject struct:</p>
 <div class="highlight"><pre><code class="language-" data-lang="">union ComponentObject {
@@ -165,7 +165,7 @@
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kt">void</span> <span class="nf">submitTopology</span><span class="o">(</span><span class="mi">1</span><span class="o">:</span> <span class="n">string</span> <span class="n">name</span><span class="o">,</span> <span class="mi">2</span><span class="o">:</span> <span class="n">string</span> <span class="n">uploadedJarLocation</span><span class="o">,</span> <span class="mi">3</span><span class="o">:</span> <span class="n">string</span> <span class="n">jsonConf</span><span class="o">,</span> <span class="mi">4</span><span class="o">:</span> <span class="n">StormTopology</span> <span class="n">topology</span><span class="o">)</span> <span class="kd">throws</span> <span class="o">(</span><span class="mi">1</span><span class="o">:</span> <span class="n">AlreadyAliveException</span> <span class="n">e</span><span class="o">,</span> <span class="mi">2</span><span class="o">:</span> <span class="n">InvalidTopologyException</span> <span class="n">ite</span><span class="o">);</span>
 </code></pre></div>
 <p>Finally, one of the key things to do in a non-JVM DSL is make it easy to define the entire topology in one file (the bolts, spouts, and the definition of the topology).</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Distributed-RPC.html b/content/releases/1.0.6/Distributed-RPC.html
index f11dc24..f699d63 100644
--- a/content/releases/1.0.6/Distributed-RPC.html
+++ b/content/releases/1.0.6/Distributed-RPC.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The idea behind distributed RPC (DRPC) is to parallelize the computation of really intense functions on the fly using Storm. The Storm topology takes in as input a stream of function arguments, and it emits an output stream of the results for each of those function calls. </p>
+<div class="documentation-content"><p>The idea behind distributed RPC (DRPC) is to parallelize the computation of really intense functions on the fly using Storm. The Storm topology takes in as input a stream of function arguments, and it emits an output stream of the results for each of those function calls. </p>
 
 <p>DRPC is not so much a feature of Storm as it is a pattern expressed from Storm&#39;s primitives of streams, spouts, bolts, and topologies. DRPC could have been packaged as a separate library from Storm, but it&#39;s so useful that it&#39;s bundled with Storm.</p>
 
@@ -330,7 +330,7 @@
 <li>KeyedFairBolt for weaving the processing of multiple requests at the same time</li>
 <li>How to use <code>CoordinatedBolt</code> directly</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Eventlogging.html b/content/releases/1.0.6/Eventlogging.html
index dcd75e8..78eca14 100644
--- a/content/releases/1.0.6/Eventlogging.html
+++ b/content/releases/1.0.6/Eventlogging.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="introduction">Introduction</h1>
+<div class="documentation-content"><h1 id="introduction">Introduction</h1>
 
 <p>Topology event inspector provides the ability to view the tuples as it flows through different stages in a storm topology.
 This could be useful for inspecting the tuples emitted at a spout or a bolt in the topology pipeline while the topology is running, without stopping or redeploying the topology. The normal flow of tuples from the spouts to the bolts is not affected by turning on event logging.</p>
@@ -247,7 +247,7 @@
     */</span>
     <span class="kt">void</span> <span class="nf">close</span><span class="o">();</span>
 <span class="o">}</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/FAQ.html b/content/releases/1.0.6/FAQ.html
index 49e1a4d..73a0066 100644
--- a/content/releases/1.0.6/FAQ.html
+++ b/content/releases/1.0.6/FAQ.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="best-practices">Best Practices</h2>
+<div class="documentation-content"><h2 id="best-practices">Best Practices</h2>
 
 <h3 id="what-rules-of-thumb-can-you-give-me-for-configuring-storm-trident">What rules of thumb can you give me for configuring Storm+Trident?</h3>
 
@@ -276,7 +276,7 @@
 <li>When possible, make your process incremental: each value that comes in makes the answer more an more true. A Trident ReducerAggregator is an operator that takes a prior result and a set of new records and returns a new result. This lets the result be cached and serialized to a datastore; if a server drops off line for a day and then comes back with a full day&#39;s worth of data in a rush, the old results will be calmly retrieved and updated.</li>
 <li>Lambda architecture: Record all events into an archival store (S3, HBase, HDFS) on receipt. in the fast layer, once the time window is clear, process the bucket to get an actionable answer, and ignore everything older than the time window. Periodically run a global aggregation to calculate a &quot;correct&quot; answer.</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Fault-tolerance.html b/content/releases/1.0.6/Fault-tolerance.html
index 20551f4..0cdc4e3 100644
--- a/content/releases/1.0.6/Fault-tolerance.html
+++ b/content/releases/1.0.6/Fault-tolerance.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains the design details of Storm that make it a fault-tolerant system.</p>
+<div class="documentation-content"><p>This page explains the design details of Storm that make it a fault-tolerant system.</p>
 
 <h2 id="what-happens-when-a-worker-dies">What happens when a worker dies?</h2>
 
@@ -169,7 +169,7 @@
 <h2 id="how-does-storm-guarantee-data-processing">How does Storm guarantee data processing?</h2>
 
 <p>Storm provides mechanisms to guarantee data processing even if nodes die or messages are lost. See <a href="Guaranteeing-message-processing.html">Guaranteeing message processing</a> for the details.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Guaranteeing-message-processing.html b/content/releases/1.0.6/Guaranteeing-message-processing.html
index 0fc48a7..29c9423 100644
--- a/content/releases/1.0.6/Guaranteeing-message-processing.html
+++ b/content/releases/1.0.6/Guaranteeing-message-processing.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm offers several different levels of guaranteed message processing, including best effort, at least once, and exactly once through <a href="Trident-tutorial.html">Trident</a>.
+<div class="documentation-content"><p>Storm offers several different levels of guaranteed message processing, including best effort, at least once, and exactly once through <a href="Trident-tutorial.html">Trident</a>.
 This page describes how Storm can guarantee at least once processing.</p>
 
 <h3 id="what-does-it-mean-for-a-message-to-be-fully-processed">What does it mean for a message to be &quot;fully processed&quot;?</h3>
@@ -301,7 +301,7 @@
 <p>The second way is to remove reliability on a message by message basis. You can turn off tracking for an individual spout tuple by omitting a message id in the <code>SpoutOutputCollector.emit</code> method.</p>
 
 <p>Finally, if you don&#39;t care if a particular subset of the tuples downstream in the topology fail to be processed, you can emit them as unanchored tuples. Since they&#39;re not anchored to any spout tuples, they won&#39;t cause any spout tuples to fail if they aren&#39;t acked.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Hooks.html b/content/releases/1.0.6/Hooks.html
index 4707651..567923a 100644
--- a/content/releases/1.0.6/Hooks.html
+++ b/content/releases/1.0.6/Hooks.html
@@ -144,13 +144,13 @@
 
 <p class="post-meta"></p>
 
-<p>Storm provides hooks with which you can insert custom code to run on any number of events within Storm. You create a hook by extending the <a href="javadocs/org/apache/storm/hooks/BaseTaskHook.html">BaseTaskHook</a> class and overriding the appropriate method for the event you want to catch. There are two ways to register your hook:</p>
+<div class="documentation-content"><p>Storm provides hooks with which you can insert custom code to run on any number of events within Storm. You create a hook by extending the <a href="javadocs/org/apache/storm/hooks/BaseTaskHook.html">BaseTaskHook</a> class and overriding the appropriate method for the event you want to catch. There are two ways to register your hook:</p>
 
 <ol>
 <li>In the open method of your spout or prepare method of your bolt using the <a href="javadocs/org/apache/storm/task/TopologyContext.html#addTaskHook">TopologyContext</a> method.</li>
 <li>Through the Storm configuration using the <a href="javadocs/org/apache/storm/Config.html#TOPOLOGY_AUTO_TASK_HOOKS">&quot;topology.auto.task.hooks&quot;</a> config. These hooks are automatically registered in every spout or bolt, and are useful for doing things like integrating with a custom monitoring system.</li>
 </ol>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Implementation-docs.html b/content/releases/1.0.6/Implementation-docs.html
index 3811c0c..fed0756 100644
--- a/content/releases/1.0.6/Implementation-docs.html
+++ b/content/releases/1.0.6/Implementation-docs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This section of the wiki is dedicated to explaining how Storm is implemented. You should have a good grasp of how to use Storm before reading these sections. </p>
+<div class="documentation-content"><p>This section of the wiki is dedicated to explaining how Storm is implemented. You should have a good grasp of how to use Storm before reading these sections. </p>
 
 <ul>
 <li><a href="Structure-of-the-codebase.html">Structure of the codebase</a></li>
@@ -154,7 +154,7 @@
 <li><a href="nimbus-ha-design.html">Nimbus HA</a></li>
 <li><a href="storm-sql-internal.html">Storm SQL</a></li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Installing-native-dependencies.html b/content/releases/1.0.6/Installing-native-dependencies.html
index 89fdf74..5d7fff7 100644
--- a/content/releases/1.0.6/Installing-native-dependencies.html
+++ b/content/releases/1.0.6/Installing-native-dependencies.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The native dependencies are only needed on actual Storm clusters. When running Storm in local mode, Storm uses a pure Java messaging system so that you don&#39;t need to install native dependencies on your development machine.</p>
+<div class="documentation-content"><p>The native dependencies are only needed on actual Storm clusters. When running Storm in local mode, Storm uses a pure Java messaging system so that you don&#39;t need to install native dependencies on your development machine.</p>
 
 <p>Installing ZeroMQ and JZMQ is usually straightforward. Sometimes, however, people run into issues with autoconf and get strange errors. If you run into any issues, please email the <a href="http://groups.google.com/group/storm-user">Storm mailing list</a> or come get help in the #storm-user room on freenode. </p>
 
@@ -175,7 +175,7 @@
 </ol>
 
 <p>If you run into any errors when running <code>./configure</code>, <a href="http://stackoverflow.com/questions/3522248/how-do-i-compile-jzmq-for-zeromq-on-osx">this thread</a> may provide a solution.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Kestrel-and-Storm.html b/content/releases/1.0.6/Kestrel-and-Storm.html
index 0ef1966..dacc415 100644
--- a/content/releases/1.0.6/Kestrel-and-Storm.html
+++ b/content/releases/1.0.6/Kestrel-and-Storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains how to use Storm to consume items from a Kestrel cluster.</p>
+<div class="documentation-content"><p>This page explains how to use Storm to consume items from a Kestrel cluster.</p>
 
 <h2 id="preliminaries">Preliminaries</h2>
 
@@ -334,7 +334,7 @@
 Now execute the program to add items to the queue and launch the Storm topology. The order in which you launch the programs is of no importance.
 
 If you run the topology with TOPOLOGY_DEBUG you should see tuples being emitted in the topology.
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Lifecycle-of-a-topology.html b/content/releases/1.0.6/Lifecycle-of-a-topology.html
index 2999f9f..85fcd42 100644
--- a/content/releases/1.0.6/Lifecycle-of-a-topology.html
+++ b/content/releases/1.0.6/Lifecycle-of-a-topology.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>(<strong>NOTE</strong>: this page is based on the 0.7.1 code; many things have changed since then, including a split between tasks and executors, and a reorganization of the code under <code>storm-core/src</code> rather than <code>src/</code>.)</p>
+<div class="documentation-content"><p>(<strong>NOTE</strong>: this page is based on the 0.7.1 code; many things have changed since then, including a split between tasks and executors, and a reorganization of the code under <code>storm-core/src</code> rather than <code>src/</code>.)</p>
 
 <p>This page explains in detail the lifecycle of a topology from running the &quot;storm jar&quot; command to uploading the topology to Nimbus to the supervisors starting/stopping workers to workers and tasks setting themselves up. It also explains how Nimbus monitors topologies and how topologies are shutdown when they are killed.</p>
 
@@ -261,7 +261,7 @@
 <li>Removing a topology cleans out the assignment and static information from ZK <a href="https://github.com/apache/storm/blob/0.7.1/src/clj/org/apache/storm/daemon/nimbus.clj#L116">code</a></li>
 <li>A separate cleanup thread runs the <code>do-cleanup</code> function which will clean up the heartbeat dir and the jars/configs stored locally. <a href="https://github.com/apache/storm/blob/0.7.1/src/clj/org/apache/storm/daemon/nimbus.clj#L577">code</a></li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Local-mode.html b/content/releases/1.0.6/Local-mode.html
index 11db27e..e2a1839 100644
--- a/content/releases/1.0.6/Local-mode.html
+++ b/content/releases/1.0.6/Local-mode.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Local mode simulates a Storm cluster in process and is useful for developing and testing topologies. Running topologies in local mode is similar to running topologies <a href="Running-topologies-on-a-production-cluster.html">on a cluster</a>. </p>
+<div class="documentation-content"><p>Local mode simulates a Storm cluster in process and is useful for developing and testing topologies. Running topologies in local mode is similar to running topologies <a href="Running-topologies-on-a-production-cluster.html">on a cluster</a>. </p>
 
 <p>To create an in-process cluster, simply use the <code>LocalCluster</code> class. For example:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kn">import</span> <span class="nn">org.apache.storm.LocalCluster</span><span class="o">;</span>
@@ -164,7 +164,7 @@
 <li><strong>Config.TOPOLOGY_MAX_TASK_PARALLELISM</strong>: This config puts a ceiling on the number of threads spawned for a single component. Oftentimes production topologies have a lot of parallelism (hundreds of threads), which places unreasonable load when trying to test the topology in local mode. This config lets you easily control that parallelism.</li>
 <li><strong>Config.TOPOLOGY_DEBUG</strong>: When this is set to true, Storm will log a message every time a tuple is emitted from any spout or bolt. This is extremely useful for debugging.</li>
 </ol>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Logs.html b/content/releases/1.0.6/Logs.html
index f9a7637..266141a 100644
--- a/content/releases/1.0.6/Logs.html
+++ b/content/releases/1.0.6/Logs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Logs in Storm are essential for tracking the status, operations, error messages and debug information for all the 
+<div class="documentation-content"><p>Logs in Storm are essential for tracking the status, operations, error messages and debug information for all the 
 daemons (e.g., nimbus, supervisor, logviewer, drpc, ui, pacemaker) and topologies&#39; workers.</p>
 
 <h3 id="location-of-the-logs">Location of the Logs</h3>
@@ -171,7 +171,7 @@
 <p>Search in a topology: a user can also search for a string within a certain topology by clicking the magnifying-lens icon at the top right corner of the UI page. The UI will then search all the supervisor nodes in a distributed way to find the matched string in all logs for this topology. The search can cover either normal text log files or rolled zip log files by checking/unchecking the &quot;Search archived logs:&quot; box. The matched results are shown on the UI with URL links, directing the user to the corresponding logs on each supervisor node. This feature is very helpful for finding problematic supervisor nodes running this topology.</p>
 
 <p><img src="images/search-a-topology.png" alt="Search in a topology" title="Search in a topology"></p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Maven.html b/content/releases/1.0.6/Maven.html
index 810d427..39ea13e 100644
--- a/content/releases/1.0.6/Maven.html
+++ b/content/releases/1.0.6/Maven.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>To develop topologies, you&#39;ll need the Storm jars on your classpath. You should either include the unpacked jars in the classpath for your project or use Maven to include Storm as a development dependency. Storm is hosted on Maven Central. To include Storm in your project as a development dependency, add the following to your pom.xml:</p>
+<div class="documentation-content"><p>To develop topologies, you&#39;ll need the Storm jars on your classpath. You should either include the unpacked jars in the classpath for your project or use Maven to include Storm as a development dependency. Storm is hosted on Maven Central. To include Storm in your project as a development dependency, add the following to your pom.xml:</p>
 <div class="highlight"><pre><code class="language-xml" data-lang="xml"><span class="nt">&lt;dependency&gt;</span>
   <span class="nt">&lt;groupId&gt;</span>org.apache.storm<span class="nt">&lt;/groupId&gt;</span>
   <span class="nt">&lt;artifactId&gt;</span>storm-core<span class="nt">&lt;/artifactId&gt;</span>
@@ -157,7 +157,7 @@
 <h3 id="developing-storm">Developing Storm</h3>
 
 <p>Please refer to <a href="http://github.com/apache/storm/blob/v1.0.6/DEVELOPER.md">DEVELOPER.md</a> for more details.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Message-passing-implementation.html b/content/releases/1.0.6/Message-passing-implementation.html
index 8170a39..9f33ef0 100644
--- a/content/releases/1.0.6/Message-passing-implementation.html
+++ b/content/releases/1.0.6/Message-passing-implementation.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>(Note: this walkthrough is out of date as of 0.8.0. 0.8.0 revamped the message passing infrastructure to be based on the Disruptor)</p>
+<div class="documentation-content"><p>(Note: this walkthrough is out of date as of 0.8.0. 0.8.0 revamped the message passing infrastructure to be based on the Disruptor)</p>
 
 <p>This page walks through how emitting and transferring tuples works in Storm.</p>
 
@@ -186,7 +186,7 @@
 </ul></li>
 </ul></li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Metrics.html b/content/releases/1.0.6/Metrics.html
index d90c0fd..2ff22c6 100644
--- a/content/releases/1.0.6/Metrics.html
+++ b/content/releases/1.0.6/Metrics.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm exposes a metrics interface to report summary statistics across the full topology.
+<div class="documentation-content"><p>Storm exposes a metrics interface to report summary statistics across the full topology.
 It&#39;s used internally to track the numbers you see in the Nimbus UI console: counts of executes and acks; average process latency per bolt; worker heap usage; and so forth.</p>
 
 <h3 id="metric-types">Metric Types</h3>
@@ -255,7 +255,7 @@
 <p>The <a href="http://github.com/apache/storm/blob/v1.0.6/storm-core/src/clj/org/apache/storm/daemon/builtin_metrics.clj">builtin metrics</a> instrument Storm itself.</p>
 
 <p><a href="http://github.com/apache/storm/blob/v1.0.6/storm-core/src/clj/org/apache/storm/daemon/builtin_metrics.clj">builtin_metrics.clj</a> sets up data structures for the built-in metrics, and facade methods that the other framework components can use to update them. The metrics themselves are calculated in the calling code -- see for example <a href="http://github.com/apache/storm/blob/v1.0.6/storm-core/src/clj/org/apache/storm/daemon/executor.clj#358"><code>ack-spout-msg</code></a> in <code>clj/b/s/daemon/executor.clj</code></p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Multilang-protocol.html b/content/releases/1.0.6/Multilang-protocol.html
index 8bfa617..b430d39 100644
--- a/content/releases/1.0.6/Multilang-protocol.html
+++ b/content/releases/1.0.6/Multilang-protocol.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains the multilang protocol as of Storm 0.7.1. Versions prior to 0.7.1 used a somewhat different protocol, documented [here](Storm-multi-language-protocol-(versions-0.7.0-and-below).html).</p>
+<div class="documentation-content"><p>This page explains the multilang protocol as of Storm 0.7.1. Versions prior to 0.7.1 used a somewhat different protocol, documented <a href="Storm-multi-language-protocol-(versions-0.7.0-and-below).html">here</a>.</p>
 
 <h1 id="storm-multi-language-protocol">Storm Multi-Language Protocol</h1>
 
@@ -404,7 +404,7 @@
 </code></pre></div>
 <p>When subprocess receives heartbeat tuple, it must send a <code>sync</code> command back to
 ShellBolt.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Pacemaker.html b/content/releases/1.0.6/Pacemaker.html
index fa2d6fa..8d18f28 100644
--- a/content/releases/1.0.6/Pacemaker.html
+++ b/content/releases/1.0.6/Pacemaker.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="introduction">Introduction</h3>
+<div class="documentation-content"><h3 id="introduction">Introduction</h3>
 
 <p>Pacemaker is a Storm daemon designed to process heartbeats from workers. As Storm is scaled up, ZooKeeper begins to become a bottleneck due to the high volume of writes from workers doing heartbeats. Many disk writes and a great deal of network traffic are generated as ZooKeeper tries to maintain consistency.</p>
 
@@ -253,7 +253,7 @@
 <p>There is an easy route to HA for Pacemaker. Unlike ZooKeeper, Pacemaker should be able to scale horizontally without overhead. By contrast, with ZooKeeper, there are diminishing returns when adding ZK nodes.</p>
 
 <p>In short, a single Pacemaker node should be able to handle many times the load that a ZooKeeper cluster can, and future HA work allowing horizontal scaling will increase that even further.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Powered-By.html b/content/releases/1.0.6/Powered-By.html
index 229c5a7..06c4e04 100644
--- a/content/releases/1.0.6/Powered-By.html
+++ b/content/releases/1.0.6/Powered-By.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Want to be added to this page? Send an email <a href="mailto:nathan.marz@gmail.com">here</a>.</p>
+<div class="documentation-content"><p>Want to be added to this page? Send an email <a href="mailto:nathan.marz@gmail.com">here</a>.</p>
 
 <table>
 
@@ -1169,7 +1169,7 @@
 
 
 </table>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Project-ideas.html b/content/releases/1.0.6/Project-ideas.html
index abdec39..05e6b93 100644
--- a/content/releases/1.0.6/Project-ideas.html
+++ b/content/releases/1.0.6/Project-ideas.html
@@ -144,12 +144,12 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><strong>DSLs for non-JVM languages:</strong> These DSLs should be all-inclusive and not require any Java for the creation of topologies, spouts, or bolts. Since topologies are <a href="http://thrift.apache.org/">Thrift</a> structs, Nimbus is a Thrift service, and bolts can be written in any language, this is possible.</li>
 <li><strong>Online machine learning algorithms:</strong> Something like <a href="http://mahout.apache.org/">Mahout</a> but for online algorithms</li>
 <li><strong>Suite of performance benchmarks:</strong> These benchmarks should test Storm&#39;s performance on CPU and IO intensive workloads. There should be benchmarks for different classes of applications, such as stream processing (where throughput is the priority) and distributed RPC (where latency is the priority). </li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Rationale.html b/content/releases/1.0.6/Rationale.html
index d21caf8..5c1ad13 100644
--- a/content/releases/1.0.6/Rationale.html
+++ b/content/releases/1.0.6/Rationale.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The past decade has seen a revolution in data processing. MapReduce, Hadoop, and related technologies have made it possible to store and process data at scales previously unthinkable. Unfortunately, these data processing technologies are not realtime systems, nor are they meant to be. There&#39;s no hack that will turn Hadoop into a realtime system; realtime data processing has a fundamentally different set of requirements than batch processing.</p>
+<div class="documentation-content"><p>The past decade has seen a revolution in data processing. MapReduce, Hadoop, and related technologies have made it possible to store and process data at scales previously unthinkable. Unfortunately, these data processing technologies are not realtime systems, nor are they meant to be. There&#39;s no hack that will turn Hadoop into a realtime system; realtime data processing has a fundamentally different set of requirements than batch processing.</p>
 
 <p>However, realtime data processing at massive scale is becoming more and more of a requirement for businesses. The lack of a &quot;Hadoop of realtime&quot; has become the biggest hole in the data processing ecosystem.</p>
 
@@ -176,7 +176,7 @@
 <li><strong>Fault-tolerant</strong>: If there are faults during execution of your computation, Storm will reassign tasks as necessary. Storm makes sure that a computation can run forever (or until you kill the computation).</li>
 <li><strong>Programming language agnostic</strong>: Robust and scalable realtime processing shouldn&#39;t be limited to a single platform. Storm topologies and processing components can be defined in any language, making Storm accessible to nearly anyone.</li>
 </ol>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Resource_Aware_Scheduler_overview.html b/content/releases/1.0.6/Resource_Aware_Scheduler_overview.html
index d06a9e3..2f51750 100644
--- a/content/releases/1.0.6/Resource_Aware_Scheduler_overview.html
+++ b/content/releases/1.0.6/Resource_Aware_Scheduler_overview.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="introduction">Introduction</h1>
+<div class="documentation-content"><h1 id="introduction">Introduction</h1>
 
 <p>The purpose of this document is to provide a description of the Resource Aware Scheduler for the Storm distributed real-time computation system.  This document will provide you with a high-level description of the resource aware scheduler in Storm.</p>
 
@@ -364,7 +364,7 @@
 <p>We should never evict a topology from a user that does not have his or her resource guarantees satisfied.  The following flow chart should describe the logic for the eviction process.</p>
 
 <p><img src="images/resource_aware_scheduler_default_eviction_strategy.svg" alt="Viewing metrics with VisualVM"></p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Running-topologies-on-a-production-cluster.html b/content/releases/1.0.6/Running-topologies-on-a-production-cluster.html
index 084780c..e107a08 100644
--- a/content/releases/1.0.6/Running-topologies-on-a-production-cluster.html
+++ b/content/releases/1.0.6/Running-topologies-on-a-production-cluster.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Running topologies on a production cluster is similar to running in <a href="Local-mode.html">Local mode</a>. Here are the steps:</p>
+<div class="documentation-content"><p>Running topologies on a production cluster is similar to running in <a href="Local-mode.html">Local mode</a>. Here are the steps:</p>
 
 <p>1) Define the topology (Use <a href="javadocs/org/apache/storm/topology/TopologyBuilder.html">TopologyBuilder</a> if defining using Java)</p>
 
@@ -212,7 +212,7 @@
 <p>The best place to monitor a topology is using the Storm UI. The Storm UI provides information about errors happening in tasks and fine-grained stats on the throughput and latency performance of each component of each running topology.</p>
 
 <p>You can also look at the worker logs on the cluster machines.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/SECURITY.html b/content/releases/1.0.6/SECURITY.html
index 0b9f5de..c2dcd5f 100644
--- a/content/releases/1.0.6/SECURITY.html
+++ b/content/releases/1.0.6/SECURITY.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="running-apache-storm-securely">Running Apache Storm Securely</h1>
+<div class="documentation-content"><h1 id="running-apache-storm-securely">Running Apache Storm Securely</h1>
 
 <p>Apache Storm offers a range of configuration options when trying to secure
 your cluster.  By default all authentication and authorization is disabled but 
@@ -680,7 +680,7 @@
  | storm.zookeeper.topology.auth.payload | A string representing the payload for topology Zookeeper authentication. |</p>
 
 <p>Note: If storm.zookeeper.topology.auth.payload isn&#39;t set, Storm will generate a ZooKeeper secret payload for MD5-digest with the generateZookeeperDigestSecretPayload() method.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/STORM-UI-REST-API.html b/content/releases/1.0.6/STORM-UI-REST-API.html
index deb9bb0..3513378 100644
--- a/content/releases/1.0.6/STORM-UI-REST-API.html
+++ b/content/releases/1.0.6/STORM-UI-REST-API.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The Storm UI daemon provides a REST API that allows you to interact with a Storm cluster, which includes retrieving
+<div class="documentation-content"><p>The Storm UI daemon provides a REST API that allows you to interact with a Storm cluster, which includes retrieving
 metrics data and configuration information as well as management operations such as starting or stopping topologies.</p>
 
 <h1 id="data-format">Data format</h1>
@@ -1857,7 +1857,7 @@
   </span><span class="s2">"error"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Internal Server Error"</span><span class="p">,</span><span class="w">
   </span><span class="s2">"errorMessage"</span><span class="p">:</span><span class="w"> </span><span class="s2">"java.lang.NullPointerException</span><span class="se">\n\t</span><span class="s2">at clojure.core$name.invoke(core.clj:1505)</span><span class="se">\n\t</span><span class="s2">at org.apache.storm.ui.core$component_page.invoke(core.clj:752)</span><span class="se">\n\t</span><span class="s2">at org.apache.storm.ui.core$fn__7766.invoke(core.clj:782)</span><span class="se">\n\t</span><span class="s2">at compojure.core$make_route$fn__5755.invoke(core.clj:93)</span><span class="se">\n\t</span><span class="s2">at compojure.core$if_route$fn__5743.invoke(core.clj:39)</span><span class="se">\n\t</span><span class="s2">at compojure.core$if_method$fn__5736.invoke(core.clj:24)</span><span class="se">\n\t</span><span class="s2">at compojure.core$routing$fn__5761.invoke(core.clj:106)</span><span class="se">\n\t</span><span class="s2">at clojure.core$some.invoke(core.clj:2443)</span><span class="se">\n\t</span><span class="s2">at compojure.core$routing.doInvoke(core.clj:106)</span><span class="se">\n\t</span><span class="s2">at clojure.lang.RestFn.applyTo(RestFn.java:139)</span><span class="se">\n\t</span><span class="s2">at clojure.core$apply.invoke(core.clj:619)</span><span class="se">\n\t</span><span class="s2">at compojure.core$routes$fn__5765.invoke(core.clj:111)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.reload$wrap_reload$fn__6880.invoke(reload.clj:14)</span><span class="se">\n\t</span><span class="s2">at org.apache.storm.ui.core$catch_errors$fn__7800.invoke(core.clj:836)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.keyword_params$wrap_keyword_params$fn__6319.invoke(keyword_params.clj:27)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.nested_params$wrap_nested_params$fn__6358.invoke(nested_params.clj:65)</span><span class="se">\n\t</span><span class="s2">at 
ring.middleware.params$wrap_params$fn__6291.invoke(params.clj:55)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.multipart_params$wrap_multipart_params$fn__6386.invoke(multipart_params.clj:103)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.flash$wrap_flash$fn__6675.invoke(flash.clj:14)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.session$wrap_session$fn__6664.invoke(session.clj:43)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.cookies$wrap_cookies$fn__6595.invoke(cookies.clj:160)</span><span class="se">\n\t</span><span class="s2">at ring.adapter.jetty$proxy_handler$fn__6112.invoke(jetty.clj:16)</span><span class="se">\n\t</span><span class="s2">at ring.adapter.jetty.proxy$org.mortbay.jetty.handler.AbstractHandler$0.handle(Unknown Source)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.Server.handle(Server.java:326)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)</span><span class="se">\n</span><span 
class="s2">"</span><span class="w">
 </span><span class="p">}</span><span class="w">
-</span></code></pre></div>
+</span></code></pre></div></div>
 
 
 	          </div>
diff --git "a/content/releases/1.0.6/Serialization-\050prior-to-0.6.0\051.html" "b/content/releases/1.0.6/Serialization-\050prior-to-0.6.0\051.html"
index e6da1a9..0c392ab 100644
--- "a/content/releases/1.0.6/Serialization-\050prior-to-0.6.0\051.html"
+++ "b/content/releases/1.0.6/Serialization-\050prior-to-0.6.0\051.html"
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Tuples can be comprised of objects of any types. Since Storm is a distributed system, it needs to know how to serialize and deserialize objects when they&#39;re passed between tasks. By default Storm can serialize ints, shorts, longs, floats, doubles, bools, bytes, strings, and byte arrays, but if you want to use another type in your tuples, you&#39;ll need to implement a custom serializer.</p>
+<div class="documentation-content"><p>Tuples can be comprised of objects of any type. Since Storm is a distributed system, it needs to know how to serialize and deserialize objects when they&#39;re passed between tasks. By default Storm can serialize ints, shorts, longs, floats, doubles, bools, bytes, strings, and byte arrays, but if you want to use another type in your tuples, you&#39;ll need to implement a custom serializer.</p>
 
 <h3 id="dynamic-typing">Dynamic typing</h3>
 
@@ -188,7 +188,7 @@
 <p>Storm provides helpers for registering serializers in a topology config. The <a href="javadocs/backtype/storm/Config.html">Config</a> class has a method called <code>addSerialization</code> that takes in a serializer class to add to the config.</p>
 
 <p>There&#39;s an advanced config called Config.TOPOLOGY_SKIP_MISSING_SERIALIZATIONS. If you set this to true, Storm will ignore any serializations that are registered but do not have their code available on the classpath. Otherwise, Storm will throw errors when it can&#39;t find a serialization. This is useful if you run many topologies on a cluster that each have different serializations, but you want to declare all the serializations across all topologies in the <code>storm.yaml</code> files.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Serialization.html b/content/releases/1.0.6/Serialization.html
index 4ea4822..f596ed9 100644
--- a/content/releases/1.0.6/Serialization.html
+++ b/content/releases/1.0.6/Serialization.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page is about how the serialization system in Storm works for versions 0.6.0 and onwards. Storm used a different serialization system prior to 0.6.0 which is documented on <a href="Serialization-(prior-to-0.6.0).html">Serialization (prior to 0.6.0)</a>. </p>
+<div class="documentation-content"><p>This page is about how the serialization system in Storm works for versions 0.6.0 and onwards. Storm used a different serialization system prior to 0.6.0 which is documented on <a href="Serialization-(prior-to-0.6.0).html">Serialization (prior to 0.6.0)</a>. </p>
 
 <p>Tuples can be comprised of objects of any type. Since Storm is a distributed system, it needs to know how to serialize and deserialize objects when they&#39;re passed between tasks.</p>
 
@@ -200,7 +200,7 @@
 <p>When a topology is submitted, a single set of serializations is chosen to be used by all components in the topology for sending messages. This is done by merging the component-specific serializer registrations with the regular set of serialization registrations. If two components define serializers for the same class, one of the serializers is chosen arbitrarily.</p>
 
 <p>To force a serializer for a particular class if there&#39;s a conflict between two component-specific registrations, just define the serializer you want to use in the topology-specific configuration. The topology-specific configuration has precedence over component-specific configurations for serialization registrations.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Serializers.html b/content/releases/1.0.6/Serializers.html
index 218a67c..521f358 100644
--- a/content/releases/1.0.6/Serializers.html
+++ b/content/releases/1.0.6/Serializers.html
@@ -144,10 +144,10 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="https://github.com/rapportive-oss/storm-json">storm-json</a>: Simple JSON serializer for Storm</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Setting-up-a-Storm-cluster.html b/content/releases/1.0.6/Setting-up-a-Storm-cluster.html
index 50ab855..fdf97e0 100644
--- a/content/releases/1.0.6/Setting-up-a-Storm-cluster.html
+++ b/content/releases/1.0.6/Setting-up-a-Storm-cluster.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page outlines the steps for getting a Storm cluster up and running. If you&#39;re on AWS, you should check out the <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> project. <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> completely automates the provisioning, configuration, and installation of Storm clusters on EC2. It also sets up Ganglia for you so you can monitor CPU, disk, and network usage.</p>
+<div class="documentation-content"><p>This page outlines the steps for getting a Storm cluster up and running. If you&#39;re on AWS, you should check out the <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> project. <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> completely automates the provisioning, configuration, and installation of Storm clusters on EC2. It also sets up Ganglia for you so you can monitor CPU, disk, and network usage.</p>
 
 <p>If you run into difficulties with your Storm cluster, first check whether a solution is in the <a href="Troubleshooting.html">Troubleshooting</a> page. Otherwise, email the mailing list.</p>
 
@@ -246,7 +246,7 @@
 </ol>
 
 <p>As you can see, running the daemons is very straightforward. The daemons will log to the logs/ directory under wherever you extracted the Storm release.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Setting-up-development-environment.html b/content/releases/1.0.6/Setting-up-development-environment.html
index a369930..9abaecd 100644
--- a/content/releases/1.0.6/Setting-up-development-environment.html
+++ b/content/releases/1.0.6/Setting-up-development-environment.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page outlines what you need to do to get a Storm development environment set up. In summary, the steps are:</p>
+<div class="documentation-content"><p>This page outlines what you need to do to get a Storm development environment set up. In summary, the steps are:</p>
 
 <ol>
 <li>Download a <a href="..//downloads.html">Storm release</a> , unpack it, and put the unpacked <code>bin/</code> directory on your PATH</li>
@@ -171,7 +171,7 @@
 
 <p>The previous step installed the <code>storm</code> client on your machine, which is used to communicate with remote Storm clusters. Now you just need to tell the client which Storm cluster to talk to: put the host address of the master in the <code>~/.storm/storm.yaml</code> file. It should look something like this:</p>
 <div class="highlight"><pre><code class="language-" data-lang="">nimbus.seeds: ["123.45.678.890"]
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Spout-implementations.html b/content/releases/1.0.6/Spout-implementations.html
index c6ef895..7db2939 100644
--- a/content/releases/1.0.6/Spout-implementations.html
+++ b/content/releases/1.0.6/Spout-implementations.html
@@ -144,14 +144,14 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="https://github.com/nathanmarz/storm-kestrel">storm-kestrel</a>: Adapter to use Kestrel as a spout</li>
 <li><a href="https://github.com/rapportive-oss/storm-amqp-spout">storm-amqp-spout</a>: Adapter to use AMQP source as a spout</li>
 <li><a href="https://github.com/ptgoetz/storm-jms">storm-jms</a>: Adapter to use a JMS source as a spout</li>
 <li><a href="https://github.com/sorenmacbeth/storm-redis-pubsub">storm-redis-pubsub</a>: A spout that subscribes to a Redis pubsub stream</li>
 <li><a href="https://github.com/haitaoyao/storm-beanstalkd-spout">storm-beanstalkd-spout</a>: A spout that subscribes to a beanstalkd queue</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/State-checkpointing.html b/content/releases/1.0.6/State-checkpointing.html
index c429fe4..34ca603 100644
--- a/content/releases/1.0.6/State-checkpointing.html
+++ b/content/releases/1.0.6/State-checkpointing.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="state-support-in-core-storm">State support in core storm</h1>
+<div class="documentation-content"><h1 id="state-support-in-core-storm">State support in core storm</h1>
 
 <p>Storm core has abstractions for bolts to save and retrieve the state of its operations. There is a default in-memory
 based state implementation and also a Redis backed implementation that provides state persistence.</p>
@@ -302,7 +302,7 @@
 a `StateProvider` implementation which can load and return the state based on the namespace. Each state belongs to a unique namespace.
 The namespace is typically unique per task so that each task can have its own state. The StateProvider and the corresponding
 State implementation should be available in the class path of Storm (by placing them in the extlib directory).
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Storm-Scheduler.html b/content/releases/1.0.6/Storm-Scheduler.html
index 39adc6f..b3871cd 100644
--- a/content/releases/1.0.6/Storm-Scheduler.html
+++ b/content/releases/1.0.6/Storm-Scheduler.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm now has 4 kinds of built-in schedulers: <a href="http://github.com/apache/storm/blob/v1.0.6/storm-core/src/clj/org/apache/storm/scheduler/DefaultScheduler.clj">DefaultScheduler</a>, <a href="http://github.com/apache/storm/blob/v1.0.6/storm-core/src/clj/org/apache/storm/scheduler/IsolationScheduler.clj">IsolationScheduler</a>, <a href="http://github.com/apache/storm/blob/v1.0.6/storm-core/src/jvm/org/apache/storm/scheduler/multitenant/MultitenantScheduler.java">MultitenantScheduler</a>, <a href="Resource_Aware_Scheduler_overview.html">ResourceAwareScheduler</a>. </p>
+<div class="documentation-content"><p>Storm now has 4 kinds of built-in schedulers: <a href="http://github.com/apache/storm/blob/v1.0.6/storm-core/src/clj/org/apache/storm/scheduler/DefaultScheduler.clj">DefaultScheduler</a>, <a href="http://github.com/apache/storm/blob/v1.0.6/storm-core/src/clj/org/apache/storm/scheduler/IsolationScheduler.clj">IsolationScheduler</a>, <a href="http://github.com/apache/storm/blob/v1.0.6/storm-core/src/jvm/org/apache/storm/scheduler/multitenant/MultitenantScheduler.java">MultitenantScheduler</a>, <a href="Resource_Aware_Scheduler_overview.html">ResourceAwareScheduler</a>. </p>
 
 <h2 id="pluggable-scheduler">Pluggable scheduler</h2>
 
@@ -163,7 +163,7 @@
 <p>Any topologies submitted to the cluster not listed there will not be isolated. Note that there is no way for a user of Storm to affect their isolation settings – this is only allowed by the administrator of the cluster (this is very much intentional).</p>
 
 <p>The isolation scheduler solves the multi-tenancy problem – avoiding resource contention between topologies – by providing full isolation between topologies. The intention is that &quot;productionized&quot; topologies should be listed in the isolation config, and test or in-development topologies should not. The remaining machines on the cluster serve the dual role of failover for isolated topologies and for running the non-isolated topologies.</p>
-
+</div>
 
 
 	          </div>
diff --git "a/content/releases/1.0.6/Storm-multi-language-protocol-\050versions-0.7.0-and-below\051.html" "b/content/releases/1.0.6/Storm-multi-language-protocol-\050versions-0.7.0-and-below\051.html"
index 7a40d96..b0e9805 100644
--- "a/content/releases/1.0.6/Storm-multi-language-protocol-\050versions-0.7.0-and-below\051.html"
+++ "b/content/releases/1.0.6/Storm-multi-language-protocol-\050versions-0.7.0-and-below\051.html"
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains the multilang protocol for versions 0.7.0 and below. The protocol changed in version 0.7.1.</p>
+<div class="documentation-content"><p>This page explains the multilang protocol for versions 0.7.0 and below. The protocol changed in version 0.7.1.</p>
 
 <h1 id="storm-multi-language-protocol">Storm Multi-Language Protocol</h1>
 
@@ -253,7 +253,7 @@
 <p>Note: This command is not JSON-encoded; it is sent as a simple string.</p>
 
 <p>This lets the parent bolt know that the script has finished processing and is ready for another tuple.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Structure-of-the-codebase.html b/content/releases/1.0.6/Structure-of-the-codebase.html
index c749f0b..f690c84 100644
--- a/content/releases/1.0.6/Structure-of-the-codebase.html
+++ b/content/releases/1.0.6/Structure-of-the-codebase.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>There are three distinct layers to Storm&#39;s codebase.</p>
+<div class="documentation-content"><p>There are three distinct layers to Storm&#39;s codebase.</p>
 
 <p>First, Storm was designed from the very beginning to be compatible with multiple languages. Nimbus is a Thrift service and topologies are defined as Thrift structures. The usage of Thrift allows Storm to be used from any language.</p>
 
@@ -287,7 +287,7 @@
 <p><a href="http://github.com/apache/storm/blob/v1.0.6/storm-core/src/clj/org/apache/storm/util.clj">org.apache.storm.util</a>: Contains generic utility functions used throughout the code base.</p>
 
 <p><a href="http://github.com/apache/storm/blob/v1.0.6/storm-core/src/clj/org/apache/storm/zookeeper.clj">org.apache.storm.zookeeper</a>: Clojure wrapper around the Zookeeper API that also implements some &quot;high-level&quot; operations like &quot;mkdirs&quot; and &quot;delete-recursive&quot;.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Support-for-non-java-languages.html b/content/releases/1.0.6/Support-for-non-java-languages.html
index 16efdb7..486ed89 100644
--- a/content/releases/1.0.6/Support-for-non-java-languages.html
+++ b/content/releases/1.0.6/Support-for-non-java-languages.html
@@ -144,13 +144,13 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="https://github.com/velvia/ScalaStorm">Scala DSL</a></li>
 <li><a href="https://github.com/colinsurprenant/storm-jruby">JRuby DSL</a></li>
 <li><a href="Clojure-DSL.html">Clojure DSL</a></li>
 <li><a href="https://github.com/gphat/io-storm">io-storm</a>: Perl multilang adapter</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Transactional-topologies.html b/content/releases/1.0.6/Transactional-topologies.html
index 3d61ef4..d0b9dae 100644
--- a/content/releases/1.0.6/Transactional-topologies.html
+++ b/content/releases/1.0.6/Transactional-topologies.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p><strong>NOTE</strong>: Transactional topologies have been deprecated -- use the <a href="Trident-tutorial.html">Trident</a> framework instead.</p>
+<div class="documentation-content"><p><strong>NOTE</strong>: Transactional topologies have been deprecated -- use the <a href="Trident-tutorial.html">Trident</a> framework instead.</p>
 
 <hr>
 
@@ -510,7 +510,7 @@
 <li>so it can&#39;t call finishBatch until it has received all tuples from all subscribed components AND it has received the commit stream tuple (for committers). This ensures that it can&#39;t prematurely call finishBatch</li>
 </ul></li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Trident-API-Overview.html b/content/releases/1.0.6/Trident-API-Overview.html
index 40538ad..a787d91 100644
--- a/content/releases/1.0.6/Trident-API-Overview.html
+++ b/content/releases/1.0.6/Trident-API-Overview.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The core data model in Trident is the &quot;Stream&quot;, processed as a series of batches. A stream is partitioned among the nodes in the cluster, and operations applied to a stream are applied in parallel across each partition.</p>
+<div class="documentation-content"><p>The core data model in Trident is the &quot;Stream&quot;, processed as a series of batches. A stream is partitioned among the nodes in the cluster, and operations applied to a stream are applied in parallel across each partition.</p>
 
 <p>There are five kinds of operations in Trident:</p>
 
@@ -663,7 +663,7 @@
 <p>You might be wondering – how do you do something like a &quot;windowed join&quot;, where tuples from one side of the join are joined against the last hour of tuples from the other side?</p>
 
 <p>To do this, you would make use of partitionPersist and stateQuery. The last hour of tuples from one side of the join would be stored and rotated in a source of state, keyed by the join field. Then the stateQuery would do lookups by the join field to perform the &quot;join&quot;.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Trident-RAS-API.html b/content/releases/1.0.6/Trident-RAS-API.html
index c5f45b4..ca87add 100644
--- a/content/releases/1.0.6/Trident-RAS-API.html
+++ b/content/releases/1.0.6/Trident-RAS-API.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="trident-ras-api">Trident RAS API</h2>
+<div class="documentation-content"><h2 id="trident-ras-api">Trident RAS API</h2>
 
 <p>The Trident RAS (Resource Aware Scheduler) API provides a mechanism to allow users to specify the resource consumption of a Trident topology. The API looks exactly like the base RAS API, only it is called on Trident Streams instead of Bolts and Spouts.</p>
 
@@ -192,7 +192,7 @@
 <p>Resource declarations may be called after any operation. The operations without explicit resources will get the defaults. If you choose to set resources for only some operations, defaults must be declared, or topology submission will fail.
 Resource declarations have the same <em>boundaries</em> as parallelism hints. They don&#39;t cross any groupings, shufflings, or any other kind of repartitioning.
 Resources are declared per operation, but get combined within boundaries.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Trident-spouts.html b/content/releases/1.0.6/Trident-spouts.html
index efbcaed..2ab8611 100644
--- a/content/releases/1.0.6/Trident-spouts.html
+++ b/content/releases/1.0.6/Trident-spouts.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="trident-spouts">Trident spouts</h1>
+<div class="documentation-content"><h1 id="trident-spouts">Trident spouts</h1>
 
 <p>Like in the vanilla Storm API, spouts are the source of streams in a Trident topology. On top of the vanilla Storm spouts, Trident exposes additional APIs for more sophisticated spouts.</p>
 
@@ -182,7 +182,7 @@
 </ol>
 
 <p>And, as mentioned at the beginning of this tutorial, you can use regular IRichSpouts as well.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Trident-state.html b/content/releases/1.0.6/Trident-state.html
index bbb776d..04f5c70 100644
--- a/content/releases/1.0.6/Trident-state.html
+++ b/content/releases/1.0.6/Trident-state.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Trident has first-class abstractions for reading from and writing to stateful sources. The state can either be internal to the topology – e.g., kept in-memory and backed by HDFS – or externally stored in a database like Memcached or Cassandra. There&#39;s no difference in the Trident API for either case.</p>
+<div class="documentation-content"><p>Trident has first-class abstractions for reading from and writing to stateful sources. The state can either be internal to the topology – e.g., kept in-memory and backed by HDFS – or externally stored in a database like Memcached or Cassandra. There&#39;s no difference in the Trident API for either case.</p>
 
 <p>Trident manages state in a fault-tolerant way so that state updates are idempotent in the face of retries and failures. This lets you reason about Trident topologies as if each message were processed exactly-once.</p>
 
@@ -413,7 +413,7 @@
 <p>Finally, Trident provides the <a href="http://github.com/apache/storm/blob/v1.0.6/storm-core/src/jvm/org/apache/storm/trident/state/map/SnapshottableMap.java">SnapshottableMap</a> class that turns a MapState into a Snapshottable object, by storing global aggregations into a fixed key.</p>
 
 <p>Take a look at the implementation of <a href="https://github.com/nathanmarz/trident-memcached/blob/master/src/jvm/trident/memcached/MemcachedState.java">MemcachedState</a> to see how all these utilities can be put together to make a high performance MapState implementation. MemcachedState allows you to choose between opaque transactional, transactional, and non-transactional semantics.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Trident-tutorial.html b/content/releases/1.0.6/Trident-tutorial.html
index ce93374..7654723 100644
--- a/content/releases/1.0.6/Trident-tutorial.html
+++ b/content/releases/1.0.6/Trident-tutorial.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Trident is a high-level abstraction for doing realtime computing on top of Storm. It allows you to seamlessly intermix high throughput (millions of messages per second), stateful stream processing with low latency distributed querying. If you&#39;re familiar with high level batch processing tools like Pig or Cascading, the concepts of Trident will be very familiar – Trident has joins, aggregations, grouping, functions, and filters. In addition to these, Trident adds primitives for doing stateful, incremental processing on top of any database or persistence store. Trident has consistent, exactly-once semantics, so it is easy to reason about Trident topologies.</p>
+<div class="documentation-content"><p>Trident is a high-level abstraction for doing realtime computing on top of Storm. It allows you to seamlessly intermix high throughput (millions of messages per second), stateful stream processing with low latency distributed querying. If you&#39;re familiar with high level batch processing tools like Pig or Cascading, the concepts of Trident will be very familiar – Trident has joins, aggregations, grouping, functions, and filters. In addition to these, Trident adds primitives for doing stateful, incremental processing on top of any database or persistence store. Trident has consistent, exactly-once semantics, so it is easy to reason about Trident topologies.</p>
 
 <h2 id="illustrative-example">Illustrative example</h2>
 
@@ -356,7 +356,7 @@
 <h2 id="conclusion">Conclusion</h2>
 
 <p>Trident makes realtime computation elegant. You&#39;ve seen how high throughput stream processing, state manipulation, and low-latency querying can be seamlessly intermixed via Trident&#39;s API. Trident lets you express your realtime computations in a natural way while still getting maximal performance.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Troubleshooting.html b/content/releases/1.0.6/Troubleshooting.html
index 81b6c37..07d91cd 100644
--- a/content/releases/1.0.6/Troubleshooting.html
+++ b/content/releases/1.0.6/Troubleshooting.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page lists issues people have run into when using Storm along with their solutions.</p>
+<div class="documentation-content"><p>This page lists issues people have run into when using Storm along with their solutions.</p>
 
 <h3 id="worker-processes-are-crashing-on-startup-with-no-stack-trace">Worker processes are crashing on startup with no stack trace</h3>
 
@@ -279,7 +279,7 @@
 <ul>
 <li>This means that you&#39;re emitting a mutable object as an output tuple. Everything you emit into the output collector must be immutable. What&#39;s happening is that your bolt is modifying the object while it is being serialized to be sent over the network.</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Tutorial.html b/content/releases/1.0.6/Tutorial.html
index 1c886c4..0ef8b3d 100644
--- a/content/releases/1.0.6/Tutorial.html
+++ b/content/releases/1.0.6/Tutorial.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>In this tutorial, you&#39;ll learn how to create Storm topologies and deploy them to a Storm cluster. Java will be the main language used, but a few examples will use Python to illustrate Storm&#39;s multi-language capabilities.</p>
+<div class="documentation-content"><p>In this tutorial, you&#39;ll learn how to create Storm topologies and deploy them to a Storm cluster. Java will be the main language used, but a few examples will use Python to illustrate Storm&#39;s multi-language capabilities.</p>
 
 <h2 id="preliminaries">Preliminaries</h2>
 
@@ -428,7 +428,7 @@
 <h2 id="conclusion">Conclusion</h2>
 
 <p>This tutorial gave a broad overview of developing, testing, and deploying Storm topologies. The rest of the documentation dives deeper into all the aspects of using Storm.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Understanding-the-parallelism-of-a-Storm-topology.html b/content/releases/1.0.6/Understanding-the-parallelism-of-a-Storm-topology.html
index da8b94a..9eef7a9 100644
--- a/content/releases/1.0.6/Understanding-the-parallelism-of-a-Storm-topology.html
+++ b/content/releases/1.0.6/Understanding-the-parallelism-of-a-Storm-topology.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="what-makes-a-running-topology-worker-processes-executors-and-tasks">What makes a running topology: worker processes, executors and tasks</h2>
+<div class="documentation-content"><h2 id="what-makes-a-running-topology-worker-processes-executors-and-tasks">What makes a running topology: worker processes, executors and tasks</h2>
 
 <p>Storm distinguishes between the following three main entities that are used to actually run a topology in a Storm cluster:</p>
 
@@ -274,7 +274,7 @@
 <li><a href="Tutorial.html">Tutorial</a></li>
 <li><a href="javadocs/">Storm API documentation</a>, most notably the class <code>Config</code></li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Using-non-JVM-languages-with-Storm.html b/content/releases/1.0.6/Using-non-JVM-languages-with-Storm.html
index 07d621d..9b8259c 100644
--- a/content/releases/1.0.6/Using-non-JVM-languages-with-Storm.html
+++ b/content/releases/1.0.6/Using-non-JVM-languages-with-Storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li>two pieces: creating topologies and implementing spouts and bolts in other languages</li>
 <li>creating topologies in another language is easy since topologies are just thrift structures (link to storm.thrift)</li>
 <li>implementing spouts and bolts in another language is called a &quot;multilang components&quot; or &quot;shelling&quot;
@@ -198,7 +198,7 @@
 <p>Then you can connect to Nimbus using the Thrift API and submit the topology, passing {uploaded-jar-location} into the submitTopology method. For reference, here&#39;s the submitTopology definition:</p>
 <div class="highlight"><pre><code class="language-" data-lang="">void submitTopology(1: string name, 2: string uploadedJarLocation, 3: string jsonConf, 4: StormTopology topology)
     throws (1: AlreadyAliveException e, 2: InvalidTopologyException ite);
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/Windowing.html b/content/releases/1.0.6/Windowing.html
index 4354e21..3a1e98e 100644
--- a/content/releases/1.0.6/Windowing.html
+++ b/content/releases/1.0.6/Windowing.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm core has support for processing a group of tuples that falls within a window. Windows are specified with the 
+<div class="documentation-content"><p>Storm core has support for processing a group of tuples that fall within a window. Windows are specified with the 
 following two parameters:</p>
 
 <ol>
@@ -359,7 +359,7 @@
 
 <p>An example topology <code>SlidingWindowTopology</code> shows how to use the APIs to compute a sliding window sum and a tumbling window 
 average.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/distcache-blobstore.html b/content/releases/1.0.6/distcache-blobstore.html
index 043c62c..5cf40cc 100644
--- a/content/releases/1.0.6/distcache-blobstore.html
+++ b/content/releases/1.0.6/distcache-blobstore.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="storm-distributed-cache-api">Storm Distributed Cache API</h1>
+<div class="documentation-content"><h1 id="storm-distributed-cache-api">Storm Distributed Cache API</h1>
 
 <p>The distributed cache feature in storm is used to efficiently distribute files
 (or blobs, which is the equivalent terminology for a file in the distributed
@@ -799,7 +799,7 @@
  2: required string session;
  3: optional i64 data_size;
 }
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/dynamic-log-level-settings.html b/content/releases/1.0.6/dynamic-log-level-settings.html
index 2fe8576..240b6a6 100644
--- a/content/releases/1.0.6/dynamic-log-level-settings.html
+++ b/content/releases/1.0.6/dynamic-log-level-settings.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>We have added the ability to set log level settings for a running topology using the Storm UI and the Storm CLI. </p>
+<div class="documentation-content"><p>We have added the ability to adjust log level settings for a running topology using the Storm UI and the Storm CLI.</p>
 
 <p>The log level settings apply the same way as you&#39;d expect from log4j: all we are doing is telling log4j to set the level of the logger you provide. If you set the log level of a parent logger, the child loggers start using that level (unless a child already has a more restrictive level). A timeout can optionally be provided (except for DEBUG mode, where it is required in the UI) so that workers reset log levels automatically.</p>
 
@@ -179,7 +179,7 @@
 <p><code>./bin/storm set_log_level my_topology -r ROOT</code></p>
 
 <p>Clears the ROOT logger dynamic log level, resetting it to its original value.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/dynamic-worker-profiling.html b/content/releases/1.0.6/dynamic-worker-profiling.html
index 17a2b70..294706c 100644
--- a/content/releases/1.0.6/dynamic-worker-profiling.html
+++ b/content/releases/1.0.6/dynamic-worker-profiling.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>In multi-tenant mode, storm launches long-running JVMs across cluster without sudo access to user. Self-serving of Java heap-dumps, jstacks and java profiling of these JVMs would improve users&#39; ability to analyze and debug issues when monitoring it actively.</p>
+<div class="documentation-content"><p>In multi-tenant mode, Storm launches long-running JVMs across the cluster without giving users sudo access. Self-service Java heap dumps, jstacks, and Java profiling of these JVMs improve users&#39; ability to analyze and debug issues while actively monitoring them.</p>
 
 <p>The Storm dynamic profiler lets you take heap dumps, jprofile recordings, or jstacks on demand for a worker JVM running on the cluster. It lets users download these dumps from the browser and analyze them with their favorite tools. The UI component page lists the workers for the component along with action buttons, and the logviewer lets you download the dumps they generate. Please see the screenshots for more information.</p>
 
@@ -171,7 +171,7 @@
 <h2 id="configuration">Configuration</h2>
 
 <p>The &quot;worker.profiler.command&quot; setting can be configured to point to specific pluggable profiler and heap-dump commands. &quot;worker.profiler.enabled&quot; can be set to false if the plugin is not available or the JDK does not support flight recording, so that the worker JVM options will not include &quot;worker.profiler.childopts&quot;. To use a different profiler plugin, change these configurations.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/flux.html b/content/releases/1.0.6/flux.html
index 2055517..01f1d66 100644
--- a/content/releases/1.0.6/flux.html
+++ b/content/releases/1.0.6/flux.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>A framework for creating and deploying Apache Storm streaming computations with less friction.</p>
+<div class="documentation-content"><p>A framework for creating and deploying Apache Storm streaming computations with less friction.</p>
 
 <h2 id="definition">Definition</h2>
 
@@ -908,7 +908,7 @@
   <span class="na">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">org.apache.storm.flux.test.TridentTopologySource"</span>
   <span class="c1"># Flux will look for "getTopology", this will override that.</span>
   <span class="na">methodName</span><span class="pi">:</span> <span class="s2">"</span><span class="s">getTopologyWithDifferentMethodName"</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/index.html b/content/releases/1.0.6/index.html
index e9aa523..bf2ad00 100644
--- a/content/releases/1.0.6/index.html
+++ b/content/releases/1.0.6/index.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<blockquote>
+<div class="documentation-content"><blockquote>
 <h4 id="note">NOTE</h4>
 
 <p>In the latest version, the class packages have been changed from &quot;backtype.storm&quot; to &quot;org.apache.storm&quot; so the topology code compiled with older version won&#39;t run on the Storm 1.0.0 just like that. Backward compatibility is available through following configuration </p>
@@ -256,7 +256,7 @@
 <li><a href="Multilang-protocol.html">Multilang protocol</a> (how to provide support for another language)</li>
 <li><a href="Implementation-docs.html">Implementation docs</a></li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/nimbus-ha-design.html b/content/releases/1.0.6/nimbus-ha-design.html
index 71e206d..57cde81 100644
--- a/content/releases/1.0.6/nimbus-ha-design.html
+++ b/content/releases/1.0.6/nimbus-ha-design.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="problem-statement">Problem Statement:</h2>
+<div class="documentation-content"><h2 id="problem-statement">Problem Statement:</h2>
 
 <p>Currently the Storm master, aka Nimbus, is a process that runs on a single machine under supervision. In most cases a 
 Nimbus failure is transient and it is restarted by the supervisor. However, sometimes when disks fail and networks 
@@ -361,7 +361,7 @@
 <p>Note: Even though all nimbus hosts have watchers on zookeeper to be notified immediately as soon as a new topology is available for code
 download, the callback pretty much never results in code download. In practice we have observed that the desired replication is only achieved once the background thread runs. 
 So you should expect your topology submission time to be somewhere between 0 and (2 * nimbus.code.sync.freq.secs) for any nimbus.min.replication.count &gt; 1.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/storm-cassandra.html b/content/releases/1.0.6/storm-cassandra.html
index 1d6885f..2eff1fb 100644
--- a/content/releases/1.0.6/storm-cassandra.html
+++ b/content/releases/1.0.6/storm-cassandra.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="bolt-api-implementation-for-apache-cassandra">Bolt API implementation for Apache Cassandra</h3>
+<div class="documentation-content"><h3 id="bolt-api-implementation-for-apache-cassandra">Bolt API implementation for Apache Cassandra</h3>
 
 <p>This library provides a core Storm bolt on top of Apache Cassandra, along with a 
 simple DSL to map a Storm <em>Tuple</em> to a Cassandra Query Language <em>Statement</em>.</p>
@@ -412,7 +412,7 @@
 <li>Sriharsha Chintalapani (<a href="mailto:sriharsha@apache.org">sriharsha@apache.org</a>)</li>
 <li>P. Taylor Goetz (<a href="mailto:ptgoetz@apache.org">ptgoetz@apache.org</a>)</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/storm-elasticsearch.html b/content/releases/1.0.6/storm-elasticsearch.html
index a0e6a13..3356497 100644
--- a/content/releases/1.0.6/storm-elasticsearch.html
+++ b/content/releases/1.0.6/storm-elasticsearch.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="storm-elasticsearch-bolt-trident-state">Storm Elasticsearch Bolt &amp; Trident State</h1>
+<div class="documentation-content"><h1 id="storm-elasticsearch-bolt-trident-state">Storm Elasticsearch Bolt &amp; Trident State</h1>
 
 <p>EsIndexBolt, EsPercolateBolt, and EsState allow users to stream data from Storm directly into Elasticsearch.
   For a detailed description, please refer to the following.</p>
@@ -245,7 +245,7 @@
 <li>Sriharsha Chintalapani (<a href="https://github.com/harshach">@harshach</a>)</li>
 <li>Jungtaek Lim (<a href="https://github.com/HeartSaVioR">@HeartSaVioR</a>)</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/storm-eventhubs.html b/content/releases/1.0.6/storm-eventhubs.html
index 9015298..aed231e 100644
--- a/content/releases/1.0.6/storm-eventhubs.html
+++ b/content/releases/1.0.6/storm-eventhubs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm spout and bolt implementation for Microsoft Azure Eventhubs</p>
+<div class="documentation-content"><p>Storm spout and bolt implementation for Microsoft Azure Eventhubs</p>
 
 <h3 id="build">build</h3>
 <div class="highlight"><pre><code class="language-" data-lang="">mvn clean package
@@ -178,7 +178,7 @@
 
 <h3 id="windows-azure-eventhubs">Windows Azure Eventhubs</h3>
 <div class="highlight"><pre><code class="language-" data-lang="">http://azure.microsoft.com/en-us/services/event-hubs/
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/storm-hbase.html b/content/releases/1.0.6/storm-hbase.html
index 11e5e2d..d2a650e 100644
--- a/content/releases/1.0.6/storm-hbase.html
+++ b/content/releases/1.0.6/storm-hbase.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm/Trident integration for <a href="https://hbase.apache.org">Apache HBase</a></p>
+<div class="documentation-content"><p>Storm/Trident integration for <a href="https://hbase.apache.org">Apache HBase</a></p>
 
 <h2 id="usage">Usage</h2>
 
@@ -359,7 +359,7 @@
         <span class="o">}</span>
     <span class="o">}</span>
 <span class="o">}</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/storm-hdfs.html b/content/releases/1.0.6/storm-hdfs.html
index 9375660..47bbca2 100644
--- a/content/releases/1.0.6/storm-hdfs.html
+++ b/content/releases/1.0.6/storm-hdfs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm components for interacting with HDFS file systems</p>
+<div class="documentation-content"><p>Storm components for interacting with HDFS file systems</p>
 
 <h2 id="usage">Usage</h2>
 
@@ -460,7 +460,7 @@
 <p>On worker hosts the bolt/trident-state code will use the keytab file with the principal provided in the config to authenticate with the 
 Namenode. This method is a little risky, as you need to ensure all workers have the keytab file at the same location, and you need
 to remember this as you bring up new hosts in the cluster.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/storm-hive.html b/content/releases/1.0.6/storm-hive.html
index c96ca92..2fe563b 100644
--- a/content/releases/1.0.6/storm-hive.html
+++ b/content/releases/1.0.6/storm-hive.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Hive offers streaming API that allows data to be written continuously into Hive. The incoming data 
+<div class="documentation-content"><p>Hive offers a streaming API that allows data to be written continuously into Hive. The incoming data 
  can be committed continuously in small batches of records into an existing Hive partition or table. Once the data
  is committed, it is immediately visible to all Hive queries. More info on the Hive Streaming API: 
  <a href="https://cwiki.apache.org/confluence/display/Hive/Streaming+Data+Ingest">https://cwiki.apache.org/confluence/display/Hive/Streaming+Data+Ingest</a></p>
@@ -303,7 +303,7 @@
 
    <span class="n">StateFactory</span> <span class="n">factory</span> <span class="o">=</span> <span class="k">new</span> <span class="n">HiveStateFactory</span><span class="o">().</span><span class="na">withOptions</span><span class="o">(</span><span class="n">hiveOptions</span><span class="o">);</span>
    <span class="n">TridentState</span> <span class="n">state</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">partitionPersist</span><span class="o">(</span><span class="n">factory</span><span class="o">,</span> <span class="n">hiveFields</span><span class="o">,</span> <span class="k">new</span> <span class="n">HiveUpdater</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">());</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/storm-jdbc.html b/content/releases/1.0.6/storm-jdbc.html
index ccd55a9..72c0c3e 100644
--- a/content/releases/1.0.6/storm-jdbc.html
+++ b/content/releases/1.0.6/storm-jdbc.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm/Trident integration for JDBC. This package includes the core bolts and trident states that allows a storm topology
+<div class="documentation-content"><p>Storm/Trident integration for JDBC. This package includes the core bolts and trident states that allow a storm topology
 either to insert storm tuples into a database table or to execute select queries against a database and enrich tuples 
 in a storm topology.</p>
 
@@ -403,7 +403,7 @@
 <div class="highlight"><pre><code class="language-" data-lang="">select * from user;
 </code></pre></div>
 <p>For trident you can view <code>org.apache.storm.jdbc.topology.UserPersistanceTridentTopology</code>.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/storm-jms-example.html b/content/releases/1.0.6/storm-jms-example.html
index be6ad5b..7b88bd6 100644
--- a/content/releases/1.0.6/storm-jms-example.html
+++ b/content/releases/1.0.6/storm-jms-example.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="example-storm-jms-topology">Example Storm JMS Topology</h2>
+<div class="documentation-content"><h2 id="example-storm-jms-topology">Example Storm JMS Topology</h2>
 
 <p>The storm-jms source code contains an example project (in the &quot;examples&quot; directory) that 
 builds a multi-bolt/multi-spout topology (depicted below) using the JMS Spout and JMS Bolt components.</p>
@@ -248,7 +248,7 @@
 DEBUG (backtype.storm.contrib.jms.spout.JmsSpout:251) - JMS Message acked: ID:budreau.home-60117-1321735025796-0:0:1:1:1
 </code></pre></div>
 <p>The topology will run for 2 minutes, then gracefully shut down.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/storm-jms-spring.html b/content/releases/1.0.6/storm-jms-spring.html
index 878df7a..d5938b0 100644
--- a/content/releases/1.0.6/storm-jms-spring.html
+++ b/content/releases/1.0.6/storm-jms-spring.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="connecting-to-jms-using-springs-jms-support">Connecting to JMS Using Spring&#39;s JMS Support</h3>
+<div class="documentation-content"><h3 id="connecting-to-jms-using-springs-jms-support">Connecting to JMS Using Spring&#39;s JMS Support</h3>
 
 <p>Create a Spring applicationContext.xml file that defines one or more destination (topic/queue) beans, as well as a connection factory.</p>
 <div class="highlight"><pre><code class="language-" data-lang=""><span class="cp">&lt;?xml version="1.0" encoding="UTF-8"?&gt;</span>
@@ -163,7 +163,7 @@
         <span class="na">brokerURL=</span><span class="s">"tcp://localhost:61616"</span> <span class="nt">/&gt;</span>
 
 <span class="nt">&lt;/beans&gt;</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/storm-jms.html b/content/releases/1.0.6/storm-jms.html
index 1dc4c80..669f321 100644
--- a/content/releases/1.0.6/storm-jms.html
+++ b/content/releases/1.0.6/storm-jms.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="about-storm-jms">About Storm JMS</h2>
+<div class="documentation-content"><h2 id="about-storm-jms">About Storm JMS</h2>
 
 <p>Storm JMS is a generic framework for integrating JMS messaging within the Storm framework.</p>
 
@@ -169,7 +169,7 @@
 <p><a href="storm-jms-example.html">Example Topology</a></p>
 
 <p><a href="storm-jms-spring.html">Using Spring JMS</a></p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/storm-kafka-client.html b/content/releases/1.0.6/storm-kafka-client.html
index dbfa210..23c386a 100644
--- a/content/releases/1.0.6/storm-kafka-client.html
+++ b/content/releases/1.0.6/storm-kafka-client.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="storm-apache-kafka-integration-using-the-kafka-client-jar">Storm Apache Kafka integration using the kafka-client jar</h1>
+<div class="documentation-content"><h1 id="storm-apache-kafka-integration-using-the-kafka-client-jar">Storm Apache Kafka integration using the kafka-client jar</h1>
 
 <p>This includes the new Apache Kafka consumer API.</p>
 
@@ -476,7 +476,7 @@
   <span class="o">.</span><span class="na">setTupleTrackingEnforced</span><span class="o">(</span><span class="kc">true</span><span class="o">)</span>
 </code></pre></div>
 <p>Note: This setting has no effect with AT_LEAST_ONCE processing guarantee, where tuple tracking is required and therefore always enabled.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/storm-kafka.html b/content/releases/1.0.6/storm-kafka.html
index dfc2335..5bdc618 100644
--- a/content/releases/1.0.6/storm-kafka.html
+++ b/content/releases/1.0.6/storm-kafka.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Provides core Storm and Trident spout implementations for consuming data from Apache Kafka 0.8.x.</p>
+<div class="documentation-content"><p>Provides core Storm and Trident spout implementations for consuming data from Apache Kafka 0.8.x.</p>
 
 <h2 id="spouts">Spouts</h2>
 
@@ -493,7 +493,7 @@
 
         <span class="n">Config</span> <span class="n">conf</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Config</span><span class="o">();</span>
         <span class="n">StormSubmitter</span><span class="o">.</span><span class="na">submitTopology</span><span class="o">(</span><span class="s">"kafkaTridentTest"</span><span class="o">,</span> <span class="n">conf</span><span class="o">,</span> <span class="n">topology</span><span class="o">.</span><span class="na">build</span><span class="o">());</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/storm-metrics-profiling-internal-actions.html b/content/releases/1.0.6/storm-metrics-profiling-internal-actions.html
index 1072ea4..a66bc93 100644
--- a/content/releases/1.0.6/storm-metrics-profiling-internal-actions.html
+++ b/content/releases/1.0.6/storm-metrics-profiling-internal-actions.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>With the addition of these metrics, Storm users can collect, view, and analyze the performance of various internal actions.  The actions that are profiled include thrift rpc calls and http quests within the storm daemons. For instance, in the Storm Nimbus daemon, the following thrift calls defined in the Nimbus$Iface are profiled:</p>
+<div class="documentation-content"><p>With the addition of these metrics, Storm users can collect, view, and analyze the performance of various internal actions.  The actions that are profiled include Thrift RPC calls and HTTP requests within the storm daemons. For instance, in the Storm Nimbus daemon, the following thrift calls defined in the Nimbus$Iface are profiled:</p>
 
 <ul>
 <li>submitTopology</li>
@@ -211,7 +211,7 @@
 <p>For more information about io.dropwizard.metrics and metrics-clojure packages please reference their original documentation:
 - <a href="https://dropwizard.github.io/metrics/3.1.0/">https://dropwizard.github.io/metrics/3.1.0/</a>
 - <a href="http://metrics-clojure.readthedocs.org/en/latest/">http://metrics-clojure.readthedocs.org/en/latest/</a></p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/storm-mongodb.html b/content/releases/1.0.6/storm-mongodb.html
index 9dd4b0f..44d6e18 100644
--- a/content/releases/1.0.6/storm-mongodb.html
+++ b/content/releases/1.0.6/storm-mongodb.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm/Trident integration for <a href="https://www.mongodb.org/">MongoDB</a>. This package includes the core bolts and trident states that allows a storm topology to either insert storm tuples in a database collection or to execute update queries against a database collection in a storm topology.</p>
+<div class="documentation-content"><p>Storm/Trident integration for <a href="https://www.mongodb.org/">MongoDB</a>. This package includes the core bolts and trident states that allow a storm topology either to insert storm tuples into a database collection or to execute update queries against a database collection.</p>
 
 <h2 id="insert-into-database">Insert into Database</h2>
 
@@ -323,7 +323,7 @@
 <ul>
 <li>Sriharsha Chintalapani (<a href="mailto:sriharsha@apache.org">sriharsha@apache.org</a>)</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/storm-mqtt.html b/content/releases/1.0.6/storm-mqtt.html
index 3baf7a2..4bd9360 100644
--- a/content/releases/1.0.6/storm-mqtt.html
+++ b/content/releases/1.0.6/storm-mqtt.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="about">About</h2>
+<div class="documentation-content"><h2 id="about">About</h2>
 
 <p>MQTT is a lightweight publish/subscribe protocol frequently used in IoT applications.</p>
 
@@ -483,7 +483,7 @@
 <ul>
 <li>P. Taylor Goetz (<a href="mailto:ptgoetz@apache.org">ptgoetz@apache.org</a>)</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/storm-redis.html b/content/releases/1.0.6/storm-redis.html
index 9885d0b..9add4ee 100644
--- a/content/releases/1.0.6/storm-redis.html
+++ b/content/releases/1.0.6/storm-redis.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm/Trident integration for <a href="http://redis.io/">Redis</a></p>
+<div class="documentation-content"><p>Storm/Trident integration for <a href="http://redis.io/">Redis</a></p>
 
 <p>Storm-redis uses Jedis for Redis client.</p>
 
@@ -378,7 +378,7 @@
 <li>Robert Evans (<a href="https://github.com/revans2">@revans2</a>)</li>
 <li>Jungtaek Lim (<a href="https://github.com/HeartSaVioR">@HeartSaVioR</a>)</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/storm-solr.html b/content/releases/1.0.6/storm-solr.html
index 0314b6f..4f3a2bc 100644
--- a/content/releases/1.0.6/storm-solr.html
+++ b/content/releases/1.0.6/storm-solr.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm and Trident integration for Apache Solr. This package includes a bolt and a trident state that enable a Storm topology
+<div class="documentation-content"><p>Storm and Trident integration for Apache Solr. This package includes a bolt and a trident state that enable a Storm topology
 to stream the contents of storm tuples to index Solr collections.</p>
 
 <h1 id="index-storm-tuples-into-a-solr-collection">Index Storm tuples into a Solr collection</h1>
@@ -312,7 +312,7 @@
 <p>You can also see the results by opening the Apache Solr UI and pasting the <code>id</code> pattern in the <code>q</code> textbox in the queries page</p>
 
 <p><a href="http://localhost:8983/solr/#/gettingstarted_shard1_replica2/query">http://localhost:8983/solr/#/gettingstarted_shard1_replica2/query</a></p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/storm-sql-internal.html b/content/releases/1.0.6/storm-sql-internal.html
index 523444e..d299f20 100644
--- a/content/releases/1.0.6/storm-sql-internal.html
+++ b/content/releases/1.0.6/storm-sql-internal.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page describes the design and the implementation of the Storm SQL integration.</p>
+<div class="documentation-content"><p>This page describes the design and the implementation of the Storm SQL integration.</p>
 
 <h2 id="overview">Overview</h2>
 
@@ -191,7 +191,7 @@
 <h2 id="dependency">Dependency</h2>
 
 <p>StormSQL does not ship the dependency of the external data sources in the packaged JAR. The users have to provide the dependency in the <code>extlib</code> directory of the worker node.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/storm-sql.html b/content/releases/1.0.6/storm-sql.html
index f93f9de..c3f3824 100644
--- a/content/releases/1.0.6/storm-sql.html
+++ b/content/releases/1.0.6/storm-sql.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The Storm SQL integration allows users to run SQL queries over streaming data in Storm. Not only the SQL interface allows faster development cycles on streaming analytics, but also opens up the opportunities to unify batch data processing like <a href="///hive.apache.org">Apache Hive</a> and real-time streaming data analytics.</p>
+<div class="documentation-content"><p>The Storm SQL integration allows users to run SQL queries over streaming data in Storm. Not only does the SQL interface allow faster development cycles on streaming analytics, it also opens up opportunities to unify batch data processing, as in <a href="///hive.apache.org">Apache Hive</a>, with real-time streaming data analytics.</p>
 
 <p>At a very high level StormSQL compiles the SQL queries to <a href="Trident-API-Overview.html">Trident</a> topologies and executes them in Storm clusters. This document describes how to use StormSQL as an end user. For more details on the design and implementation of StormSQL, please refer to <a href="storm-sql-internal.html">this</a> page.</p>
 
@@ -231,7 +231,7 @@
 <p>Users also need to provide the dependency of the external data sources in the <code>extlib</code> directory. Otherwise the topology will fail to run because of <code>ClassNotFoundException</code>.</p>
 
 <p>The current implementation of the Kafka connector in StormSQL assumes both the input and the output are in JSON format. The connector does not yet recognize the <code>INPUTFORMAT</code> and <code>OUTPUTFORMAT</code> clauses.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.0.6/windows-users-guide.html b/content/releases/1.0.6/windows-users-guide.html
index d98adea..164bcd7 100644
--- a/content/releases/1.0.6/windows-users-guide.html
+++ b/content/releases/1.0.6/windows-users-guide.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page guides how to set up environment on Windows for Apache Storm.</p>
+<div class="documentation-content"><p>This page describes how to set up an environment on Windows for Apache Storm.</p>
 
 <h2 id="symbolic-link">Symbolic Link</h2>
 
@@ -172,7 +172,7 @@
 on Nimbus and all of the Supervisor nodes.  This will also disable features that require symlinks.  Currently this only affects downloading
 dependent blobs, but that may change in the future.  Some topologies may rely on symbolic links to resources in the current working directory of the worker that are
 created as a convenience, so it is not a 100% backwards-compatible change.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Acking-framework-implementation.html b/content/releases/1.1.2/Acking-framework-implementation.html
index 539c1fa..5300430 100644
--- a/content/releases/1.1.2/Acking-framework-implementation.html
+++ b/content/releases/1.1.2/Acking-framework-implementation.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p><a href="https://github.com/apache/incubator-storm/blob/46c3ba7/storm-core/src/clj/backtype/storm/daemon/acker.clj#L28">Storm&#39;s acker</a> tracks completion of each tupletree with a checksum hash: each time a tuple is sent, its value is XORed into the checksum, and each time a tuple is acked its value is XORed in again. If all tuples have been successfully acked, the checksum will be zero (the odds that the checksum will be zero otherwise are vanishingly small).</p>
+<div class="documentation-content"><p><a href="https://github.com/apache/incubator-storm/blob/46c3ba7/storm-core/src/clj/backtype/storm/daemon/acker.clj#L28">Storm&#39;s acker</a> tracks completion of each tuple tree with a checksum hash: each time a tuple is sent, its value is XORed into the checksum, and each time a tuple is acked its value is XORed in again. If all tuples have been successfully acked, the checksum will be zero (the odds that the checksum will be zero otherwise are vanishingly small).</p>
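 The XOR bookkeeping above can be sketched in a few lines of plain Java. This is an illustration of the mechanism only, not Storm's actual acker code; the class and method names are hypothetical:

```java
// Illustration of the acker's XOR-checksum trick: each tuple id is XORed into
// the checksum once when the tuple is emitted and once when it is acked, so a
// fully acked tuple tree drives the checksum back to zero, regardless of order.
public class AckerChecksumDemo {
    public static long xorChecksum(long[] emittedIds, long[] ackedIds) {
        long checksum = 0L;
        for (long id : emittedIds) {
            checksum ^= id; // tuple sent
        }
        for (long id : ackedIds) {
            checksum ^= id; // tuple acked
        }
        return checksum;
    }

    public static void main(String[] args) {
        long[] ids = {0x1d3L, 0x9af2L, 0x77cc01L};
        // All emits matched by acks (in a different order): checksum returns to zero.
        System.out.println(xorChecksum(ids, new long[]{0x77cc01L, 0x1d3L, 0x9af2L}) == 0L);
        // One ack missing: checksum stays non-zero, so the tree is still pending.
        System.out.println(xorChecksum(ids, new long[]{0x1d3L, 0x9af2L}) != 0L);
    }
}
```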
 
 <p>You can read a bit more about the <a href="Guaranteeing-message-processing.html#what-is-storms-reliability-api">reliability mechanism</a> elsewhere on the wiki -- this explains the internal details.</p>
 
@@ -180,7 +180,7 @@
 <p>Internally, it holds several HashMaps (&#39;buckets&#39;) of its own, each holding a cohort of records that will expire at the same time.  Let&#39;s call the longest-lived bucket death row, and the most recent the nursery. Whenever a value is <code>.put()</code> to the RotatingMap, it is relocated to the nursery -- and removed from any other bucket it might have been in (effectively resetting its death clock).</p>
 
 <p>Whenever its owner calls <code>.rotate()</code>, the RotatingMap advances each cohort one step further towards expiration. (Typically, Storm objects call rotate on every receipt of a system tick stream tuple.) If there are any key-value pairs in the former death row bucket, the RotatingMap invokes a callback (given in the constructor) for each key-value pair, letting its owner take appropriate action (e.g., failing a tuple).</p>
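 The bucket-rotation scheme described above can be sketched as follows. This is a simplified stand-in for Storm's RotatingMap, not its actual implementation; the class name is hypothetical:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;

// Simplified stand-in for the rotating-bucket mechanism described above.
// The front bucket is the "nursery"; the last bucket is "death row".
public class RotatingBuckets<K, V> {
    private final Deque<HashMap<K, V>> buckets = new ArrayDeque<>();
    private final BiConsumer<K, V> expireCallback;

    public RotatingBuckets(int numBuckets, BiConsumer<K, V> expireCallback) {
        this.expireCallback = expireCallback;
        for (int i = 0; i < numBuckets; i++) {
            buckets.addFirst(new HashMap<K, V>());
        }
    }

    // Put (or refresh) an entry: remove it from any older bucket, then drop it
    // in the nursery -- effectively resetting its death clock.
    public void put(K key, V value) {
        for (Map<K, V> bucket : buckets) {
            bucket.remove(key);
        }
        buckets.peekFirst().put(key, value);
    }

    // Advance every cohort one step toward expiration; entries falling off
    // death row are handed to the expiration callback.
    public void rotate() {
        HashMap<K, V> deathRow = buckets.removeLast();
        for (Map.Entry<K, V> e : deathRow.entrySet()) {
            expireCallback.accept(e.getKey(), e.getValue());
        }
        buckets.addFirst(new HashMap<K, V>());
    }

    public static void main(String[] args) {
        RotatingBuckets<String, Integer> rm =
                new RotatingBuckets<>(2, (k, v) -> System.out.println("expired: " + k));
        rm.put("tuple-1", 1);
        rm.rotate();            // tuple-1 moves one step toward death row
        rm.rotate();            // tuple-1 falls off death row; prints "expired: tuple-1"
    }
}
```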
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Clojure-DSL.html b/content/releases/1.1.2/Clojure-DSL.html
index 65eb4cd..c42d15c 100644
--- a/content/releases/1.1.2/Clojure-DSL.html
+++ b/content/releases/1.1.2/Clojure-DSL.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm comes with a Clojure DSL for defining spouts, bolts, and topologies. The Clojure DSL has access to everything the Java API exposes, so if you&#39;re a Clojure user you can code Storm topologies without touching Java at all. The Clojure DSL is defined in the source in the <a href="http://github.com/apache/storm/blob/v1.1.2/storm-core/src/clj/org/apache/storm/clojure.clj">org.apache.storm.clojure</a> namespace.</p>
+<div class="documentation-content"><p>Storm comes with a Clojure DSL for defining spouts, bolts, and topologies. The Clojure DSL has access to everything the Java API exposes, so if you&#39;re a Clojure user you can code Storm topologies without touching Java at all. The Clojure DSL is defined in the source in the <a href="http://github.com/apache/storm/blob/v1.1.2/storm-core/src/clj/org/apache/storm/clojure.clj">org.apache.storm.clojure</a> namespace.</p>
 
 <p>This page outlines all the pieces of the Clojure DSL, including:</p>
 
@@ -371,7 +371,7 @@
 <h3 id="testing-topologies">Testing topologies</h3>
 
 <p><a href="http://www.pixelmachine.org/2011/12/17/Testing-Storm-Topologies.html">This blog post</a> and its <a href="http://www.pixelmachine.org/2011/12/21/Testing-Storm-Topologies-Part-2.html">follow-up</a> give a good overview of Storm&#39;s powerful built-in facilities for testing topologies in Clojure.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Command-line-client.html b/content/releases/1.1.2/Command-line-client.html
index 21af14d..25abb98 100644
--- a/content/releases/1.1.2/Command-line-client.html
+++ b/content/releases/1.1.2/Command-line-client.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page describes all the commands that are possible with the &quot;storm&quot; command line client. To learn how to set up your &quot;storm&quot; client to talk to a remote cluster, follow the instructions in <a href="Setting-up-development-environment.html">Setting up development environment</a>.</p>
+<div class="documentation-content"><p>This page describes all the commands that are possible with the &quot;storm&quot; command line client. To learn how to set up your &quot;storm&quot; client to talk to a remote cluster, follow the instructions in <a href="Setting-up-development-environment.html">Setting up development environment</a>.</p>
 
 <p>These commands are:</p>
 
@@ -423,7 +423,7 @@
 <p>Syntax: <code>storm help [command]</code></p>
 
 <p>Print one help message or list of available commands</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Common-patterns.html b/content/releases/1.1.2/Common-patterns.html
index 8a76bee..b0890a9 100644
--- a/content/releases/1.1.2/Common-patterns.html
+++ b/content/releases/1.1.2/Common-patterns.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page lists a variety of common patterns in Storm topologies.</p>
+<div class="documentation-content"><p>This page lists a variety of common patterns in Storm topologies.</p>
 
 <ol>
 <li>Batching</li>
@@ -212,7 +212,7 @@
 <p><code>KeyedFairBolt</code> also wraps the bolt containing your logic and makes sure your topology processes multiple DRPC invocations at the same time, instead of doing them serially one at a time.</p>
 
 <p>See <a href="Distributed-RPC.html">Distributed RPC</a> for more details.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Concepts.html b/content/releases/1.1.2/Concepts.html
index a08d6ba..ef1c678 100644
--- a/content/releases/1.1.2/Concepts.html
+++ b/content/releases/1.1.2/Concepts.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page lists the main concepts of Storm and links to resources where you can find more information. The concepts discussed are:</p>
+<div class="documentation-content"><p>This page lists the main concepts of Storm and links to resources where you can find more information. The concepts discussed are:</p>
 
 <ol>
 <li>Topologies</li>
@@ -268,7 +268,7 @@
 <ul>
 <li><a href="javadocs/org/apache/storm/Config.html#TOPOLOGY_WORKERS">Config.TOPOLOGY_WORKERS</a>: this config sets the number of workers to allocate for executing the topology</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Configuration.html b/content/releases/1.1.2/Configuration.html
index 9f5c91e..7afea1f 100644
--- a/content/releases/1.1.2/Configuration.html
+++ b/content/releases/1.1.2/Configuration.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm has a variety of configurations for tweaking the behavior of nimbus, supervisors, and running topologies. Some configurations are system configurations and cannot be modified on a topology by topology basis, whereas other configurations can be modified per topology. </p>
+<div class="documentation-content"><p>Storm has a variety of configurations for tweaking the behavior of nimbus, supervisors, and running topologies. Some configurations are system configurations and cannot be modified on a topology by topology basis, whereas other configurations can be modified per topology. </p>
 
 <p>Every configuration has a default value defined in <a href="http://github.com/apache/storm/blob/v1.1.2/conf/defaults.yaml">defaults.yaml</a> in the Storm codebase. You can override these configurations by defining a storm.yaml in the classpath of Nimbus and the supervisors. Finally, you can define a topology-specific configuration that you submit along with your topology when using <a href="javadocs/org/apache/storm/StormSubmitter.html">StormSubmitter</a>. However, the topology-specific configuration can only override configs prefixed with &quot;TOPOLOGY&quot;.</p>
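 For example, the override precedence described above might look like this in a storm.yaml placed on the Nimbus and supervisor classpaths. The key names below are taken from typical defaults.yaml entries (verify them against your Storm version's defaults.yaml):

```yaml
# storm.yaml -- overrides defaults.yaml cluster-wide.
nimbus.task.timeout.secs: 45    # system config: cannot be changed per topology
topology.workers: 2             # "topology."-prefixed: a topology-specific config may override it
topology.debug: false
```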
 
@@ -175,7 +175,7 @@
 <li><a href="Running-topologies-on-a-production-cluster.html">Running topologies on a production cluster</a>: lists useful configurations when running topologies on a cluster</li>
 <li><a href="Local-mode.html">Local mode</a>: lists useful configurations when using local mode</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Contributing-to-Storm.html b/content/releases/1.1.2/Contributing-to-Storm.html
index 3260ba8..9884bb1 100644
--- a/content/releases/1.1.2/Contributing-to-Storm.html
+++ b/content/releases/1.1.2/Contributing-to-Storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="getting-started-with-contributing">Getting started with contributing</h3>
+<div class="documentation-content"><h3 id="getting-started-with-contributing">Getting started with contributing</h3>
 
 <p>Some of the issues on the <a href="https://issues.apache.org/jira/browse/STORM">issue tracker</a> are marked with the <a href="https://issues.apache.org/jira/browse/STORM-2891?jql=project%20%3D%20STORM%20AND%20status%20%3D%20Open%20AND%20labels%20in%20(newbie%2C%20%22newbie%2B%2B%22)">&quot;Newbie&quot;</a> label. If you&#39;re interested in contributing to Storm but don&#39;t know where to begin, these are good issues to start with. These issues are a great way to get your feet wet with learning the codebase because they require learning about only an isolated portion of the codebase and are a relatively small amount of work.</p>
 
@@ -172,7 +172,7 @@
 <h3 id="contributing-documentation">Contributing documentation</h3>
 
 <p>Documentation contributions are very welcome! The best way to send contributions is as emails through the mailing list.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Creating-a-new-Storm-project.html b/content/releases/1.1.2/Creating-a-new-Storm-project.html
index 6fece99..c05609a 100644
--- a/content/releases/1.1.2/Creating-a-new-Storm-project.html
+++ b/content/releases/1.1.2/Creating-a-new-Storm-project.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page outlines how to set up a Storm project for development. The steps are:</p>
+<div class="documentation-content"><p>This page outlines how to set up a Storm project for development. The steps are:</p>
 
 <ol>
 <li>Add Storm jars to classpath</li>
@@ -166,7 +166,7 @@
 <p>For more information on writing topologies in other languages, see <a href="Using-non-JVM-languages-with-Storm.html">Using non-JVM languages with Storm</a>.</p>
 
 <p>To test that everything is working in Eclipse, you should now be able to <code>Run</code> the <code>WordCountTopology.java</code> file. You will see messages being emitted at the console for 10 seconds.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/DSLs-and-multilang-adapters.html b/content/releases/1.1.2/DSLs-and-multilang-adapters.html
index e50dc7d..b96ceb1 100644
--- a/content/releases/1.1.2/DSLs-and-multilang-adapters.html
+++ b/content/releases/1.1.2/DSLs-and-multilang-adapters.html
@@ -144,14 +144,14 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="https://github.com/velvia/ScalaStorm">Scala DSL</a></li>
 <li><a href="https://github.com/colinsurprenant/redstorm">JRuby DSL</a></li>
 <li><a href="Clojure-DSL.html">Clojure DSL</a></li>
 <li><a href="https://github.com/tomdz/storm-esper">Storm/Esper integration</a>: Streaming SQL on top of Storm</li>
 <li><a href="https://github.com/dan-blanchard/io-storm">io-storm</a>: Perl multilang adapter</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Daemon-Fault-Tolerance.html b/content/releases/1.1.2/Daemon-Fault-Tolerance.html
index e494fef..dfddcec 100644
--- a/content/releases/1.1.2/Daemon-Fault-Tolerance.html
+++ b/content/releases/1.1.2/Daemon-Fault-Tolerance.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm has several different daemon processes.  Nimbus that schedules workers, supervisors that launch and kill workers, the log viewer that gives access to logs, and the UI that shows the status of a cluster.</p>
+<div class="documentation-content"><p>Storm has several different daemon processes: Nimbus, which schedules workers; supervisors, which launch and kill workers; the log viewer, which gives access to logs; and the UI, which shows the status of a cluster.</p>
 
 <h2 id="what-happens-when-a-worker-dies">What happens when a worker dies?</h2>
 
@@ -169,7 +169,7 @@
 <h2 id="how-does-storm-guarantee-data-processing">How does Storm guarantee data processing?</h2>
 
 <p>Storm provides mechanisms to guarantee data processing even if nodes die or messages are lost. See <a href="Guaranteeing-message-processing.html">Guaranteeing message processing</a> for the details.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Defining-a-non-jvm-language-dsl-for-storm.html b/content/releases/1.1.2/Defining-a-non-jvm-language-dsl-for-storm.html
index 5cd4db3..3d9d722 100644
--- a/content/releases/1.1.2/Defining-a-non-jvm-language-dsl-for-storm.html
+++ b/content/releases/1.1.2/Defining-a-non-jvm-language-dsl-for-storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The right place to start to learn how to make a non-JVM DSL for Storm is <a href="http://github.com/apache/storm/blob/v1.1.2/storm-core/src/storm.thrift">storm-core/src/storm.thrift</a>. Since Storm topologies are just Thrift structures, and Nimbus is a Thrift daemon, you can create and submit topologies in any language.</p>
+<div class="documentation-content"><p>The right place to start to learn how to make a non-JVM DSL for Storm is <a href="http://github.com/apache/storm/blob/v1.1.2/storm-core/src/storm.thrift">storm-core/src/storm.thrift</a>. Since Storm topologies are just Thrift structures, and Nimbus is a Thrift daemon, you can create and submit topologies in any language.</p>
 
 <p>When you create the Thrift structs for spouts and bolts, the code for the spout or bolt is specified in the ComponentObject struct:</p>
 <div class="highlight"><pre><code class="language-" data-lang="">union ComponentObject {
@@ -165,7 +165,7 @@
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kt">void</span> <span class="nf">submitTopology</span><span class="o">(</span><span class="mi">1</span><span class="o">:</span> <span class="n">string</span> <span class="n">name</span><span class="o">,</span> <span class="mi">2</span><span class="o">:</span> <span class="n">string</span> <span class="n">uploadedJarLocation</span><span class="o">,</span> <span class="mi">3</span><span class="o">:</span> <span class="n">string</span> <span class="n">jsonConf</span><span class="o">,</span> <span class="mi">4</span><span class="o">:</span> <span class="n">StormTopology</span> <span class="n">topology</span><span class="o">)</span> <span class="kd">throws</span> <span class="o">(</span><span class="mi">1</span><span class="o">:</span> <span class="n">AlreadyAliveException</span> <span class="n">e</span><span class="o">,</span> <span class="mi">2</span><span class="o">:</span> <span class="n">InvalidTopologyException</span> <span class="n">ite</span><span class="o">);</span>
 </code></pre></div>
 <p>Finally, one of the key things to do in a non-JVM DSL is make it easy to define the entire topology in one file (the bolts, spouts, and the definition of the topology).</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Distributed-RPC.html b/content/releases/1.1.2/Distributed-RPC.html
index 473fd08..c8b8ab0 100644
--- a/content/releases/1.1.2/Distributed-RPC.html
+++ b/content/releases/1.1.2/Distributed-RPC.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The idea behind distributed RPC (DRPC) is to parallelize the computation of really intense functions on the fly using Storm. The Storm topology takes in as input a stream of function arguments, and it emits an output stream of the results for each of those function calls. </p>
+<div class="documentation-content"><p>The idea behind distributed RPC (DRPC) is to parallelize the computation of really intense functions on the fly using Storm. The Storm topology takes in as input a stream of function arguments, and it emits an output stream of the results for each of those function calls. </p>
 
 <p>DRPC is not so much a feature of Storm as it is a pattern expressed from Storm&#39;s primitives of streams, spouts, bolts, and topologies. DRPC could have been packaged as a separate library from Storm, but it&#39;s so useful that it&#39;s bundled with Storm.</p>
 
@@ -330,7 +330,7 @@
 <li>KeyedFairBolt for weaving the processing of multiple requests at the same time</li>
 <li>How to use <code>CoordinatedBolt</code> directly</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Eventlogging.html b/content/releases/1.1.2/Eventlogging.html
index fbb102b..9cba6ca 100644
--- a/content/releases/1.1.2/Eventlogging.html
+++ b/content/releases/1.1.2/Eventlogging.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="introduction">Introduction</h1>
+<div class="documentation-content"><h1 id="introduction">Introduction</h1>
 
 <p>The topology event inspector provides the ability to view tuples as they flow through different stages of a Storm topology.
 This could be useful for inspecting the tuples emitted at a spout or a bolt in the topology pipeline while the topology is running, without stopping or redeploying the topology. The normal flow of tuples from the spouts to the bolts is not affected by turning on event logging.</p>
@@ -247,7 +247,7 @@
     */</span>
     <span class="kt">void</span> <span class="nf">close</span><span class="o">();</span>
 <span class="o">}</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/FAQ.html b/content/releases/1.1.2/FAQ.html
index 0a2f46a..ba2aad6 100644
--- a/content/releases/1.1.2/FAQ.html
+++ b/content/releases/1.1.2/FAQ.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="best-practices">Best Practices</h2>
+<div class="documentation-content"><h2 id="best-practices">Best Practices</h2>
 
 <h3 id="what-rules-of-thumb-can-you-give-me-for-configuring-storm-trident">What rules of thumb can you give me for configuring Storm+Trident?</h3>
 
@@ -276,7 +276,7 @@
 <li>When possible, make your process incremental: each value that comes in makes the answer more and more accurate. A Trident ReducerAggregator is an operator that takes a prior result and a set of new records and returns a new result. This lets the result be cached and serialized to a datastore; if a server drops offline for a day and then comes back with a full day&#39;s worth of data in a rush, the old results will be calmly retrieved and updated.</li>
 <li>Lambda architecture: Record all events into an archival store (S3, HBase, HDFS) on receipt. In the fast layer, once the time window is clear, process the bucket to get an actionable answer, and ignore everything older than the time window. Periodically run a global aggregation to calculate a &quot;correct&quot; answer.</li>
 </ul>
-
+</div>
 
 
 	          </div>
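The incremental-processing advice above maps onto the ReducerAggregator shape: fold a batch of new records into a previously stored result. A minimal plain-Java sketch (hypothetical names, no Trident dependency) of that fold:

```java
import java.util.List;

// Stand-in for the ReducerAggregator shape: combine a prior result with
// a batch of new values and return the new result, which can then be
// re-persisted to a datastore.
public class RunningSum {
    // Starting state when no prior result exists yet.
    public static long init() { return 0L; }

    // Fold one batch of new records into the prior result.
    public static long reduce(long prior, List<Long> batch) {
        long result = prior;
        for (long v : batch) {
            result += v;
        }
        return result;
    }

    public static void main(String[] args) {
        long state = init();
        state = reduce(state, List.of(1L, 2L, 3L)); // first batch
        state = reduce(state, List.of(10L));        // late-arriving batch
        System.out.println(state);                  // prints 16
    }
}
```

Because `reduce` only needs the cached prior result plus the new batch, a server that was offline for a day can catch up by replaying its backlog through the same fold.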
diff --git a/content/releases/1.1.2/Fault-tolerance.html b/content/releases/1.1.2/Fault-tolerance.html
index 72d6e9e..d024133 100644
--- a/content/releases/1.1.2/Fault-tolerance.html
+++ b/content/releases/1.1.2/Fault-tolerance.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains the design details of Storm that make it a fault-tolerant system.</p>
+<div class="documentation-content"><p>This page explains the design details of Storm that make it a fault-tolerant system.</p>
 
 <h2 id="what-happens-when-a-worker-dies">What happens when a worker dies?</h2>
 
@@ -169,7 +169,7 @@
 <h2 id="how-does-storm-guarantee-data-processing">How does Storm guarantee data processing?</h2>
 
 <p>Storm provides mechanisms to guarantee data processing even if nodes die or messages are lost. See <a href="Guaranteeing-message-processing.html">Guaranteeing message processing</a> for the details.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Guaranteeing-message-processing.html b/content/releases/1.1.2/Guaranteeing-message-processing.html
index ab97d78..4e2c355 100644
--- a/content/releases/1.1.2/Guaranteeing-message-processing.html
+++ b/content/releases/1.1.2/Guaranteeing-message-processing.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm offers several different levels of guaranteed message processing, including best effort, at least once, and exactly once through <a href="Trident-tutorial.html">Trident</a>.
+<div class="documentation-content"><p>Storm offers several different levels of guaranteed message processing, including best effort, at least once, and exactly once through <a href="Trident-tutorial.html">Trident</a>.
 This page describes how Storm can guarantee at least once processing.</p>
 
 <h3 id="what-does-it-mean-for-a-message-to-be-fully-processed">What does it mean for a message to be &quot;fully processed&quot;?</h3>
@@ -301,7 +301,7 @@
 <p>The second way is to remove reliability on a message by message basis. You can turn off tracking for an individual spout tuple by omitting a message id in the <code>SpoutOutputCollector.emit</code> method.</p>
 
 <p>Finally, if you don&#39;t care if a particular subset of the tuples downstream in the topology fail to be processed, you can emit them as unanchored tuples. Since they&#39;re not anchored to any spout tuples, they won&#39;t cause any spout tuples to fail if they aren&#39;t acked.</p>
-
+</div>
 
 
 	          </div>
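Storm's acker tracks the tuple tree for each spout tuple with a single 64-bit value: each tuple id is XORed in when the tuple is anchored and XORed in again when it is acked, so the value returns to zero exactly when everything has been acked. The plain-Java sketch below (not Storm's classes) illustrates why this works in constant memory regardless of ordering:

```java
// Illustration of the XOR trick behind Storm's acker: because x ^ x == 0,
// XORing each tuple id in once on anchor and once on ack leaves the
// accumulator at zero precisely when every anchored tuple has been acked.
public class AckTracker {
    private long ackVal = 0L;

    public void anchored(long tupleId) { ackVal ^= tupleId; }
    public void acked(long tupleId)    { ackVal ^= tupleId; }

    public boolean fullyProcessed()    { return ackVal == 0L; }

    public static void main(String[] args) {
        AckTracker tracker = new AckTracker();
        long t1 = 0x9e3779b97f4a7c15L; // tuple ids are random 64-bit numbers
        long t2 = 0x2545f4914f6cdd1dL;
        long t3 = 0x41c64e6dL;
        tracker.anchored(t1);
        tracker.anchored(t2);
        tracker.acked(t1);
        tracker.anchored(t3);   // anchoring and acking can interleave freely
        tracker.acked(t3);
        System.out.println(tracker.fullyProcessed()); // false: t2 still pending
        tracker.acked(t2);
        System.out.println(tracker.fullyProcessed()); // true
    }
}
```

Unanchored tuples simply never enter this accumulator, which is why their failure cannot fail any spout tuple.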
diff --git a/content/releases/1.1.2/Hooks.html b/content/releases/1.1.2/Hooks.html
index 12b465f..bea8f20 100644
--- a/content/releases/1.1.2/Hooks.html
+++ b/content/releases/1.1.2/Hooks.html
@@ -144,13 +144,13 @@
 
 <p class="post-meta"></p>
 
-<p>Storm provides hooks with which you can insert custom code to run on any number of events within Storm. You create a hook by extending the <a href="javadocs/org/apache/storm/hooks/BaseTaskHook.html">BaseTaskHook</a> class and overriding the appropriate method for the event you want to catch. There are two ways to register your hook:</p>
+<div class="documentation-content"><p>Storm provides hooks with which you can insert custom code to run on any number of events within Storm. You create a hook by extending the <a href="javadocs/org/apache/storm/hooks/BaseTaskHook.html">BaseTaskHook</a> class and overriding the appropriate method for the event you want to catch. There are two ways to register your hook:</p>
 
 <ol>
 <li>In the open method of your spout or the prepare method of your bolt, using the <a href="javadocs/org/apache/storm/task/TopologyContext.html#addTaskHook">TopologyContext#addTaskHook</a> method.</li>
 <li>Through the Storm configuration using the <a href="javadocs/org/apache/storm/Config.html#TOPOLOGY_AUTO_TASK_HOOKS">&quot;topology.auto.task.hooks&quot;</a> config. These hooks are automatically registered in every spout or bolt, and are useful for doing things like integrating with a custom monitoring system.</li>
 </ol>
-
+</div>
 
 
 	          </div>
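The registration pattern described above can be mimicked without a Storm dependency. In the sketch below every name (`TaskHook`, `Context`, `addTaskHook`, `emit`) is illustrative rather than Storm's actual API; it only shows the shape of registering a hook in `prepare()`/`open()` and having it observe events:

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java mimic of the task-hook pattern; names are illustrative.
public class HookDemo {
    interface TaskHook {                       // stands in for ITaskHook/BaseTaskHook
        default void emit(String tuple) {}
        default void boltAck(String tuple) {}
    }

    static class Context {                     // stands in for TopologyContext
        final List<TaskHook> hooks = new ArrayList<>();
        void addTaskHook(TaskHook hook) { hooks.add(hook); }
    }

    // A hook that counts emits -- the kind of thing a monitoring
    // integration registered via "topology.auto.task.hooks" might do.
    static class CountingHook implements TaskHook {
        int emitted = 0;
        @Override public void emit(String tuple) { emitted++; }
    }

    public static void main(String[] args) {
        Context context = new Context();       // as passed to prepare()/open()
        CountingHook hook = new CountingHook();
        context.addTaskHook(hook);             // option 1: register in prepare()

        for (TaskHook h : context.hooks) h.emit("tuple-1");
        for (TaskHook h : context.hooks) h.emit("tuple-2");
        System.out.println(hook.emitted);      // prints 2
    }
}
```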
diff --git a/content/releases/1.1.2/Implementation-docs.html b/content/releases/1.1.2/Implementation-docs.html
index ac18dc9..065f685 100644
--- a/content/releases/1.1.2/Implementation-docs.html
+++ b/content/releases/1.1.2/Implementation-docs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This section of the wiki is dedicated to explaining how Storm is implemented. You should have a good grasp of how to use Storm before reading these sections. </p>
+<div class="documentation-content"><p>This section of the wiki is dedicated to explaining how Storm is implemented. You should have a good grasp of how to use Storm before reading these sections. </p>
 
 <ul>
 <li><a href="Structure-of-the-codebase.html">Structure of the codebase</a></li>
@@ -154,7 +154,7 @@
 <li><a href="nimbus-ha-design.html">Nimbus HA</a></li>
 <li><a href="storm-sql-internal.html">Storm SQL</a></li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Installing-native-dependencies.html b/content/releases/1.1.2/Installing-native-dependencies.html
index 0f6116d..6727a90 100644
--- a/content/releases/1.1.2/Installing-native-dependencies.html
+++ b/content/releases/1.1.2/Installing-native-dependencies.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The native dependencies are only needed on actual Storm clusters. When running Storm in local mode, Storm uses a pure Java messaging system so that you don&#39;t need to install native dependencies on your development machine.</p>
+<div class="documentation-content"><p>The native dependencies are only needed on actual Storm clusters. When running Storm in local mode, Storm uses a pure Java messaging system so that you don&#39;t need to install native dependencies on your development machine.</p>
 
 <p>Installing ZeroMQ and JZMQ is usually straightforward. Sometimes, however, people run into issues with autoconf and get strange errors. If you run into any issues, please email the <a href="http://groups.google.com/group/storm-user">Storm mailing list</a> or come get help in the #storm-user room on freenode. </p>
 
@@ -175,7 +175,7 @@
 </ol>
 
 <p>If you run into any errors when running <code>./configure</code>, <a href="http://stackoverflow.com/questions/3522248/how-do-i-compile-jzmq-for-zeromq-on-osx">this thread</a> may provide a solution.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Joins.html b/content/releases/1.1.2/Joins.html
index 805eab6..f4d7887 100644
--- a/content/releases/1.1.2/Joins.html
+++ b/content/releases/1.1.2/Joins.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm core supports joining multiple data streams into one with the help of <code>JoinBolt</code>.
+<div class="documentation-content"><p>Storm core supports joining multiple data streams into one with the help of <code>JoinBolt</code>.
 <code>JoinBolt</code> is a windowed bolt, i.e., it waits for the configured window duration to match up the
 tuples among the streams being joined. This helps align the streams within a window boundary.</p>
 
@@ -272,7 +272,7 @@
 <li>Lastly, keep the window size to the minimum value necessary for solving the problem at hand.</li>
 </ul></li>
 </ol>
-
+</div>
 
 
 	          </div>
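Conceptually, a windowed join buffers tuples from each stream for one window and then matches them on the join key when the window closes. The plain-Java sketch below (not the `JoinBolt` API; names are illustrative) shows that core step for one window's worth of tuples:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// One window's worth of tuples from two streams, keyed by the join field;
// an inner join emits only pairs whose key appears on both sides.
public class WindowJoin {
    public static List<String> joinWindow(Map<String, String> users,
                                          Map<String, String> orders) {
        List<String> joined = new ArrayList<>();
        for (Map.Entry<String, String> order : orders.entrySet()) {
            String user = users.get(order.getKey());   // match on join key
            if (user != null) {
                joined.add(user + ":" + order.getValue());
            }
        }
        return joined;
    }

    public static void main(String[] args) {
        Map<String, String> users = new HashMap<>();   // stream 1, keyed by userId
        users.put("u1", "alice");
        users.put("u2", "bob");
        Map<String, String> orders = new HashMap<>();  // stream 2, keyed by userId
        orders.put("u1", "book");
        orders.put("u3", "pen");  // no matching user arrived in this window
        System.out.println(joinWindow(users, orders)); // [alice:book]
    }
}
```

This also makes the tuning advice concrete: the bigger the window, the more tuples must be buffered before any matching can happen, which is why the window size should be kept to the minimum the problem needs.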
diff --git a/content/releases/1.1.2/Kestrel-and-Storm.html b/content/releases/1.1.2/Kestrel-and-Storm.html
index ae4e26d..431068e 100644
--- a/content/releases/1.1.2/Kestrel-and-Storm.html
+++ b/content/releases/1.1.2/Kestrel-and-Storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains how to use Storm to consume items from a Kestrel cluster.</p>
+<div class="documentation-content"><p>This page explains how to use Storm to consume items from a Kestrel cluster.</p>
 
 <h2 id="preliminaries">Preliminaries</h2>
 
@@ -334,7 +334,7 @@
 Now execute the program to add items to the queue and launch the Storm topology. The order in which you launch the programs is of no importance.
 
 If you run the topology with TOPOLOGY_DEBUG you should see tuples being emitted in the topology.
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Lifecycle-of-a-topology.html b/content/releases/1.1.2/Lifecycle-of-a-topology.html
index 15d0690..1066552 100644
--- a/content/releases/1.1.2/Lifecycle-of-a-topology.html
+++ b/content/releases/1.1.2/Lifecycle-of-a-topology.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>(<strong>NOTE</strong>: this page is based on the 0.7.1 code; many things have changed since then, including a split between tasks and executors, and a reorganization of the code under <code>storm-core/src</code> rather than <code>src/</code>.)</p>
+<div class="documentation-content"><p>(<strong>NOTE</strong>: this page is based on the 0.7.1 code; many things have changed since then, including a split between tasks and executors, and a reorganization of the code under <code>storm-core/src</code> rather than <code>src/</code>.)</p>
 
 <p>This page explains in detail the lifecycle of a topology: from running the &quot;storm jar&quot; command, to uploading the topology to Nimbus, to the supervisors starting/stopping workers, to workers and tasks setting themselves up. It also explains how Nimbus monitors topologies and how topologies are shut down when they are killed.</p>
 
@@ -261,7 +261,7 @@
 <li>Removing a topology cleans out the assignment and static information from ZK <a href="https://github.com/apache/storm/blob/0.7.1/src/clj/org/apache/storm/daemon/nimbus.clj#L116">code</a></li>
 <li>A separate cleanup thread runs the <code>do-cleanup</code> function which will clean up the heartbeat dir and the jars/configs stored locally. <a href="https://github.com/apache/storm/blob/0.7.1/src/clj/org/apache/storm/daemon/nimbus.clj#L577">code</a></li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Local-mode.html b/content/releases/1.1.2/Local-mode.html
index 1ec6bb1..2bc9724 100644
--- a/content/releases/1.1.2/Local-mode.html
+++ b/content/releases/1.1.2/Local-mode.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Local mode simulates a Storm cluster in process and is useful for developing and testing topologies. Running topologies in local mode is similar to running topologies <a href="Running-topologies-on-a-production-cluster.html">on a cluster</a>. </p>
+<div class="documentation-content"><p>Local mode simulates a Storm cluster in process and is useful for developing and testing topologies. Running topologies in local mode is similar to running topologies <a href="Running-topologies-on-a-production-cluster.html">on a cluster</a>. </p>
 
 <p>To create an in-process cluster, simply use the <code>LocalCluster</code> class. For example:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kn">import</span> <span class="nn">org.apache.storm.LocalCluster</span><span class="o">;</span>
@@ -164,7 +164,7 @@
 <li><strong>Config.TOPOLOGY_MAX_TASK_PARALLELISM</strong>: This config puts a ceiling on the number of threads spawned for a single component. Oftentimes production topologies have a lot of parallelism (hundreds of threads), which places unreasonable load when trying to test the topology in local mode. This config lets you easily control that parallelism.</li>
 <li><strong>Config.TOPOLOGY_DEBUG</strong>: When this is set to true, Storm will log a message every time a tuple is emitted from any spout or bolt. This is extremely useful for debugging.</li>
 </ol>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Logs.html b/content/releases/1.1.2/Logs.html
index 929cb21..2096834 100644
--- a/content/releases/1.1.2/Logs.html
+++ b/content/releases/1.1.2/Logs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Logs in Storm are essential for tracking the status, operations, error messages and debug information for all the 
+<div class="documentation-content"><p>Logs in Storm are essential for tracking the status, operations, error messages and debug information for all the 
 daemons (e.g., nimbus, supervisor, logviewer, drpc, ui, pacemaker) and topologies&#39; workers.</p>
 
 <h3 id="location-of-the-logs">Location of the Logs</h3>
@@ -171,7 +171,7 @@
 <p>Search in a topology: a user can also search for a string across a certain topology by clicking the magnifying-glass icon at the top right corner of the UI page. The UI will then search all the supervisor nodes in a distributed way to find the matched string in all logs for this topology. The search can cover either normal text log files or rolled zip log files by checking/unchecking the &quot;Search archived logs:&quot; box. The matched results are shown on the UI with URL links directing the user to the matching logs on each supervisor node. This feature is very helpful for locating problematic supervisor nodes running a topology.</p>
 
 <p><img src="images/search-a-topology.png" alt="Search in a topology" title="Search in a topology"></p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Maven.html b/content/releases/1.1.2/Maven.html
index c00912b..5a63344 100644
--- a/content/releases/1.1.2/Maven.html
+++ b/content/releases/1.1.2/Maven.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>To develop topologies, you&#39;ll need the Storm jars on your classpath. You should either include the unpacked jars in the classpath for your project or use Maven to include Storm as a development dependency. Storm is hosted on Maven Central. To include Storm in your project as a development dependency, add the following to your pom.xml:</p>
+<div class="documentation-content"><p>To develop topologies, you&#39;ll need the Storm jars on your classpath. You should either include the unpacked jars in the classpath for your project or use Maven to include Storm as a development dependency. Storm is hosted on Maven Central. To include Storm in your project as a development dependency, add the following to your pom.xml:</p>
 <div class="highlight"><pre><code class="language-xml" data-lang="xml"><span class="nt">&lt;dependency&gt;</span>
   <span class="nt">&lt;groupId&gt;</span>org.apache.storm<span class="nt">&lt;/groupId&gt;</span>
   <span class="nt">&lt;artifactId&gt;</span>storm-core<span class="nt">&lt;/artifactId&gt;</span>
@@ -157,7 +157,7 @@
 <h3 id="developing-storm">Developing Storm</h3>
 
 <p>Please refer to <a href="http://github.com/apache/storm/blob/v1.1.2/DEVELOPER.md">DEVELOPER.md</a> for more details.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Message-passing-implementation.html b/content/releases/1.1.2/Message-passing-implementation.html
index f03886e..f1c9217 100644
--- a/content/releases/1.1.2/Message-passing-implementation.html
+++ b/content/releases/1.1.2/Message-passing-implementation.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>(Note: this walkthrough is out of date as of 0.8.0, which revamped the message passing infrastructure to be based on the Disruptor.)</p>
+<div class="documentation-content"><p>(Note: this walkthrough is out of date as of 0.8.0, which revamped the message passing infrastructure to be based on the Disruptor.)</p>
 
 <p>This page walks through how emitting and transferring tuples works in Storm.</p>
 
@@ -186,7 +186,7 @@
 </ul></li>
 </ul></li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Metrics.html b/content/releases/1.1.2/Metrics.html
index d223258..36ec608 100644
--- a/content/releases/1.1.2/Metrics.html
+++ b/content/releases/1.1.2/Metrics.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm exposes a metrics interface to report summary statistics across the full topology.
+<div class="documentation-content"><p>Storm exposes a metrics interface to report summary statistics across the full topology.
 The numbers you see on the UI come from some of these built-in metrics, but are reported through the worker heartbeats instead of through the IMetricsConsumer described below.</p>
 
 <h3 id="metric-types">Metric Types</h3>
@@ -466,7 +466,7 @@
 <li><code>newWorkerEvent</code> is 1 when a worker is first started and 0 all other times.  This can be used to tell when a worker has crashed and is restarted.</li>
 <li><code>startTimeSecs</code> is when the worker started in seconds since the epoch</li>
 </ul>
-
+</div>
 
 
 	          </div>
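The simplest metric type is a counter that is read and reset on each metrics interval, so a consumer sees per-bucket deltas rather than running totals. Below is a plain-Java re-implementation of that shape (modeled on, but not identical to, Storm's metric classes):

```java
// Minimal sketch of a count metric: incremented by the component during
// a metrics interval, then drained by getValueAndReset() when the
// metrics bucket is reported, so each report covers one time window.
public class CountMetricSketch {
    private long value = 0L;

    public void incr()          { value++; }
    public void incrBy(long by) { value += by; }

    // Called once per metrics interval by the reporting machinery.
    public long getValueAndReset() {
        long current = value;
        value = 0L;
        return current;
    }

    public static void main(String[] args) {
        CountMetricSketch tuplesProcessed = new CountMetricSketch();
        tuplesProcessed.incr();
        tuplesProcessed.incrBy(4);
        System.out.println(tuplesProcessed.getValueAndReset()); // prints 5
        System.out.println(tuplesProcessed.getValueAndReset()); // prints 0
    }
}
```

The read-and-reset design is what lets a gauge-like metric such as `newWorkerEvent` report 1 once after a restart and 0 afterwards.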
diff --git a/content/releases/1.1.2/Multilang-protocol.html b/content/releases/1.1.2/Multilang-protocol.html
index 3b0d91a..95ff0db 100644
--- a/content/releases/1.1.2/Multilang-protocol.html
+++ b/content/releases/1.1.2/Multilang-protocol.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains the multilang protocol as of Storm 0.7.1. Versions prior to 0.7.1 used a somewhat different protocol, documented <a href="Storm-multi-language-protocol-(versions-0.7.0-and-below).html">here</a>.</p>
+<div class="documentation-content"><p>This page explains the multilang protocol as of Storm 0.7.1. Versions prior to 0.7.1 used a somewhat different protocol, documented <a href="Storm-multi-language-protocol-(versions-0.7.0-and-below).html">here</a>.</p>
 
 <h1 id="storm-multi-language-protocol">Storm Multi-Language Protocol</h1>
 
@@ -436,7 +436,7 @@
 </code></pre></div>
 <p>When subprocess receives heartbeat tuple, it must send a <code>sync</code> command back to
 ShellBolt.</p>
-
+</div>
 
 
 	          </div>
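The multilang protocol frames each JSON message over stdin/stdout with a line containing only <code>end</code>. A plain-Java sketch of the read loop a shell component uses to pull one framed message off its input (the JSON parsing itself is left out):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

// Reads one multilang-framed message: accumulate lines until the
// terminator line "end", then return the accumulated JSON payload.
public class MultilangFraming {
    public static String readMessage(BufferedReader in) throws IOException {
        StringBuilder message = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) {
            if (line.equals("end")) {
                return message.toString();   // one complete JSON payload
            }
            message.append(line).append('\n');
        }
        return null;                         // stream closed mid-message
    }

    public static void main(String[] args) throws IOException {
        // e.g. the sync command a subprocess sends back after a heartbeat
        String wire = "{\"command\": \"sync\"}\nend\n";
        BufferedReader in = new BufferedReader(new StringReader(wire));
        System.out.println(readMessage(in).trim()); // {"command": "sync"}
    }
}
```

Both sides of the protocol (ShellBolt and the subprocess) use this same framing in each direction.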
diff --git a/content/releases/1.1.2/Pacemaker.html b/content/releases/1.1.2/Pacemaker.html
index 1486feb..504c01b 100644
--- a/content/releases/1.1.2/Pacemaker.html
+++ b/content/releases/1.1.2/Pacemaker.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="introduction">Introduction</h3>
+<div class="documentation-content"><h3 id="introduction">Introduction</h3>
 
 <p>Pacemaker is a Storm daemon designed to process heartbeats from workers. As Storm is scaled up, ZooKeeper begins to become a bottleneck due to the high volume of writes from workers doing heartbeats. Lots of writes to disk and too much traffic across the network are generated as ZooKeeper tries to maintain consistency.</p>
 
@@ -258,7 +258,7 @@
 On a 270 supervisor cluster, fully scheduled with topologies, Pacemaker resource utilization was 70% of one core and nearly 1GiB of RAM on a machine with 4 <code>Intel(R) Xeon(R) CPU E5530 @ 2.40GHz</code> and 24GiB of RAM.</p>
 
 <p>Pacemaker now supports HA. Multiple Pacemaker instances can be used at once in a Storm cluster to allow massive scalability. Just include the names of the Pacemaker hosts in the pacemaker.servers config, and workers and Nimbus will start communicating with them. They&#39;re fault tolerant as well. The system keeps on working as long as there is at least one Pacemaker left running, provided it can handle the load.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Powered-By.html b/content/releases/1.1.2/Powered-By.html
index 2e26265..38a5161 100644
--- a/content/releases/1.1.2/Powered-By.html
+++ b/content/releases/1.1.2/Powered-By.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Want to be added to this page? Send an email <a href="mailto:nathan.marz@gmail.com">here</a>.</p>
+<div class="documentation-content"><p>Want to be added to this page? Send an email <a href="mailto:nathan.marz@gmail.com">here</a>.</p>
 
 <table>
 
@@ -1169,7 +1169,7 @@
 
 
 </table>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Project-ideas.html b/content/releases/1.1.2/Project-ideas.html
index 1e9beec..2178686 100644
--- a/content/releases/1.1.2/Project-ideas.html
+++ b/content/releases/1.1.2/Project-ideas.html
@@ -144,12 +144,12 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><strong>DSLs for non-JVM languages:</strong> These DSLs should be all-inclusive and not require any Java for the creation of topologies, spouts, or bolts. Since topologies are <a href="http://thrift.apache.org/">Thrift</a> structs, Nimbus is a Thrift service, and bolts can be written in any language, this is possible.</li>
 <li><strong>Online machine learning algorithms:</strong> Something like <a href="http://mahout.apache.org/">Mahout</a> but for online algorithms</li>
 <li><strong>Suite of performance benchmarks:</strong> These benchmarks should test Storm&#39;s performance on CPU and IO intensive workloads. There should be benchmarks for different classes of applications, such as stream processing (where throughput is the priority) and distributed RPC (where latency is the priority). </li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Rationale.html b/content/releases/1.1.2/Rationale.html
index dab35c1..0a0b153 100644
--- a/content/releases/1.1.2/Rationale.html
+++ b/content/releases/1.1.2/Rationale.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The past decade has seen a revolution in data processing. MapReduce, Hadoop, and related technologies have made it possible to store and process data at scales previously unthinkable. Unfortunately, these data processing technologies are not realtime systems, nor are they meant to be. There&#39;s no hack that will turn Hadoop into a realtime system; realtime data processing has a fundamentally different set of requirements than batch processing.</p>
+<div class="documentation-content"><p>The past decade has seen a revolution in data processing. MapReduce, Hadoop, and related technologies have made it possible to store and process data at scales previously unthinkable. Unfortunately, these data processing technologies are not realtime systems, nor are they meant to be. There&#39;s no hack that will turn Hadoop into a realtime system; realtime data processing has a fundamentally different set of requirements than batch processing.</p>
 
 <p>However, realtime data processing at massive scale is becoming more and more of a requirement for businesses. The lack of a &quot;Hadoop of realtime&quot; has become the biggest hole in the data processing ecosystem.</p>
 
@@ -176,7 +176,7 @@
 <li><strong>Fault-tolerant</strong>: If there are faults during execution of your computation, Storm will reassign tasks as necessary. Storm makes sure that a computation can run forever (or until you kill the computation).</li>
 <li><strong>Programming language agnostic</strong>: Robust and scalable realtime processing shouldn&#39;t be limited to a single platform. Storm topologies and processing components can be defined in any language, making Storm accessible to nearly anyone.</li>
 </ol>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Resource_Aware_Scheduler_overview.html b/content/releases/1.1.2/Resource_Aware_Scheduler_overview.html
index 3cd5533..e20ef70 100644
--- a/content/releases/1.1.2/Resource_Aware_Scheduler_overview.html
+++ b/content/releases/1.1.2/Resource_Aware_Scheduler_overview.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="introduction">Introduction</h1>
+<div class="documentation-content"><h1 id="introduction">Introduction</h1>
 
 <p>The purpose of this document is to describe the Resource Aware Scheduler for the Storm distributed real-time computation system. It provides a high-level description of the resource-aware scheduler in Storm. Some of the benefits of using a resource-aware scheduler on top of Storm are outlined in the following presentation at Hadoop Summit 2016:</p>
 
@@ -617,7 +617,7 @@
 <td><img src="images/ras_new_strategy_runtime_yahoo.png" alt=""></td>
 </tr>
 </tbody></table>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Running-topologies-on-a-production-cluster.html b/content/releases/1.1.2/Running-topologies-on-a-production-cluster.html
index 0a04efd..b662a14 100644
--- a/content/releases/1.1.2/Running-topologies-on-a-production-cluster.html
+++ b/content/releases/1.1.2/Running-topologies-on-a-production-cluster.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Running topologies on a production cluster is similar to running in <a href="Local-mode.html">Local mode</a>. Here are the steps:</p>
+<div class="documentation-content"><p>Running topologies on a production cluster is similar to running in <a href="Local-mode.html">Local mode</a>. Here are the steps:</p>
 
 <p>1) Define the topology (Use <a href="javadocs/org/apache/storm/topology/TopologyBuilder.html">TopologyBuilder</a> if defining using Java)</p>
 
@@ -212,7 +212,7 @@
 <p>The best place to monitor a topology is using the Storm UI. The Storm UI provides information about errors happening in tasks and fine-grained stats on the throughput and latency performance of each component of each running topology.</p>
 
 <p>You can also look at the worker logs on the cluster machines.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/SECURITY.html b/content/releases/1.1.2/SECURITY.html
index 9de32fd..c671e44 100644
--- a/content/releases/1.1.2/SECURITY.html
+++ b/content/releases/1.1.2/SECURITY.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="running-apache-storm-securely">Running Apache Storm Securely</h1>
+<div class="documentation-content"><h1 id="running-apache-storm-securely">Running Apache Storm Securely</h1>
 
 <p>Apache Storm offers a range of configuration options when trying to secure
 your cluster. By default all authentication and authorization are disabled but 
@@ -681,7 +681,7 @@
  | storm.zookeeper.topology.auth.payload | A string representing the payload for topology Zookeeper authentication. |</p>
 
 <p>Note: If storm.zookeeper.topology.auth.payload isn&#39;t set, Storm will generate a ZooKeeper secret payload for MD5-digest with the generateZookeeperDigestSecretPayload() method.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/STORM-UI-REST-API.html b/content/releases/1.1.2/STORM-UI-REST-API.html
index 1f73954..27c62ed 100644
--- a/content/releases/1.1.2/STORM-UI-REST-API.html
+++ b/content/releases/1.1.2/STORM-UI-REST-API.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The Storm UI daemon provides a REST API that allows you to interact with a Storm cluster, which includes retrieving
+<div class="documentation-content"><p>The Storm UI daemon provides a REST API that allows you to interact with a Storm cluster, which includes retrieving
 metrics data and configuration information as well as management operations such as starting or stopping topologies.</p>
 
 <h1 id="data-format">Data format</h1>
@@ -2936,7 +2936,7 @@
   </span><span class="s2">"error"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Internal Server Error"</span><span class="p">,</span><span class="w">
  </span><span class="s2">"errorMessage"</span><span class="p">:</span><span class="w"> </span><span class="s2">"java.lang.NullPointerException</span><span class="se">\n\t</span><span class="s2">at clojure.core$name.invoke(core.clj:1505)</span><span class="se">\n\t</span><span class="s2">at org.apache.storm.ui.core$component_page.invoke(core.clj:752)</span><span class="se">\n\t</span><span class="s2">at org.apache.storm.ui.core$fn__7766.invoke(core.clj:782)</span><span class="se">\n\t</span><span class="s2">at compojure.core$make_route$fn__5755.invoke(core.clj:93)</span><span class="se">\n\t</span><span class="s2">at compojure.core$if_route$fn__5743.invoke(core.clj:39)</span><span class="se">\n\t</span><span class="s2">at compojure.core$if_method$fn__5736.invoke(core.clj:24)</span><span class="se">\n\t</span><span class="s2">at compojure.core$routing$fn__5761.invoke(core.clj:106)</span><span class="se">\n\t</span><span class="s2">at clojure.core$some.invoke(core.clj:2443)</span><span class="se">\n\t</span><span class="s2">at compojure.core$routing.doInvoke(core.clj:106)</span><span class="se">\n\t</span><span class="s2">at clojure.lang.RestFn.applyTo(RestFn.java:139)</span><span class="se">\n\t</span><span class="s2">at clojure.core$apply.invoke(core.clj:619)</span><span class="se">\n\t</span><span class="s2">at compojure.core$routes$fn__5765.invoke(core.clj:111)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.reload$wrap_reload$fn__6880.invoke(reload.clj:14)</span><span class="se">\n\t</span><span class="s2">at org.apache.storm.ui.core$catch_errors$fn__7800.invoke(core.clj:836)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.keyword_params$wrap_keyword_params$fn__6319.invoke(keyword_params.clj:27)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.nested_params$wrap_nested_params$fn__6358.invoke(nested_params.clj:65)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.params$wrap_params$fn__6291.invoke(params.clj:55)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.multipart_params$wrap_multipart_params$fn__6386.invoke(multipart_params.clj:103)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.flash$wrap_flash$fn__6675.invoke(flash.clj:14)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.session$wrap_session$fn__6664.invoke(session.clj:43)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.cookies$wrap_cookies$fn__6595.invoke(cookies.clj:160)</span><span class="se">\n\t</span><span class="s2">at ring.adapter.jetty$proxy_handler$fn__6112.invoke(jetty.clj:16)</span><span class="se">\n\t</span><span class="s2">at ring.adapter.jetty.proxy$org.mortbay.jetty.handler.AbstractHandler$0.handle(Unknown Source)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.Server.handle(Server.java:326)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)</span><span class="se">\n</span><span class="s2">"</span><span class="w">
 </span><span class="p">}</span><span class="w">
-</span></code></pre></div>
+</span></code></pre></div></div>
 
 
 	          </div>
diff --git "a/content/releases/1.1.2/Serialization-\050prior-to-0.6.0\051.html" "b/content/releases/1.1.2/Serialization-\050prior-to-0.6.0\051.html"
index 61de27b..470d891 100644
--- "a/content/releases/1.1.2/Serialization-\050prior-to-0.6.0\051.html"
+++ "b/content/releases/1.1.2/Serialization-\050prior-to-0.6.0\051.html"
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Tuples can be comprised of objects of any types. Since Storm is a distributed system, it needs to know how to serialize and deserialize objects when they&#39;re passed between tasks. By default Storm can serialize ints, shorts, longs, floats, doubles, bools, bytes, strings, and byte arrays, but if you want to use another type in your tuples, you&#39;ll need to implement a custom serializer.</p>
+<div class="documentation-content"><p>Tuples can be comprised of objects of any types. Since Storm is a distributed system, it needs to know how to serialize and deserialize objects when they&#39;re passed between tasks. By default Storm can serialize ints, shorts, longs, floats, doubles, bools, bytes, strings, and byte arrays, but if you want to use another type in your tuples, you&#39;ll need to implement a custom serializer.</p>
 
 <h3 id="dynamic-typing">Dynamic typing</h3>
 
@@ -188,7 +188,7 @@
 <p>Storm provides helpers for registering serializers in a topology config. The <a href="javadocs/backtype/storm/Config.html">Config</a> class has a method called <code>addSerialization</code> that takes in a serializer class to add to the config.</p>
 
 <p>There&#39;s an advanced config called Config.TOPOLOGY_SKIP_MISSING_SERIALIZATIONS. If you set this to true, Storm will ignore any serializations that are registered but do not have their code available on the classpath. Otherwise, Storm will throw errors when it can&#39;t find a serialization. This is useful if you run many topologies on a cluster that each have different serializations, but you want to declare all the serializations across all topologies in the <code>storm.yaml</code> files.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Serialization.html b/content/releases/1.1.2/Serialization.html
index 22cbee2..b4c4c13 100644
--- a/content/releases/1.1.2/Serialization.html
+++ b/content/releases/1.1.2/Serialization.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page is about how the serialization system in Storm works for versions 0.6.0 and onwards. Storm used a different serialization system prior to 0.6.0 which is documented on <a href="Serialization-(prior-to-0.6.0).html">Serialization (prior to 0.6.0)</a>. </p>
+<div class="documentation-content"><p>This page is about how the serialization system in Storm works for versions 0.6.0 and onwards. Storm used a different serialization system prior to 0.6.0 which is documented on <a href="Serialization-(prior-to-0.6.0).html">Serialization (prior to 0.6.0)</a>. </p>
 
 <p>Tuples can be comprised of objects of any types. Since Storm is a distributed system, it needs to know how to serialize and deserialize objects when they&#39;re passed between tasks.</p>
 
@@ -200,7 +200,7 @@
 <p>When a topology is submitted, a single set of serializations is chosen to be used by all components in the topology for sending messages. This is done by merging the component-specific serializer registrations with the regular set of serialization registrations. If two components define serializers for the same class, one of the serializers is chosen arbitrarily.</p>
 
 <p>To force a serializer for a particular class if there&#39;s a conflict between two component-specific registrations, just define the serializer you want to use in the topology-specific configuration. The topology-specific configuration has precedence over component-specific configurations for serialization registrations.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Serializers.html b/content/releases/1.1.2/Serializers.html
index ead48ec..35e0a77 100644
--- a/content/releases/1.1.2/Serializers.html
+++ b/content/releases/1.1.2/Serializers.html
@@ -144,10 +144,10 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="https://github.com/rapportive-oss/storm-json">storm-json</a>: Simple JSON serializer for Storm</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Setting-up-a-Storm-cluster.html b/content/releases/1.1.2/Setting-up-a-Storm-cluster.html
index dd96538..7dcd2e3 100644
--- a/content/releases/1.1.2/Setting-up-a-Storm-cluster.html
+++ b/content/releases/1.1.2/Setting-up-a-Storm-cluster.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page outlines the steps for getting a Storm cluster up and running. If you&#39;re on AWS, you should check out the <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> project. <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> completely automates the provisioning, configuration, and installation of Storm clusters on EC2. It also sets up Ganglia for you so you can monitor CPU, disk, and network usage.</p>
+<div class="documentation-content"><p>This page outlines the steps for getting a Storm cluster up and running. If you&#39;re on AWS, you should check out the <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> project. <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> completely automates the provisioning, configuration, and installation of Storm clusters on EC2. It also sets up Ganglia for you so you can monitor CPU, disk, and network usage.</p>
 
 <p>If you run into difficulties with your Storm cluster, first check for a solution is in the <a href="Troubleshooting.html">Troubleshooting</a> page. Otherwise, email the mailing list.</p>
 
@@ -246,7 +246,7 @@
 </ol>
 
 <p>As you can see, running the daemons is very straightforward. The daemons will log to the logs/ directory in wherever you extracted the Storm release.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Setting-up-development-environment.html b/content/releases/1.1.2/Setting-up-development-environment.html
index 7035ae4..60c125c 100644
--- a/content/releases/1.1.2/Setting-up-development-environment.html
+++ b/content/releases/1.1.2/Setting-up-development-environment.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page outlines what you need to do to get a Storm development environment set up. In summary, the steps are:</p>
+<div class="documentation-content"><p>This page outlines what you need to do to get a Storm development environment set up. In summary, the steps are:</p>
 
 <ol>
 <li>Download a <a href="..//downloads.html">Storm release</a> , unpack it, and put the unpacked <code>bin/</code> directory on your PATH</li>
@@ -171,7 +171,7 @@
 
 <p>The previous step installed the <code>storm</code> client on your machine which is used to communicate with remote Storm clusters. Now all you have to do is tell the client which Storm cluster to talk to. To do this, all you have to do is put the host address of the master in the <code>~/.storm/storm.yaml</code> file. It should look something like this:</p>
 <div class="highlight"><pre><code class="language-" data-lang="">nimbus.seeds: ["123.45.678.890"]
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Spout-implementations.html b/content/releases/1.1.2/Spout-implementations.html
index 2f77892..67b5b97 100644
--- a/content/releases/1.1.2/Spout-implementations.html
+++ b/content/releases/1.1.2/Spout-implementations.html
@@ -144,14 +144,14 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="https://github.com/nathanmarz/storm-kestrel">storm-kestrel</a>: Adapter to use Kestrel as a spout</li>
 <li><a href="https://github.com/rapportive-oss/storm-amqp-spout">storm-amqp-spout</a>: Adapter to use AMQP source as a spout</li>
 <li><a href="https://github.com/ptgoetz/storm-jms">storm-jms</a>: Adapter to use a JMS source as a spout</li>
 <li><a href="https://github.com/sorenmacbeth/storm-redis-pubsub">storm-redis-pubsub</a>: A spout that subscribes to a Redis pubsub stream</li>
 <li><a href="https://github.com/haitaoyao/storm-beanstalkd-spout">storm-beanstalkd-spout</a>: A spout that subscribes to a beanstalkd queue</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/State-checkpointing.html b/content/releases/1.1.2/State-checkpointing.html
index c3e696b..9455182 100644
--- a/content/releases/1.1.2/State-checkpointing.html
+++ b/content/releases/1.1.2/State-checkpointing.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="state-support-in-core-storm">State support in core storm</h1>
+<div class="documentation-content"><h1 id="state-support-in-core-storm">State support in core storm</h1>
 
 <p>Storm core has abstractions for bolts to save and retrieve the state of its operations. There is a default in-memory
 based state implementation and also a Redis backed implementation that provides state persistence.</p>
@@ -303,7 +303,7 @@
 a <code>StateProvider</code> implementation which can load and return the state based on the namespace. Each state belongs to a unique namespace.
 The namespace is typically unique per task so that each task can have its own state. The StateProvider and the corresponding
 State implementation should be available in the class path of Storm (by placing them in the extlib directory).</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Storm-Scheduler.html b/content/releases/1.1.2/Storm-Scheduler.html
index 1d22766..a9fb237 100644
--- a/content/releases/1.1.2/Storm-Scheduler.html
+++ b/content/releases/1.1.2/Storm-Scheduler.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm now has 4 kinds of built-in schedulers: <a href="http://github.com/apache/storm/blob/v1.1.2/storm-core/src/clj/org/apache/storm/scheduler/DefaultScheduler.clj">DefaultScheduler</a>, <a href="http://github.com/apache/storm/blob/v1.1.2/storm-core/src/clj/org/apache/storm/scheduler/IsolationScheduler.clj">IsolationScheduler</a>, <a href="http://github.com/apache/storm/blob/v1.1.2/storm-core/src/jvm/org/apache/storm/scheduler/multitenant/MultitenantScheduler.java">MultitenantScheduler</a>, <a href="Resource_Aware_Scheduler_overview.html">ResourceAwareScheduler</a>. </p>
+<div class="documentation-content"><p>Storm now has 4 kinds of built-in schedulers: <a href="http://github.com/apache/storm/blob/v1.1.2/storm-core/src/clj/org/apache/storm/scheduler/DefaultScheduler.clj">DefaultScheduler</a>, <a href="http://github.com/apache/storm/blob/v1.1.2/storm-core/src/clj/org/apache/storm/scheduler/IsolationScheduler.clj">IsolationScheduler</a>, <a href="http://github.com/apache/storm/blob/v1.1.2/storm-core/src/jvm/org/apache/storm/scheduler/multitenant/MultitenantScheduler.java">MultitenantScheduler</a>, <a href="Resource_Aware_Scheduler_overview.html">ResourceAwareScheduler</a>. </p>
 
 <h2 id="pluggable-scheduler">Pluggable scheduler</h2>
 
@@ -163,7 +163,7 @@
 <p>Any topologies submitted to the cluster not listed there will not be isolated. Note that there is no way for a user of Storm to affect their isolation settings – this is only allowed by the administrator of the cluster (this is very much intentional).</p>
 
 <p>The isolation scheduler solves the multi-tenancy problem – avoiding resource contention between topologies – by providing full isolation between topologies. The intention is that &quot;productionized&quot; topologies should be listed in the isolation config, and test or in-development topologies should not. The remaining machines on the cluster serve the dual role of failover for isolated topologies and for running the non-isolated topologies.</p>
-
+</div>
 
 
 	          </div>
diff --git "a/content/releases/1.1.2/Storm-multi-language-protocol-\050versions-0.7.0-and-below\051.html" "b/content/releases/1.1.2/Storm-multi-language-protocol-\050versions-0.7.0-and-below\051.html"
index 215e239..463fb20 100644
--- "a/content/releases/1.1.2/Storm-multi-language-protocol-\050versions-0.7.0-and-below\051.html"
+++ "b/content/releases/1.1.2/Storm-multi-language-protocol-\050versions-0.7.0-and-below\051.html"
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains the multilang protocol for versions 0.7.0 and below. The protocol changed in version 0.7.1.</p>
+<div class="documentation-content"><p>This page explains the multilang protocol for versions 0.7.0 and below. The protocol changed in version 0.7.1.</p>
 
 <h1 id="storm-multi-language-protocol">Storm Multi-Language Protocol</h1>
 
@@ -253,7 +253,7 @@
 <p>Note: This command is not JSON encoded, it is sent as a simple string.</p>
 
 <p>This lets the parent bolt know that the script has finished processing and is ready for another tuple.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Structure-of-the-codebase.html b/content/releases/1.1.2/Structure-of-the-codebase.html
index 7d8e81c..92dba98 100644
--- a/content/releases/1.1.2/Structure-of-the-codebase.html
+++ b/content/releases/1.1.2/Structure-of-the-codebase.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>There are three distinct layers to Storm&#39;s codebase.</p>
+<div class="documentation-content"><p>There are three distinct layers to Storm&#39;s codebase.</p>
 
 <p>First, Storm was designed from the very beginning to be compatible with multiple languages. Nimbus is a Thrift service and topologies are defined as Thrift structures. The usage of Thrift allows Storm to be used from any language.</p>
 
@@ -287,7 +287,7 @@
 <p><a href="http://github.com/apache/storm/blob/v1.1.2/storm-core/src/clj/org/apache/storm/util.clj">org.apache.storm.util</a>: Contains generic utility functions used throughout the code base.</p>
 
 <p><a href="http://github.com/apache/storm/blob/v1.1.2/storm-core/src/clj/org/apache/storm/zookeeper.clj">org.apache.storm.zookeeper</a>: Clojure wrapper around the Zookeeper API and implements some &quot;high-level&quot; stuff like &quot;mkdirs&quot; and &quot;delete-recursive&quot;.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Support-for-non-java-languages.html b/content/releases/1.1.2/Support-for-non-java-languages.html
index 20efd49..46a7de2 100644
--- a/content/releases/1.1.2/Support-for-non-java-languages.html
+++ b/content/releases/1.1.2/Support-for-non-java-languages.html
@@ -144,13 +144,13 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="https://github.com/velvia/ScalaStorm">Scala DSL</a></li>
 <li><a href="https://github.com/colinsurprenant/storm-jruby">JRuby DSL</a></li>
 <li><a href="Clojure-DSL.html">Clojure DSL</a></li>
 <li><a href="https://github.com/gphat/io-storm">io-storm</a>: Perl multilang adapter</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Transactional-topologies.html b/content/releases/1.1.2/Transactional-topologies.html
index 2164c6f..e161f67 100644
--- a/content/releases/1.1.2/Transactional-topologies.html
+++ b/content/releases/1.1.2/Transactional-topologies.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p><strong>NOTE</strong>: Transactional topologies have been deprecated -- use the <a href="Trident-tutorial.html">Trident</a> framework instead.</p>
+<div class="documentation-content"><p><strong>NOTE</strong>: Transactional topologies have been deprecated -- use the <a href="Trident-tutorial.html">Trident</a> framework instead.</p>
 
 <hr>
 
@@ -510,7 +510,7 @@
 <li>so it can&#39;t call finishbatch until it&#39;s received all tuples from all subscribed components AND its received the commit stream tuple (for committers). this ensures that it can&#39;t prematurely call finishBatch</li>
 </ul></li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Trident-API-Overview.html b/content/releases/1.1.2/Trident-API-Overview.html
index 1be4956..f1d5a9c 100644
--- a/content/releases/1.1.2/Trident-API-Overview.html
+++ b/content/releases/1.1.2/Trident-API-Overview.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The core data model in Trident is the &quot;Stream&quot;, processed as a series of batches. A stream is partitioned among the nodes in the cluster, and operations applied to a stream are applied in parallel across each partition.</p>
+<div class="documentation-content"><p>The core data model in Trident is the &quot;Stream&quot;, processed as a series of batches. A stream is partitioned among the nodes in the cluster, and operations applied to a stream are applied in parallel across each partition.</p>
 
 <p>There are five kinds of operations in Trident:</p>
 
@@ -669,7 +669,7 @@
 <p>You might be wondering – how do you do something like a &quot;windowed join&quot;, where tuples from one side of the join are joined against the last hour of tuples from the other side of the join.</p>
 
 <p>To do this, you would make use of partitionPersist and stateQuery. The last hour of tuples from one side of the join would be stored and rotated in a source of state, keyed by the join field. Then the stateQuery would do lookups by the join field to perform the &quot;join&quot;.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Trident-RAS-API.html b/content/releases/1.1.2/Trident-RAS-API.html
index 2ad8546..1700759 100644
--- a/content/releases/1.1.2/Trident-RAS-API.html
+++ b/content/releases/1.1.2/Trident-RAS-API.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="trident-ras-api">Trident RAS API</h2>
+<div class="documentation-content"><h2 id="trident-ras-api">Trident RAS API</h2>
 
 <p>The Trident RAS (Resource Aware Scheduler) API provides a mechanism to allow users to specify the resource consumption of a Trident topology. The API looks exactly like the base RAS API, only it is called on Trident Streams instead of Bolts and Spouts.</p>
 
@@ -192,7 +192,7 @@
 <p>Resource declarations may be called after any operation. The operations without explicit resources will get the defaults. If you choose to set resources for only some operations, defaults must be declared, or topology submission will fail.
 Resource declarations have the same <em>boundaries</em> as parallelism hints. They don&#39;t cross any groupings, shufflings, or any other kind of repartitioning.
 Resources are declared per operation, but get combined within boundaries.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Trident-spouts.html b/content/releases/1.1.2/Trident-spouts.html
index 9eac47c..7f62f1a 100644
--- a/content/releases/1.1.2/Trident-spouts.html
+++ b/content/releases/1.1.2/Trident-spouts.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="trident-spouts">Trident spouts</h1>
+<div class="documentation-content"><h1 id="trident-spouts">Trident spouts</h1>
 
 <p>Like in the vanilla Storm API, spouts are the source of streams in a Trident topology. On top of the vanilla Storm spouts, Trident exposes additional APIs for more sophisticated spouts.</p>
 
@@ -182,7 +182,7 @@
 </ol>
 
 <p>And, like mentioned in the beginning of this tutorial, you can use regular IRichSpout&#39;s as well.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Trident-state.html b/content/releases/1.1.2/Trident-state.html
index 325c150..0bf639c 100644
--- a/content/releases/1.1.2/Trident-state.html
+++ b/content/releases/1.1.2/Trident-state.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Trident has first-class abstractions for reading from and writing to stateful sources. The state can either be internal to the topology – e.g., kept in-memory and backed by HDFS – or externally stored in a database like Memcached or Cassandra. There&#39;s no difference in the Trident API for either case.</p>
+<div class="documentation-content"><p>Trident has first-class abstractions for reading from and writing to stateful sources. The state can either be internal to the topology – e.g., kept in-memory and backed by HDFS – or externally stored in a database like Memcached or Cassandra. There&#39;s no difference in the Trident API for either case.</p>
 
 <p>Trident manages state in a fault-tolerant way so that state updates are idempotent in the face of retries and failures. This lets you reason about Trident topologies as if each message were processed exactly-once.</p>
 
@@ -415,7 +415,7 @@
 <p>Finally, Trident provides the <a href="http://github.com/apache/storm/blob/v1.1.2/storm-core/src/jvm/org/apache/storm/trident/state/map/SnapshottableMap.java">SnapshottableMap</a> class that turns a MapState into a Snapshottable object, by storing global aggregations into a fixed key.</p>
 
 <p>Take a look at the implementation of <a href="https://github.com/nathanmarz/trident-memcached/blob/master/src/jvm/trident/memcached/MemcachedState.java">MemcachedState</a> to see how all these utilities can be put together to make a high performance MapState implementation. MemcachedState allows you to choose between opaque transactional, transactional, and non-transactional semantics.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Trident-tutorial.html b/content/releases/1.1.2/Trident-tutorial.html
index ca8a161..7ac0c37 100644
--- a/content/releases/1.1.2/Trident-tutorial.html
+++ b/content/releases/1.1.2/Trident-tutorial.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Trident is a high-level abstraction for doing realtime computing on top of Storm. It allows you to seamlessly intermix high throughput (millions of messages per second), stateful stream processing with low latency distributed querying. If you&#39;re familiar with high level batch processing tools like Pig or Cascading, the concepts of Trident will be very familiar – Trident has joins, aggregations, grouping, functions, and filters. In addition to these, Trident adds primitives for doing stateful, incremental processing on top of any database or persistence store. Trident has consistent, exactly-once semantics, so it is easy to reason about Trident topologies.</p>
+<div class="documentation-content"><p>Trident is a high-level abstraction for doing realtime computing on top of Storm. It allows you to seamlessly intermix high throughput (millions of messages per second), stateful stream processing with low latency distributed querying. If you&#39;re familiar with high level batch processing tools like Pig or Cascading, the concepts of Trident will be very familiar – Trident has joins, aggregations, grouping, functions, and filters. In addition to these, Trident adds primitives for doing stateful, incremental processing on top of any database or persistence store. Trident has consistent, exactly-once semantics, so it is easy to reason about Trident topologies.</p>
 
 <h2 id="illustrative-example">Illustrative example</h2>
 
@@ -356,7 +356,7 @@
 <h2 id="conclusion">Conclusion</h2>
 
 <p>Trident makes realtime computation elegant. You&#39;ve seen how high throughput stream processing, state manipulation, and low-latency querying can be seamlessly intermixed via Trident&#39;s API. Trident lets you express your realtime computations in a natural way while still getting maximal performance.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Troubleshooting.html b/content/releases/1.1.2/Troubleshooting.html
index e7d877a..1b386ad 100644
--- a/content/releases/1.1.2/Troubleshooting.html
+++ b/content/releases/1.1.2/Troubleshooting.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page lists issues people have run into when using Storm along with their solutions.</p>
+<div class="documentation-content"><p>This page lists issues people have run into when using Storm along with their solutions.</p>
 
 <h3 id="worker-processes-are-crashing-on-startup-with-no-stack-trace">Worker processes are crashing on startup with no stack trace</h3>
 
@@ -279,7 +279,7 @@
 <ul>
 <li>This means that you&#39;re emitting a mutable object as an output tuple. Everything you emit into the output collector must be immutable. What&#39;s happening is that your bolt is modifying the object while it is being serialized to be sent over the network.</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Tutorial.html b/content/releases/1.1.2/Tutorial.html
index 87e55dc..a776b00 100644
--- a/content/releases/1.1.2/Tutorial.html
+++ b/content/releases/1.1.2/Tutorial.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>In this tutorial, you&#39;ll learn how to create Storm topologies and deploy them to a Storm cluster. Java will be the main language used, but a few examples will use Python to illustrate Storm&#39;s multi-language capabilities.</p>
+<div class="documentation-content"><p>In this tutorial, you&#39;ll learn how to create Storm topologies and deploy them to a Storm cluster. Java will be the main language used, but a few examples will use Python to illustrate Storm&#39;s multi-language capabilities.</p>
 
 <h2 id="preliminaries">Preliminaries</h2>
 
@@ -428,7 +428,7 @@
 <h2 id="conclusion">Conclusion</h2>
 
 <p>This tutorial gave a broad overview of developing, testing, and deploying Storm topologies. The rest of the documentation dives deeper into all the aspects of using Storm.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Understanding-the-parallelism-of-a-Storm-topology.html b/content/releases/1.1.2/Understanding-the-parallelism-of-a-Storm-topology.html
index 0d8e717..789c697 100644
--- a/content/releases/1.1.2/Understanding-the-parallelism-of-a-Storm-topology.html
+++ b/content/releases/1.1.2/Understanding-the-parallelism-of-a-Storm-topology.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="what-makes-a-running-topology-worker-processes-executors-and-tasks">What makes a running topology: worker processes, executors and tasks</h2>
+<div class="documentation-content"><h2 id="what-makes-a-running-topology-worker-processes-executors-and-tasks">What makes a running topology: worker processes, executors and tasks</h2>
 
 <p>Storm distinguishes between the following three main entities that are used to actually run a topology in a Storm cluster:</p>
 
@@ -274,7 +274,7 @@
 <li><a href="Tutorial.html">Tutorial</a></li>
 <li><a href="javadocs/">Storm API documentation</a>, most notably the class <code>Config</code></li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Using-non-JVM-languages-with-Storm.html b/content/releases/1.1.2/Using-non-JVM-languages-with-Storm.html
index 5de792e..cebdacd 100644
--- a/content/releases/1.1.2/Using-non-JVM-languages-with-Storm.html
+++ b/content/releases/1.1.2/Using-non-JVM-languages-with-Storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li>two pieces: creating topologies and implementing spouts and bolts in other languages</li>
 <li>creating topologies in another language is easy since topologies are just thrift structures (link to storm.thrift)</li>
 <li>implementing spouts and bolts in another language is called a &quot;multilang components&quot; or &quot;shelling&quot;
@@ -198,7 +198,7 @@
 <p>Then you can connect to Nimbus using the Thrift API and submit the topology, passing {uploaded-jar-location} into the submitTopology method. For reference, here&#39;s the submitTopology definition:</p>
 <div class="highlight"><pre><code class="language-" data-lang="">void submitTopology(1: string name, 2: string uploadedJarLocation, 3: string jsonConf, 4: StormTopology topology)
     throws (1: AlreadyAliveException e, 2: InvalidTopologyException ite);
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/Windowing.html b/content/releases/1.1.2/Windowing.html
index b5f3bc9..ff97307 100644
--- a/content/releases/1.1.2/Windowing.html
+++ b/content/releases/1.1.2/Windowing.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm core has support for processing a group of tuples that falls within a window. Windows are specified with the 
+<div class="documentation-content"><p>Storm core has support for processing a group of tuples that falls within a window. Windows are specified with the 
 following two parameters,</p>
 
 <ol>
@@ -380,7 +380,7 @@
 
 <p>An example toplogy <code>SlidingWindowTopology</code> shows how to use the apis to compute a sliding window sum and a tumbling window 
 average.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/distcache-blobstore.html b/content/releases/1.1.2/distcache-blobstore.html
index d7cb463..b53790f 100644
--- a/content/releases/1.1.2/distcache-blobstore.html
+++ b/content/releases/1.1.2/distcache-blobstore.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="storm-distributed-cache-api">Storm Distributed Cache API</h1>
+<div class="documentation-content"><h1 id="storm-distributed-cache-api">Storm Distributed Cache API</h1>
 
 <p>The distributed cache feature in storm is used to efficiently distribute files
 (or blobs, which is the equivalent terminology for a file in the distributed
@@ -799,7 +799,7 @@
  2: required string session;
  3: optional i64 data_size;
 }
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/dynamic-log-level-settings.html b/content/releases/1.1.2/dynamic-log-level-settings.html
index 0b24d50..f5d4a50 100644
--- a/content/releases/1.1.2/dynamic-log-level-settings.html
+++ b/content/releases/1.1.2/dynamic-log-level-settings.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>We have added the ability to set log level settings for a running topology using the Storm UI and the Storm CLI. </p>
+<div class="documentation-content"><p>We have added the ability to set log level settings for a running topology using the Storm UI and the Storm CLI. </p>
 
 <p>The log level settings apply the same way as you&#39;d expect from log4j, as all we are doing is telling log4j to set the level of the logger you provide. If you set the log level of a parent logger, the children loggers start using that level (unless the children have a more restrictive level already). A timeout can optionally be provided (except for DEBUG mode, where it’s required in the UI), if workers should reset log levels automatically.</p>
 
@@ -179,7 +179,7 @@
 <p><code>./bin/storm set_log_level my_topology -r ROOT</code></p>
 
 <p>Clears the ROOT logger dynamic log level, resetting it to its original value.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/dynamic-worker-profiling.html b/content/releases/1.1.2/dynamic-worker-profiling.html
index c2b58ed..7b0a298 100644
--- a/content/releases/1.1.2/dynamic-worker-profiling.html
+++ b/content/releases/1.1.2/dynamic-worker-profiling.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>In multi-tenant mode, storm launches long-running JVMs across cluster without sudo access to user. Self-serving of Java heap-dumps, jstacks and java profiling of these JVMs would improve users&#39; ability to analyze and debug issues when monitoring it actively.</p>
+<div class="documentation-content"><p>In multi-tenant mode, storm launches long-running JVMs across cluster without sudo access to user. Self-serving of Java heap-dumps, jstacks and java profiling of these JVMs would improve users&#39; ability to analyze and debug issues when monitoring it actively.</p>
 
 <p>The storm dynamic profiler lets you dynamically take heap-dumps, jprofile or jstack for a worker jvm running on stock cluster. It let user download these dumps from the browser and use your favorite tools to analyze it  The UI component page provides list workers for the component and action buttons. The logviewer lets you download the dumps generated by these logs. Please see the screenshots for more information.</p>
 
@@ -171,7 +171,7 @@
 <h2 id="configuration">Configuration</h2>
 
 <p>The &quot;worker.profiler.command&quot; setting can be configured to point to specific pluggable profiler and heap-dump commands. &quot;worker.profiler.enabled&quot; can be set to false if the plugin is not available or the JDK does not support JProfile flight recording, so that the worker JVM options will not include &quot;worker.profiler.childopts&quot;. To use a different profiler plugin, change these configuration settings.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/flux.html b/content/releases/1.1.2/flux.html
index 78f6e4e..f97ee3f 100644
--- a/content/releases/1.1.2/flux.html
+++ b/content/releases/1.1.2/flux.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>A framework for creating and deploying Apache Storm streaming computations with less friction.</p>
+<div class="documentation-content"><p>A framework for creating and deploying Apache Storm streaming computations with less friction.</p>
 
 <h2 id="definition">Definition</h2>
 
@@ -908,7 +908,7 @@
   <span class="na">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">org.apache.storm.flux.test.TridentTopologySource"</span>
   <span class="c1"># Flux will look for "getTopology", this will override that.</span>
   <span class="na">methodName</span><span class="pi">:</span> <span class="s2">"</span><span class="s">getTopologyWithDifferentMethodName"</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/index.html b/content/releases/1.1.2/index.html
index 27992a8..66752c4 100644
--- a/content/releases/1.1.2/index.html
+++ b/content/releases/1.1.2/index.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<blockquote>
+<div class="documentation-content"><blockquote>
 <h4 id="note">NOTE</h4>
 
 <p>In the latest version, the class packages have been changed from &quot;backtype.storm&quot; to &quot;org.apache.storm&quot;, so topology code compiled with an older version won&#39;t run on Storm 1.0.0 as-is. Backward compatibility is available through the following configuration </p>
@@ -284,7 +284,7 @@
 <li><a href="Multilang-protocol.html">Multilang protocol</a> (how to provide support for another language)</li>
 <li><a href="Implementation-docs.html">Implementation docs</a></li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/nimbus-ha-design.html b/content/releases/1.1.2/nimbus-ha-design.html
index 9755cf5..75d5b37 100644
--- a/content/releases/1.1.2/nimbus-ha-design.html
+++ b/content/releases/1.1.2/nimbus-ha-design.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="problem-statement">Problem Statement:</h2>
+<div class="documentation-content"><h2 id="problem-statement">Problem Statement:</h2>
 
 <p>Currently the Storm master, aka nimbus, is a process that runs on a single machine under supervision. In most cases a 
 nimbus failure is transient and it is restarted by the supervisor. However sometimes when disks fail and networks 
@@ -361,7 +361,7 @@
 <p>Note: Even though all nimbus hosts have watchers on zookeeper to be notified as soon as a new topology is available for code
 download, the callback pretty much never results in a code download. In practice we have observed that the desired replication is only achieved once the background thread runs. 
 So you should expect your topology submission time to be somewhere between 0 and (2 * nimbus.code.sync.freq.secs) for any nimbus.min.replication.count &gt; 1.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/storm-cassandra.html b/content/releases/1.1.2/storm-cassandra.html
index f22f5c8..e879609 100644
--- a/content/releases/1.1.2/storm-cassandra.html
+++ b/content/releases/1.1.2/storm-cassandra.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="bolt-api-implementation-for-apache-cassandra">Bolt API implementation for Apache Cassandra</h3>
+<div class="documentation-content"><h3 id="bolt-api-implementation-for-apache-cassandra">Bolt API implementation for Apache Cassandra</h3>
 
 <p>This library provides a core Storm bolt on top of Apache Cassandra,
 along with a simple DSL to map a Storm <em>Tuple</em> to a Cassandra Query Language <em>Statement</em>.</p>
@@ -373,7 +373,7 @@
         <span class="n">CassandraStateFactory</span> <span class="n">selectWeatherStationStateFactory</span> <span class="o">=</span> <span class="n">getSelectWeatherStationStateFactory</span><span class="o">();</span>
         <span class="n">TridentState</span> <span class="n">selectState</span> <span class="o">=</span> <span class="n">topology</span><span class="o">.</span><span class="na">newStaticState</span><span class="o">(</span><span class="n">selectWeatherStationStateFactory</span><span class="o">);</span>
         <span class="n">stream</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">stateQuery</span><span class="o">(</span><span class="n">selectState</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"weather_station_id"</span><span class="o">),</span> <span class="k">new</span> <span class="n">CassandraQuery</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"name"</span><span class="o">));</span>         
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/storm-elasticsearch.html b/content/releases/1.1.2/storm-elasticsearch.html
index bf4253d..c335318 100644
--- a/content/releases/1.1.2/storm-elasticsearch.html
+++ b/content/releases/1.1.2/storm-elasticsearch.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="storm-elasticsearch-bolt-trident-state">Storm Elasticsearch Bolt &amp; Trident State</h1>
+<div class="documentation-content"><h1 id="storm-elasticsearch-bolt-trident-state">Storm Elasticsearch Bolt &amp; Trident State</h1>
 
 <p>EsIndexBolt, EsPercolateBolt and EsState allow users to stream data from Storm directly into Elasticsearch.
   For a detailed description, please refer to the following.</p>
@@ -245,7 +245,7 @@
 <li>Sriharsha Chintalapani (<a href="https://github.com/harshach">@harshach</a>)</li>
 <li>Jungtaek Lim (<a href="https://github.com/HeartSaVioR">@HeartSaVioR</a>)</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/storm-eventhubs.html b/content/releases/1.1.2/storm-eventhubs.html
index f1fed63..27752c6 100644
--- a/content/releases/1.1.2/storm-eventhubs.html
+++ b/content/releases/1.1.2/storm-eventhubs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm spout and bolt implementation for Microsoft Azure Eventhubs</p>
+<div class="documentation-content"><p>Storm spout and bolt implementation for Microsoft Azure Eventhubs</p>
 
 <h3 id="build">build</h3>
 <div class="highlight"><pre><code class="language-" data-lang="">mvn clean package
@@ -178,7 +178,7 @@
 
 <h3 id="windows-azure-eventhubs">Windows Azure Eventhubs</h3>
 <div class="highlight"><pre><code class="language-" data-lang="">http://azure.microsoft.com/en-us/services/event-hubs/
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/storm-hbase.html b/content/releases/1.1.2/storm-hbase.html
index d9dd98c..6e21b15 100644
--- a/content/releases/1.1.2/storm-hbase.html
+++ b/content/releases/1.1.2/storm-hbase.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm/Trident integration for <a href="https://hbase.apache.org">Apache HBase</a></p>
+<div class="documentation-content"><p>Storm/Trident integration for <a href="https://hbase.apache.org">Apache HBase</a></p>
 
 <h2 id="usage">Usage</h2>
 
@@ -359,7 +359,7 @@
         <span class="o">}</span>
     <span class="o">}</span>
 <span class="o">}</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/storm-hdfs.html b/content/releases/1.1.2/storm-hdfs.html
index b1940c7..3baa2dc 100644
--- a/content/releases/1.1.2/storm-hdfs.html
+++ b/content/releases/1.1.2/storm-hdfs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm components for interacting with HDFS file systems</p>
+<div class="documentation-content"><p>Storm components for interacting with HDFS file systems</p>
 
 <h2 id="usage">Usage</h2>
 
@@ -460,7 +460,7 @@
 <p>On worker hosts the bolt/trident-state code will use the keytab file with the principal provided in the config to authenticate with the 
 Namenode. This method is a little dangerous, as you need to ensure all workers have the keytab file at the same location, and you need
 to remember this as you bring up new hosts in the cluster.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/storm-hive.html b/content/releases/1.1.2/storm-hive.html
index 6f2d77c..267e970 100644
--- a/content/releases/1.1.2/storm-hive.html
+++ b/content/releases/1.1.2/storm-hive.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Hive offers streaming API that allows data to be written continuously into Hive. The incoming data 
+<div class="documentation-content"><p>Hive offers a streaming API that allows data to be written continuously into Hive. The incoming data 
  can be continuously committed in small batches of records into an existing Hive partition or table. Once the data
  is committed it is immediately visible to all Hive queries. More info on Hive Streaming API 
   <a href="https://cwiki.apache.org/confluence/display/Hive/Streaming+Data+Ingest">https://cwiki.apache.org/confluence/display/Hive/Streaming+Data+Ingest</a></p>
@@ -303,7 +303,7 @@
 
    <span class="n">StateFactory</span> <span class="n">factory</span> <span class="o">=</span> <span class="k">new</span> <span class="n">HiveStateFactory</span><span class="o">().</span><span class="na">withOptions</span><span class="o">(</span><span class="n">hiveOptions</span><span class="o">);</span>
    <span class="n">TridentState</span> <span class="n">state</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">partitionPersist</span><span class="o">(</span><span class="n">factory</span><span class="o">,</span> <span class="n">hiveFields</span><span class="o">,</span> <span class="k">new</span> <span class="n">HiveUpdater</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">());</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/storm-jdbc.html b/content/releases/1.1.2/storm-jdbc.html
index 8a1a1c1..452ab86 100644
--- a/content/releases/1.1.2/storm-jdbc.html
+++ b/content/releases/1.1.2/storm-jdbc.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm/Trident integration for JDBC. This package includes the core bolts and trident states that allows a storm topology
+<div class="documentation-content"><p>Storm/Trident integration for JDBC. This package includes the core bolts and trident states that allow a storm topology
 to either insert storm tuples in a database table or to execute select queries against a database and enrich tuples 
 in a storm topology.</p>
 
@@ -399,7 +399,7 @@
 <div class="highlight"><pre><code class="language-" data-lang="">select * from user;
 </code></pre></div>
 <p>For trident you can view <code>org.apache.storm.jdbc.topology.UserPersistanceTridentTopology</code>.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/storm-jms-example.html b/content/releases/1.1.2/storm-jms-example.html
index d9498a1..ce14789 100644
--- a/content/releases/1.1.2/storm-jms-example.html
+++ b/content/releases/1.1.2/storm-jms-example.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="example-storm-jms-topology">Example Storm JMS Topology</h2>
+<div class="documentation-content"><h2 id="example-storm-jms-topology">Example Storm JMS Topology</h2>
 
 <p>The storm-jms source code contains an example project (in the &quot;examples&quot; directory) that 
 builds a multi-bolt/multi-spout topology (depicted below) that uses the JMS Spout and JMS Bolt components.</p>
@@ -248,7 +248,7 @@
 DEBUG (backtype.storm.contrib.jms.spout.JmsSpout:251) - JMS Message acked: ID:budreau.home-60117-1321735025796-0:0:1:1:1
 </code></pre></div>
 <p>The topology will run for 2 minutes, then gracefully shut down.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/storm-jms-spring.html b/content/releases/1.1.2/storm-jms-spring.html
index eebf41d..7f9b1b1 100644
--- a/content/releases/1.1.2/storm-jms-spring.html
+++ b/content/releases/1.1.2/storm-jms-spring.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="connecting-to-jms-using-springs-jms-support">Connecting to JMS Using Spring&#39;s JMS Support</h3>
+<div class="documentation-content"><h3 id="connecting-to-jms-using-springs-jms-support">Connecting to JMS Using Spring&#39;s JMS Support</h3>
 
 <p>Create a Spring applicationContext.xml file that defines one or more destination (topic/queue) beans, as well as a connection factory.</p>
 <div class="highlight"><pre><code class="language-" data-lang=""><span class="cp">&lt;?xml version="1.0" encoding="UTF-8"?&gt;</span>
@@ -163,7 +163,7 @@
         <span class="na">brokerURL=</span><span class="s">"tcp://localhost:61616"</span> <span class="nt">/&gt;</span>
 
 <span class="nt">&lt;/beans&gt;</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/storm-jms.html b/content/releases/1.1.2/storm-jms.html
index 061b25f..908e337 100644
--- a/content/releases/1.1.2/storm-jms.html
+++ b/content/releases/1.1.2/storm-jms.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="about-storm-jms">About Storm JMS</h2>
+<div class="documentation-content"><h2 id="about-storm-jms">About Storm JMS</h2>
 
 <p>Storm JMS is a generic framework for integrating JMS messaging within the Storm framework.</p>
 
@@ -169,7 +169,7 @@
 <p><a href="storm-jms-example.html">Example Topology</a></p>
 
 <p><a href="storm-jms-spring.html">Using Spring JMS</a></p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/storm-kafka-client.html b/content/releases/1.1.2/storm-kafka-client.html
index 8fc7a5c..aafc6d9 100644
--- a/content/releases/1.1.2/storm-kafka-client.html
+++ b/content/releases/1.1.2/storm-kafka-client.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="storm-apache-kafka-integration-using-the-kafka-client-jar">Storm Apache Kafka integration using the kafka-client jar</h1>
+<div class="documentation-content"><h1 id="storm-apache-kafka-integration-using-the-kafka-client-jar">Storm Apache Kafka integration using the kafka-client jar</h1>
 
 <p>This includes the new Apache Kafka consumer API.</p>
 
@@ -476,7 +476,7 @@
   <span class="o">.</span><span class="na">setTupleTrackingEnforced</span><span class="o">(</span><span class="kc">true</span><span class="o">)</span>
 </code></pre></div>
 <p>Note: This setting has no effect with AT_LEAST_ONCE processing guarantee, where tuple tracking is required and therefore always enabled.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/storm-kafka.html b/content/releases/1.1.2/storm-kafka.html
index 3d54427..655fb0e 100644
--- a/content/releases/1.1.2/storm-kafka.html
+++ b/content/releases/1.1.2/storm-kafka.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Provides core Storm and Trident spout implementations for consuming data from Apache Kafka 0.8.x.</p>
+<div class="documentation-content"><p>Provides core Storm and Trident spout implementations for consuming data from Apache Kafka 0.8.x.</p>
 
 <h2 id="spouts">Spouts</h2>
 
@@ -495,7 +495,7 @@
 <ul>
 <li>P. Taylor Goetz (<a href="mailto:ptgoetz@apache.org">ptgoetz@apache.org</a>)</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/storm-metrics-profiling-internal-actions.html b/content/releases/1.1.2/storm-metrics-profiling-internal-actions.html
index b5c0b3b..0d45cd8 100644
--- a/content/releases/1.1.2/storm-metrics-profiling-internal-actions.html
+++ b/content/releases/1.1.2/storm-metrics-profiling-internal-actions.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>With the addition of these metrics, Storm users can collect, view, and analyze the performance of various internal actions.  The actions that are profiled include thrift rpc calls and http quests within the storm daemons. For instance, in the Storm Nimbus daemon, the following thrift calls defined in the Nimbus$Iface are profiled:</p>
+<div class="documentation-content"><p>With the addition of these metrics, Storm users can collect, view, and analyze the performance of various internal actions. The actions that are profiled include Thrift RPC calls and HTTP requests within the Storm daemons. For instance, in the Storm Nimbus daemon, the following thrift calls defined in the Nimbus$Iface are profiled:</p>
 
 <ul>
 <li>submitTopology</li>
@@ -211,7 +211,7 @@
 <p>For more information about io.dropwizard.metrics and metrics-clojure packages please reference their original documentation:
 - <a href="https://dropwizard.github.io/metrics/3.1.0/">https://dropwizard.github.io/metrics/3.1.0/</a>
 - <a href="http://metrics-clojure.readthedocs.org/en/latest/">http://metrics-clojure.readthedocs.org/en/latest/</a></p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/storm-mongodb.html b/content/releases/1.1.2/storm-mongodb.html
index 84dc5d1..d80972c 100644
--- a/content/releases/1.1.2/storm-mongodb.html
+++ b/content/releases/1.1.2/storm-mongodb.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm/Trident integration for <a href="https://www.mongodb.org/">MongoDB</a>. This package includes the core bolts and trident states that allows a storm topology to either insert storm tuples in a database collection or to execute update queries against a database collection in a storm topology.</p>
+<div class="documentation-content"><p>Storm/Trident integration for <a href="https://www.mongodb.org/">MongoDB</a>. This package includes the core bolts and trident states that allow a storm topology to either insert storm tuples into a database collection or to execute update queries against a database collection.</p>
 
 <h2 id="insert-into-database">Insert into Database</h2>
 
@@ -298,7 +298,7 @@
 
         <span class="c1">//if a new document should be inserted if there are no matches to the query filter</span>
         <span class="c1">//updateBolt.withUpsert(true);</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/storm-mqtt.html b/content/releases/1.1.2/storm-mqtt.html
index fdb1fe2..1a18397 100644
--- a/content/releases/1.1.2/storm-mqtt.html
+++ b/content/releases/1.1.2/storm-mqtt.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="about">About</h2>
+<div class="documentation-content"><h2 id="about">About</h2>
 
 <p>MQTT is a lightweight publish/subscribe protocol frequently used in IoT applications.</p>
 
@@ -483,7 +483,7 @@
 <ul>
 <li>P. Taylor Goetz (<a href="mailto:ptgoetz@apache.org">ptgoetz@apache.org</a>)</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/storm-redis.html b/content/releases/1.1.2/storm-redis.html
index 48a95e2..b5d4c15 100644
--- a/content/releases/1.1.2/storm-redis.html
+++ b/content/releases/1.1.2/storm-redis.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm/Trident integration for <a href="http://redis.io/">Redis</a></p>
+<div class="documentation-content"><p>Storm/Trident integration for <a href="http://redis.io/">Redis</a></p>
 
 <p>Storm-redis uses Jedis as its Redis client.</p>
 
@@ -382,7 +382,7 @@
         <span class="n">stream</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">stateQuery</span><span class="o">(</span><span class="n">state</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"word"</span><span class="o">),</span>
                                 <span class="k">new</span> <span class="nf">RedisClusterStateQuerier</span><span class="o">(</span><span class="n">lookupMapper</span><span class="o">),</span>
                                 <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">"columnName"</span><span class="o">,</span><span class="s">"columnValue"</span><span class="o">));</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/storm-solr.html b/content/releases/1.1.2/storm-solr.html
index 12ebf49..878a96e 100644
--- a/content/releases/1.1.2/storm-solr.html
+++ b/content/releases/1.1.2/storm-solr.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm and Trident integration for Apache Solr. This package includes a bolt and a trident state that enable a Storm topology
+<div class="documentation-content"><p>Storm and Trident integration for Apache Solr. This package includes a bolt and a trident state that enable a Storm topology
 to stream the contents of storm tuples to index Solr collections.</p>
 
 <h1 id="index-storm-tuples-into-a-solr-collection">Index Storm tuples into a Solr collection</h1>
@@ -308,7 +308,7 @@
 <p>You can also see the results by opening the Apache Solr UI and pasting the <code>id</code> pattern in the <code>q</code> textbox in the queries page</p>
 
 <p><a href="http://localhost:8983/solr/#/gettingstarted_shard1_replica2/query">http://localhost:8983/solr/#/gettingstarted_shard1_replica2/query</a></p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/storm-sql-example.html b/content/releases/1.1.2/storm-sql-example.html
index c9d1532..8b75ddc 100644
--- a/content/releases/1.1.2/storm-sql-example.html
+++ b/content/releases/1.1.2/storm-sql-example.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page shows how to use Storm SQL by showing the example of processing Apache logs. 
+<div class="documentation-content"><p>This page shows how to use Storm SQL through an example of processing Apache logs. 
 It is written in a &quot;how-to&quot; style, so you can follow the steps and learn how to utilize Storm SQL step by step. </p>
 
 <h2 id="preparation">Preparation</h2>
@@ -379,7 +379,7 @@
 (You may have noticed that the types of some of the output fields differ from the output table schema.)</p>
 
 <p>Its behavior is subject to change when Storm SQL changes its backend API to core (tuple by tuple, low-level or high-level) one.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/storm-sql-internal.html b/content/releases/1.1.2/storm-sql-internal.html
index 7408d67..bffc3d9 100644
--- a/content/releases/1.1.2/storm-sql-internal.html
+++ b/content/releases/1.1.2/storm-sql-internal.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page describes the design and the implementation of the Storm SQL integration.</p>
+<div class="documentation-content"><p>This page describes the design and the implementation of the Storm SQL integration.</p>
 
 <h2 id="overview">Overview</h2>
 
@@ -195,7 +195,7 @@
 (Use <code>--artifacts</code> if your data source JARs are available in Maven repository since it handles transitive dependencies.)</p>
 
 <p>Please refer to the <a href="storm-sql.html">Storm SQL integration</a> page for how to do it.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/storm-sql-reference.html b/content/releases/1.1.2/storm-sql-reference.html
index 334705e..5908073 100644
--- a/content/releases/1.1.2/storm-sql-reference.html
+++ b/content/releases/1.1.2/storm-sql-reference.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm SQL uses Apache Calcite to parse and evaluate the SQL statements. 
+<div class="documentation-content"><p>Storm SQL uses Apache Calcite to parse and evaluate SQL statements. 
 Storm SQL also adopts the Rex compiler from Calcite, so it is expected to handle the SQL dialect recognized by Calcite&#39;s default SQL parser. </p>
 
 <p>This page is based on the Calcite SQL reference on the Calcite website; it removes the areas Storm SQL doesn&#39;t support and adds the areas Storm SQL does support.</p>
@@ -2101,7 +2101,7 @@
 
 <p>Also, HDFS configuration files should be provided.
 You can put the <code>core-site.xml</code> and <code>hdfs-site.xml</code> into the <code>conf</code> directory inside the Storm installation directory.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/storm-sql.html b/content/releases/1.1.2/storm-sql.html
index 3fe0fdf..f294c6e 100644
--- a/content/releases/1.1.2/storm-sql.html
+++ b/content/releases/1.1.2/storm-sql.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The Storm SQL integration allows users to run SQL queries over streaming data in Storm. Not only the SQL interface allows faster development cycles on streaming analytics, but also opens up the opportunities to unify batch data processing like <a href="///hive.apache.org">Apache Hive</a> and real-time streaming data analytics.</p>
+<div class="documentation-content"><p>The Storm SQL integration allows users to run SQL queries over streaming data in Storm. Not only does the SQL interface allow faster development cycles for streaming analytics, but it also opens up opportunities to unify batch data processing, like <a href="https://hive.apache.org">Apache Hive</a>, with real-time streaming data analytics.</p>
 
 <p>At a very high level StormSQL compiles the SQL queries to <a href="Trident-API-Overview.html">Trident</a> topologies and executes them in Storm clusters. This document explains how to use StormSQL as an end user. For more details on the design and implementation of StormSQL, please refer to <a href="storm-sql-internal.html">this</a> page.</p>
 
@@ -284,7 +284,7 @@
 <li>Windowing is yet to be implemented.</li>
 <li>Aggregation and join are not supported (waiting for <code>Streaming SQL</code> to be matured)</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.1.2/windows-users-guide.html b/content/releases/1.1.2/windows-users-guide.html
index e54f3e4..2a759da 100644
--- a/content/releases/1.1.2/windows-users-guide.html
+++ b/content/releases/1.1.2/windows-users-guide.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page guides how to set up environment on Windows for Apache Storm.</p>
+<div class="documentation-content"><p>This page describes how to set up an environment for Apache Storm on Windows.</p>
 
 <h2 id="symbolic-link">Symbolic Link</h2>
 
@@ -172,7 +172,7 @@
 on Nimbus and all of the Supervisor nodes.  This will also disable features that require symlinks.  Currently this is only downloading
 dependent blobs, but may change in the future.  Some topologies may rely on symbolic links to resources in the current working directory of the worker that are
 created as a convenience, so it is not a 100% backwards compatible change.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Acking-framework-implementation.html b/content/releases/1.2.1/Acking-framework-implementation.html
index a9108de..28ec8bc 100644
--- a/content/releases/1.2.1/Acking-framework-implementation.html
+++ b/content/releases/1.2.1/Acking-framework-implementation.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p><a href="https://github.com/apache/incubator-storm/blob/46c3ba7/storm-core/src/clj/backtype/storm/daemon/acker.clj#L28">Storm&#39;s acker</a> tracks completion of each tupletree with a checksum hash: each time a tuple is sent, its value is XORed into the checksum, and each time a tuple is acked its value is XORed in again. If all tuples have been successfully acked, the checksum will be zero (the odds that the checksum will be zero otherwise are vanishingly small).</p>
+<div class="documentation-content"><p><a href="https://github.com/apache/incubator-storm/blob/46c3ba7/storm-core/src/clj/backtype/storm/daemon/acker.clj#L28">Storm&#39;s acker</a> tracks completion of each tuple tree with a checksum hash: each time a tuple is sent, its value is XORed into the checksum, and each time a tuple is acked its value is XORed in again. If all tuples have been successfully acked, the checksum will be zero (the odds that the checksum will be zero otherwise are vanishingly small).</p>
 
 <p>You can read a bit more about the <a href="Guaranteeing-message-processing.html#what-is-storms-reliability-api">reliability mechanism</a> elsewhere on the wiki -- this explains the internal details.</p>
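 <p>The XOR bookkeeping described above is easy to verify in a few lines; this is an illustrative sketch, not Storm&#39;s actual acker code:</p>

```python
import random

# Every tuple in the tree gets a random 64-bit id. The acker XORs each id in
# when the tuple is emitted and again when it is acked; XOR is self-inverse,
# so once every emitted tuple is acked the checksum returns to zero.
tuple_ids = [random.getrandbits(64) for _ in range(5)]

checksum = 0
for tid in tuple_ids:   # tuples emitted into the tree
    checksum ^= tid
for tid in tuple_ids:   # tuples acked (order does not matter)
    checksum ^= tid

print(checksum)  # 0 -> the whole tuple tree completed
```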
 
@@ -180,7 +180,7 @@
 <p>Internally, it holds several HashMaps (&#39;buckets&#39;) of its own, each holding a cohort of records that will expire at the same time.  Let&#39;s call the longest-lived bucket death row, and the most recent the nursery. Whenever a value is <code>.put()</code> to the RotatingMap, it is relocated to the nursery -- and removed from any other bucket it might have been in (effectively resetting its death clock).</p>
 
 <p>Whenever its owner calls <code>.rotate()</code>, the RotatingMap advances each cohort one step further towards expiration. (Typically, Storm objects call rotate on every receipt of a system tick stream tuple.) If there are any key-value pairs in the former death row bucket, the RotatingMap invokes a callback (given in the constructor) for each key-value pair, letting its owner take appropriate action (e.g., failing a tuple).</p>
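 <p>The actual RotatingMap lives in storm-core and is Java; the bucket mechanics described in this section can be sketched in Python as follows (the class and names here are illustrative only):</p>

```python
from collections import deque

class RotatingMap:
    """Sketch of the mechanism: a fixed number of bucket cohorts; put() places
    a key in the newest bucket (the 'nursery'), removing it from any other
    bucket; rotate() expires the oldest bucket ('death row'), invoking the
    callback for each evicted key-value pair."""
    def __init__(self, num_buckets, expire_callback):
        self._buckets = deque({} for _ in range(num_buckets))
        self._expire = expire_callback

    def put(self, key, value):
        for bucket in self._buckets:   # reset the key's death clock
            bucket.pop(key, None)
        self._buckets[0][key] = value  # index 0 is the nursery

    def rotate(self):
        death_row = self._buckets.pop()     # the oldest cohort expires
        for key, value in death_row.items():
            self._expire(key, value)        # e.g. fail the pending tuple
        self._buckets.appendleft({})        # fresh nursery

expired = []
m = RotatingMap(3, lambda k, v: expired.append(k))
m.put("tuple-1", "pending")
m.rotate()
m.rotate()
m.put("tuple-1", "pending")  # touched again: moved back to the nursery
m.rotate()                   # would have expired, but its clock was reset
print(expired)               # []
m.rotate()
m.rotate()
m.rotate()
print(expired)               # ['tuple-1']
```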
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Classpath-handling.html b/content/releases/1.2.1/Classpath-handling.html
index f68b86b..634a5ee 100644
--- a/content/releases/1.2.1/Classpath-handling.html
+++ b/content/releases/1.2.1/Classpath-handling.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="storm-is-an-application-container">Storm is an Application Container</h3>
+<div class="documentation-content"><h3 id="storm-is-an-application-container">Storm is an Application Container</h3>
 
 <p>Storm provides an application container environment, a la Apache Tomcat, which creates potential for classpath conflicts between Storm and your application.  The most common way of using Storm involves submitting an &quot;uber JAR&quot; containing your application code with all of its dependencies bundled in, and then Storm distributes this JAR to Worker nodes.  Then Storm runs your application within a Storm process called a <code>Worker</code> -- thus the JVM&#39;s classpath contains the dependencies of your JAR as well as whatever dependencies the Worker itself has.  So careful handling of classpaths and dependencies is critical for the correct functioning of Storm.</p>
 
@@ -173,7 +173,7 @@
 <p>When the <code>storm.py</code> script launches a <code>java</code> command, it first constructs the classpath from the optional settings mentioned above, as well as including some default locations such as the <code>${STORM_DIR}/</code>, <code>${STORM_DIR}/lib/</code>, <code>${STORM_DIR}/extlib/</code> and <code>${STORM_DIR}/extlib-daemon/</code> directories.  In past releases, Storm would enumerate all JARs in those directories and then explicitly add all of those JARs into the <code>-cp</code> / <code>--classpath</code> argument to the launched <code>java</code> commands.  As such, the classpath would get so long that the <code>java</code> commands could breach the Linux Kernel process table limit of 4096 bytes for recording commands.  That led to truncated commands in <code>ps</code> output, making it hard to operate Storm clusters because you could not easily differentiate the processes nor easily see from <code>ps</code> which port a worker is listening to.</p>
 
 <p>After Storm dropped support for Java 5, this classpath expansion was no longer necessary, because Java 6 supports classpath wildcards. Classpath wildcards allow you to specify a directory ending with a <code>*</code> element, such as <code>foo/bar/*</code>, and the JVM will automatically expand the classpath to include all <code>.jar</code> files in the wildcard directory.  As of <a href="https://issues.apache.org/jira/browse/STORM-2191">STORM-2191</a> Storm just uses classpath wildcards instead of explicitly listing all JARs, thereby shortening all of the commands and making operating Storm clusters a bit easier.</p>
-
+</div>
 
 
 	          </div>
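The wildcard behavior described above is plain JVM functionality (Java 6+), so it can be illustrated without Storm itself. A minimal sketch — all paths and the enumerated JAR names are hypothetical, shown only for contrast:

```java
// Contrast the pre-STORM-2191 enumerated classpath with the wildcard
// form that the JVM expands on its own. All paths are hypothetical.
class ClasspathSketch {
    // One '*' element per directory; the JVM expands each to every
    // .jar file in that directory when the command is launched.
    static String wildcardClasspath(String stormDir) {
        return stormDir + "/lib/*:"
             + stormDir + "/extlib/*:"
             + stormDir + "/extlib-daemon/*";
    }

    public static void main(String[] args) {
        String cp = wildcardClasspath("/opt/storm");
        // The old style listed every JAR explicitly, e.g.
        // /opt/storm/lib/a.jar:/opt/storm/lib/b.jar:... -- commands grew
        // long enough to be truncated in ps output.
        System.out.println("java -cp \"" + cp + "\" org.apache.storm.daemon.nimbus");
    }
}
```

Note that a wildcard entry matches only `.jar` files in that directory, not `.class` files or JARs in subdirectories.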
diff --git a/content/releases/1.2.1/Clojure-DSL.html b/content/releases/1.2.1/Clojure-DSL.html
index 89fa383..fd2616a 100644
--- a/content/releases/1.2.1/Clojure-DSL.html
+++ b/content/releases/1.2.1/Clojure-DSL.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm comes with a Clojure DSL for defining spouts, bolts, and topologies. The Clojure DSL has access to everything the Java API exposes, so if you&#39;re a Clojure user you can code Storm topologies without touching Java at all. The Clojure DSL is defined in the source in the <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/clj/org/apache/storm/clojure.clj">org.apache.storm.clojure</a> namespace.</p>
+<div class="documentation-content"><p>Storm comes with a Clojure DSL for defining spouts, bolts, and topologies. The Clojure DSL has access to everything the Java API exposes, so if you&#39;re a Clojure user you can code Storm topologies without touching Java at all. The Clojure DSL is defined in the source in the <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/clj/org/apache/storm/clojure.clj">org.apache.storm.clojure</a> namespace.</p>
 
 <p>This page outlines all the pieces of the Clojure DSL, including:</p>
 
@@ -371,7 +371,7 @@
 <h3 id="testing-topologies">Testing topologies</h3>
 
 <p><a href="http://www.pixelmachine.org/2011/12/17/Testing-Storm-Topologies.html">This blog post</a> and its <a href="http://www.pixelmachine.org/2011/12/21/Testing-Storm-Topologies-Part-2.html">follow-up</a> give a good overview of Storm&#39;s powerful built-in facilities for testing topologies in Clojure.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Command-line-client.html b/content/releases/1.2.1/Command-line-client.html
index 19e9671..b651b35 100644
--- a/content/releases/1.2.1/Command-line-client.html
+++ b/content/releases/1.2.1/Command-line-client.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page describes all the commands that are possible with the &quot;storm&quot; command line client. To learn how to set up your &quot;storm&quot; client to talk to a remote cluster, follow the instructions in <a href="Setting-up-development-environment.html">Setting up development environment</a>. See <a href="Classpath-handling.html">Classpath handling</a> for details on using external libraries in these commands.</p>
+<div class="documentation-content"><p>This page describes all the commands that are possible with the &quot;storm&quot; command line client. To learn how to set up your &quot;storm&quot; client to talk to a remote cluster, follow the instructions in <a href="Setting-up-development-environment.html">Setting up development environment</a>. See <a href="Classpath-handling.html">Classpath handling</a> for details on using external libraries in these commands.</p>
 
 <p>These commands are:</p>
 
@@ -423,7 +423,7 @@
 <p>Syntax: <code>storm help [command]</code></p>
 
 <p>Print one help message or list of available commands</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Common-patterns.html b/content/releases/1.2.1/Common-patterns.html
index 5460965..5333dd7 100644
--- a/content/releases/1.2.1/Common-patterns.html
+++ b/content/releases/1.2.1/Common-patterns.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page lists a variety of common patterns in Storm topologies.</p>
+<div class="documentation-content"><p>This page lists a variety of common patterns in Storm topologies.</p>
 
 <ol>
 <li>Batching</li>
@@ -212,7 +212,7 @@
 <p><code>KeyedFairBolt</code> also wraps the bolt containing your logic and makes sure your topology processes multiple DRPC invocations at the same time, instead of doing them serially one at a time.</p>
 
 <p>See <a href="Distributed-RPC.html">Distributed RPC</a> for more details.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Concepts.html b/content/releases/1.2.1/Concepts.html
index 0c5ea0d..bfd8b7a 100644
--- a/content/releases/1.2.1/Concepts.html
+++ b/content/releases/1.2.1/Concepts.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page lists the main concepts of Storm and links to resources where you can find more information. The concepts discussed are:</p>
+<div class="documentation-content"><p>This page lists the main concepts of Storm and links to resources where you can find more information. The concepts discussed are:</p>
 
 <ol>
 <li>Topologies</li>
@@ -268,7 +268,7 @@
 <ul>
 <li><a href="javadocs/org/apache/storm/Config.html#TOPOLOGY_WORKERS">Config.TOPOLOGY_WORKERS</a>: this config sets the number of workers to allocate for executing the topology</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Configuration.html b/content/releases/1.2.1/Configuration.html
index fcee36e..6f300d9 100644
--- a/content/releases/1.2.1/Configuration.html
+++ b/content/releases/1.2.1/Configuration.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm has a variety of configurations for tweaking the behavior of nimbus, supervisors, and running topologies. Some configurations are system configurations and cannot be modified on a topology by topology basis, whereas other configurations can be modified per topology. </p>
+<div class="documentation-content"><p>Storm has a variety of configurations for tweaking the behavior of nimbus, supervisors, and running topologies. Some configurations are system configurations and cannot be modified on a topology by topology basis, whereas other configurations can be modified per topology. </p>
 
 <p>Every configuration has a default value defined in <a href="http://github.com/apache/storm/blob/v1.2.1/conf/defaults.yaml">defaults.yaml</a> in the Storm codebase. You can override these configurations by defining a storm.yaml in the classpath of Nimbus and the supervisors. Finally, you can define a topology-specific configuration that you submit along with your topology when using <a href="javadocs/org/apache/storm/StormSubmitter.html">StormSubmitter</a>. However, the topology-specific configuration can only override configs prefixed with &quot;TOPOLOGY&quot;.</p>
 
@@ -175,7 +175,7 @@
 <li><a href="Running-topologies-on-a-production-cluster.html">Running topologies on a production cluster</a>: lists useful configurations when running topologies on a cluster</li>
 <li><a href="Local-mode.html">Local mode</a>: lists useful configurations when using local mode</li>
 </ul>
-
+</div>
 
 
 	          </div>
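The override chain described above can be sketched as a `storm.yaml` fragment; the hostnames and values here are purely illustrative:

```yaml
# Cluster-wide overrides of defaults.yaml; place this storm.yaml on the
# classpath of Nimbus and the supervisors. (Illustrative values only.)
storm.zookeeper.servers:
  - "zk1.example.com"
nimbus.seeds: ["nimbus1.example.com"]

# Settings like the one below are prefixed with "topology." and can also
# be overridden per topology in the conf submitted with StormSubmitter:
topology.workers: 4
```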
diff --git a/content/releases/1.2.1/Contributing-to-Storm.html b/content/releases/1.2.1/Contributing-to-Storm.html
index 8badb1c..9fa0bdb 100644
--- a/content/releases/1.2.1/Contributing-to-Storm.html
+++ b/content/releases/1.2.1/Contributing-to-Storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="getting-started-with-contributing">Getting started with contributing</h3>
+<div class="documentation-content"><h3 id="getting-started-with-contributing">Getting started with contributing</h3>
 
 <p>Some of the issues on the <a href="https://issues.apache.org/jira/browse/STORM">issue tracker</a> are marked with the <a href="https://issues.apache.org/jira/browse/STORM-2891?jql=project%20%3D%20STORM%20AND%20status%20%3D%20Open%20AND%20labels%20in%20(newbie%2C%20%22newbie%2B%2B%22)">&quot;Newbie&quot;</a> label. If you&#39;re interested in contributing to Storm but don&#39;t know where to begin, these are good issues to start with. These issues are a great way to get your feet wet with learning the codebase because they require learning about only an isolated portion of the codebase and are a relatively small amount of work.</p>
 
@@ -172,7 +172,7 @@
 <h3 id="contributing-documentation">Contributing documentation</h3>
 
 <p>Documentation contributions are very welcome! The best way to send contributions is as emails through the mailing list.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Creating-a-new-Storm-project.html b/content/releases/1.2.1/Creating-a-new-Storm-project.html
index e679958..9dc8638 100644
--- a/content/releases/1.2.1/Creating-a-new-Storm-project.html
+++ b/content/releases/1.2.1/Creating-a-new-Storm-project.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page outlines how to set up a Storm project for development. The steps are:</p>
+<div class="documentation-content"><p>This page outlines how to set up a Storm project for development. The steps are:</p>
 
 <ol>
 <li>Add Storm jars to classpath</li>
@@ -166,7 +166,7 @@
 <p>For more information on writing topologies in other languages, see <a href="Using-non-JVM-languages-with-Storm.html">Using non-JVM languages with Storm</a>.</p>
 
 <p>To test that everything is working in Eclipse, you should now be able to <code>Run</code> the <code>WordCountTopology.java</code> file. You will see messages being emitted at the console for 10 seconds.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/DSLs-and-multilang-adapters.html b/content/releases/1.2.1/DSLs-and-multilang-adapters.html
index 8be8db5..7f10518 100644
--- a/content/releases/1.2.1/DSLs-and-multilang-adapters.html
+++ b/content/releases/1.2.1/DSLs-and-multilang-adapters.html
@@ -144,14 +144,14 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="https://github.com/velvia/ScalaStorm">Scala DSL</a></li>
 <li><a href="https://github.com/colinsurprenant/redstorm">JRuby DSL</a></li>
 <li><a href="Clojure-DSL.html">Clojure DSL</a></li>
 <li><a href="https://github.com/tomdz/storm-esper">Storm/Esper integration</a>: Streaming SQL on top of Storm</li>
 <li><a href="https://github.com/dan-blanchard/io-storm">io-storm</a>: Perl multilang adapter</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Daemon-Fault-Tolerance.html b/content/releases/1.2.1/Daemon-Fault-Tolerance.html
index 565e12c..8981fb0 100644
--- a/content/releases/1.2.1/Daemon-Fault-Tolerance.html
+++ b/content/releases/1.2.1/Daemon-Fault-Tolerance.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm has several different daemon processes.  Nimbus that schedules workers, supervisors that launch and kill workers, the log viewer that gives access to logs, and the UI that shows the status of a cluster.</p>
+<div class="documentation-content"><p>Storm has several different daemon processes: Nimbus, which schedules workers; supervisors, which launch and kill workers; the log viewer, which gives access to logs; and the UI, which shows the status of a cluster.</p>

 
 <h2 id="what-happens-when-a-worker-dies">What happens when a worker dies?</h2>
 
@@ -169,7 +169,7 @@
 <h2 id="how-does-storm-guarantee-data-processing">How does Storm guarantee data processing?</h2>
 
 <p>Storm provides mechanisms to guarantee data processing even if nodes die or messages are lost. See <a href="Guaranteeing-message-processing.html">Guaranteeing message processing</a> for the details.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Defining-a-non-jvm-language-dsl-for-storm.html b/content/releases/1.2.1/Defining-a-non-jvm-language-dsl-for-storm.html
index c3fde21..38f9395 100644
--- a/content/releases/1.2.1/Defining-a-non-jvm-language-dsl-for-storm.html
+++ b/content/releases/1.2.1/Defining-a-non-jvm-language-dsl-for-storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The right place to start to learn how to make a non-JVM DSL for Storm is <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/storm.thrift">storm-core/src/storm.thrift</a>. Since Storm topologies are just Thrift structures, and Nimbus is a Thrift daemon, you can create and submit topologies in any language.</p>
+<div class="documentation-content"><p>The right place to start to learn how to make a non-JVM DSL for Storm is <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/storm.thrift">storm-core/src/storm.thrift</a>. Since Storm topologies are just Thrift structures, and Nimbus is a Thrift daemon, you can create and submit topologies in any language.</p>
 
 <p>When you create the Thrift structs for spouts and bolts, the code for the spout or bolt is specified in the ComponentObject struct:</p>
 <div class="highlight"><pre><code class="language-" data-lang="">union ComponentObject {
@@ -165,7 +165,7 @@
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kt">void</span> <span class="nf">submitTopology</span><span class="o">(</span><span class="mi">1</span><span class="o">:</span> <span class="n">string</span> <span class="n">name</span><span class="o">,</span> <span class="mi">2</span><span class="o">:</span> <span class="n">string</span> <span class="n">uploadedJarLocation</span><span class="o">,</span> <span class="mi">3</span><span class="o">:</span> <span class="n">string</span> <span class="n">jsonConf</span><span class="o">,</span> <span class="mi">4</span><span class="o">:</span> <span class="n">StormTopology</span> <span class="n">topology</span><span class="o">)</span> <span class="kd">throws</span> <span class="o">(</span><span class="mi">1</span><span class="o">:</span> <span class="n">AlreadyAliveException</span> <span class="n">e</span><span class="o">,</span> <span class="mi">2</span><span class="o">:</span> <span class="n">InvalidTopologyException</span> <span class="n">ite</span><span class="o">);</span>
 </code></pre></div>
 <p>Finally, one of the key things to do in a non-JVM DSL is make it easy to define the entire topology in one file (the bolts, spouts, and the definition of the topology).</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Distributed-RPC.html b/content/releases/1.2.1/Distributed-RPC.html
index 73e2569..2baa19b 100644
--- a/content/releases/1.2.1/Distributed-RPC.html
+++ b/content/releases/1.2.1/Distributed-RPC.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The idea behind distributed RPC (DRPC) is to parallelize the computation of really intense functions on the fly using Storm. The Storm topology takes in as input a stream of function arguments, and it emits an output stream of the results for each of those function calls. </p>
+<div class="documentation-content"><p>The idea behind distributed RPC (DRPC) is to parallelize the computation of really intense functions on the fly using Storm. The Storm topology takes in as input a stream of function arguments, and it emits an output stream of the results for each of those function calls. </p>
 
 <p>DRPC is not so much a feature of Storm as it is a pattern expressed from Storm&#39;s primitives of streams, spouts, bolts, and topologies. DRPC could have been packaged as a separate library from Storm, but it&#39;s so useful that it&#39;s bundled with Storm.</p>
 
@@ -330,7 +330,7 @@
 <li>KeyedFairBolt for weaving the processing of multiple requests at the same time</li>
 <li>How to use <code>CoordinatedBolt</code> directly</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Eventlogging.html b/content/releases/1.2.1/Eventlogging.html
index 8d9a05f..4557c1b 100644
--- a/content/releases/1.2.1/Eventlogging.html
+++ b/content/releases/1.2.1/Eventlogging.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="introduction">Introduction</h1>
+<div class="documentation-content"><h1 id="introduction">Introduction</h1>
 
 <p>The topology event inspector provides the ability to view tuples as they flow through the different stages of a Storm topology.
 This can be useful for inspecting the tuples emitted at a spout or a bolt while the topology is running, without stopping or redeploying it. Turning on event logging does not affect the normal flow of tuples from the spouts to the bolts.</p>
@@ -269,7 +269,7 @@
 
 <p>Please keep in mind that EventLoggerBolt is just another kind of Bolt, so the overall throughput of the topology will drop when the registered event loggers cannot keep up with incoming events; treat it with the same care as any other Bolt.
 One way to avoid this is to make your IEventLogger implementation <code>non-blocking</code>.</p>
-
+</div>
 
 
 	          </div>
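The non-blocking suggestion above can be sketched independently of the actual IEventLogger interface: hand each event to a bounded queue that a background thread drains, and drop events rather than stall the calling bolt when the queue is full. The class below is a hypothetical illustration of that pattern, not Storm's API:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of a non-blocking event logger: the bolt thread
// only ever calls offer(), which never blocks; a daemon thread does the
// actual (potentially slow) writing.
class NonBlockingEventLoggerSketch {
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);
    private final AtomicLong dropped = new AtomicLong();

    NonBlockingEventLoggerSketch() {
        Thread drainer = new Thread(() -> {
            try {
                while (true) {
                    String event = queue.take();
                    // A real implementation would append to a log file or
                    // forward to an external store here.
                    System.out.println(event);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        drainer.setDaemon(true);
        drainer.start();
    }

    /** Called from the bolt thread; returns immediately. */
    boolean log(String event) {
        if (queue.offer(event)) {   // non-blocking enqueue
            return true;
        }
        dropped.incrementAndGet();  // shed load instead of back-pressuring
        return false;
    }

    long droppedCount() {
        return dropped.get();
    }
}
```

Dropping events is acceptable here because event logging is a debugging aid; losing a logged event is preferable to slowing the topology's tuple flow.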
diff --git a/content/releases/1.2.1/FAQ.html b/content/releases/1.2.1/FAQ.html
index 81e8d50..562ee8d 100644
--- a/content/releases/1.2.1/FAQ.html
+++ b/content/releases/1.2.1/FAQ.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="best-practices">Best Practices</h2>
+<div class="documentation-content"><h2 id="best-practices">Best Practices</h2>
 
 <h3 id="what-rules-of-thumb-can-you-give-me-for-configuring-storm-trident">What rules of thumb can you give me for configuring Storm+Trident?</h3>
 
@@ -276,7 +276,7 @@
 <li>When possible, make your process incremental: each value that comes in makes the answer more and more true. A Trident ReducerAggregator is an operator that takes a prior result and a set of new records and returns a new result. This lets the result be cached and serialized to a datastore; if a server drops offline for a day and then comes back with a full day&#39;s worth of data in a rush, the old results will be calmly retrieved and updated.</li>
 <li>Lambda architecture: Record all events into an archival store (S3, HBase, HDFS) on receipt. In the fast layer, once the time window is clear, process the bucket to get an actionable answer, and ignore everything older than the time window. Periodically run a global aggregation to calculate a &quot;correct&quot; answer.</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Fault-tolerance.html b/content/releases/1.2.1/Fault-tolerance.html
index bf71b1a..61cbf6b 100644
--- a/content/releases/1.2.1/Fault-tolerance.html
+++ b/content/releases/1.2.1/Fault-tolerance.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains the design details of Storm that make it a fault-tolerant system.</p>
+<div class="documentation-content"><p>This page explains the design details of Storm that make it a fault-tolerant system.</p>
 
 <h2 id="what-happens-when-a-worker-dies">What happens when a worker dies?</h2>
 
@@ -169,7 +169,7 @@
 <h2 id="how-does-storm-guarantee-data-processing">How does Storm guarantee data processing?</h2>
 
 <p>Storm provides mechanisms to guarantee data processing even if nodes die or messages are lost. See <a href="Guaranteeing-message-processing.html">Guaranteeing message processing</a> for the details.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Guaranteeing-message-processing.html b/content/releases/1.2.1/Guaranteeing-message-processing.html
index fe6aadc..e7a81c4 100644
--- a/content/releases/1.2.1/Guaranteeing-message-processing.html
+++ b/content/releases/1.2.1/Guaranteeing-message-processing.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm offers several different levels of guaranteed message processing, including best effort, at least once, and exactly once through <a href="Trident-tutorial.html">Trident</a>.
+<div class="documentation-content"><p>Storm offers several different levels of guaranteed message processing, including best effort, at least once, and exactly once through <a href="Trident-tutorial.html">Trident</a>.
 This page describes how Storm can guarantee at least once processing.</p>
 
 <h3 id="what-does-it-mean-for-a-message-to-be-fully-processed">What does it mean for a message to be &quot;fully processed&quot;?</h3>
@@ -301,7 +301,7 @@
 <p>The second way is to remove reliability on a message by message basis. You can turn off tracking for an individual spout tuple by omitting a message id in the <code>SpoutOutputCollector.emit</code> method.</p>
 
 <p>Finally, if you don&#39;t care if a particular subset of the tuples downstream in the topology fail to be processed, you can emit them as unanchored tuples. Since they&#39;re not anchored to any spout tuples, they won&#39;t cause any spout tuples to fail if they aren&#39;t acked.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Hooks.html b/content/releases/1.2.1/Hooks.html
index 138481a..67e52d3 100644
--- a/content/releases/1.2.1/Hooks.html
+++ b/content/releases/1.2.1/Hooks.html
@@ -144,13 +144,13 @@
 
 <p class="post-meta"></p>
 
-<p>Storm provides hooks with which you can insert custom code to run on any number of events within Storm. You create a hook by extending the <a href="javadocs/org/apache/storm/hooks/BaseTaskHook.html">BaseTaskHook</a> class and overriding the appropriate method for the event you want to catch. There are two ways to register your hook:</p>
+<div class="documentation-content"><p>Storm provides hooks with which you can insert custom code to run on any number of events within Storm. You create a hook by extending the <a href="javadocs/org/apache/storm/hooks/BaseTaskHook.html">BaseTaskHook</a> class and overriding the appropriate method for the event you want to catch. There are two ways to register your hook:</p>
 
 <ol>
 <li>In the open method of your spout or prepare method of your bolt using the <a href="javadocs/org/apache/storm/task/TopologyContext.html#addTaskHook">TopologyContext</a> method.</li>
 <li>Through the Storm configuration using the <a href="javadocs/org/apache/storm/Config.html#TOPOLOGY_AUTO_TASK_HOOKS">&quot;topology.auto.task.hooks&quot;</a> config. These hooks are automatically registered in every spout or bolt, and are useful for doing things like integrating with a custom monitoring system.</li>
 </ol>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Implementation-docs.html b/content/releases/1.2.1/Implementation-docs.html
index 6dcbf6a..e522728 100644
--- a/content/releases/1.2.1/Implementation-docs.html
+++ b/content/releases/1.2.1/Implementation-docs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This section of the wiki is dedicated to explaining how Storm is implemented. You should have a good grasp of how to use Storm before reading these sections. </p>
+<div class="documentation-content"><p>This section of the wiki is dedicated to explaining how Storm is implemented. You should have a good grasp of how to use Storm before reading these sections. </p>
 
 <ul>
 <li><a href="Structure-of-the-codebase.html">Structure of the codebase</a></li>
@@ -154,7 +154,7 @@
 <li><a href="nimbus-ha-design.html">Nimbus HA</a></li>
 <li><a href="storm-sql-internal.html">Storm SQL</a></li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Installing-native-dependencies.html b/content/releases/1.2.1/Installing-native-dependencies.html
index 1371936..b7fee03 100644
--- a/content/releases/1.2.1/Installing-native-dependencies.html
+++ b/content/releases/1.2.1/Installing-native-dependencies.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The native dependencies are only needed on actual Storm clusters. When running Storm in local mode, Storm uses a pure Java messaging system so that you don&#39;t need to install native dependencies on your development machine.</p>
+<div class="documentation-content"><p>The native dependencies are only needed on actual Storm clusters. When running Storm in local mode, Storm uses a pure Java messaging system so that you don&#39;t need to install native dependencies on your development machine.</p>
 
 <p>Installing ZeroMQ and JZMQ is usually straightforward. Sometimes, however, people run into issues with autoconf and get strange errors. If you run into any issues, please email the <a href="http://groups.google.com/group/storm-user">Storm mailing list</a> or come get help in the #storm-user room on freenode. </p>
 
@@ -175,7 +175,7 @@
 </ol>
 
 <p>If you run into any errors when running <code>./configure</code>, <a href="http://stackoverflow.com/questions/3522248/how-do-i-compile-jzmq-for-zeromq-on-osx">this thread</a> may provide a solution.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Joins.html b/content/releases/1.2.1/Joins.html
index b95e985..410e45a 100644
--- a/content/releases/1.2.1/Joins.html
+++ b/content/releases/1.2.1/Joins.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm core supports joining multiple data streams into one with the help of <code>JoinBolt</code>.
+<div class="documentation-content"><p>Storm core supports joining multiple data streams into one with the help of <code>JoinBolt</code>.
 <code>JoinBolt</code> is a windowed bolt, i.e. it waits for the configured window duration to match up the
 tuples among the streams being joined. This helps align the streams within a window boundary.</p>
 
@@ -272,7 +272,7 @@
 <li>Lastly, keep the window size to the minimum value necessary for solving the problem at hand.</li>
 </ul></li>
 </ol>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Kestrel-and-Storm.html b/content/releases/1.2.1/Kestrel-and-Storm.html
index c31597d..bd1fb02 100644
--- a/content/releases/1.2.1/Kestrel-and-Storm.html
+++ b/content/releases/1.2.1/Kestrel-and-Storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains how to use Storm to consume items from a Kestrel cluster.</p>
+<div class="documentation-content"><p>This page explains how to use Storm to consume items from a Kestrel cluster.</p>
 
 <h2 id="preliminaries">Preliminaries</h2>
 
@@ -334,7 +334,7 @@
 Now execute the program to add items to the queue and launch the Storm topology. The order in which you launch the programs is of no importance.
 
 If you run the topology with TOPOLOGY_DEBUG you should see tuples being emitted in the topology.
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Lifecycle-of-a-topology.html b/content/releases/1.2.1/Lifecycle-of-a-topology.html
index 7239101..d91ed32 100644
--- a/content/releases/1.2.1/Lifecycle-of-a-topology.html
+++ b/content/releases/1.2.1/Lifecycle-of-a-topology.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>(<strong>NOTE</strong>: this page is based on the 0.7.1 code; many things have changed since then, including a split between tasks and executors, and a reorganization of the code under <code>storm-core/src</code> rather than <code>src/</code>.)</p>
+<div class="documentation-content"><p>(<strong>NOTE</strong>: this page is based on the 0.7.1 code; many things have changed since then, including a split between tasks and executors, and a reorganization of the code under <code>storm-core/src</code> rather than <code>src/</code>.)</p>
 
 <p>This page explains in detail the lifecycle of a topology: from running the &quot;storm jar&quot; command, to uploading the topology to Nimbus, to the supervisors starting and stopping workers, to workers and tasks setting themselves up. It also explains how Nimbus monitors topologies and how topologies are shut down when they are killed.</p>
 
@@ -261,7 +261,7 @@
 <li>Removing a topology cleans out the assignment and static information from ZK <a href="https://github.com/apache/storm/blob/0.7.1/src/clj/org/apache/storm/daemon/nimbus.clj#L116">code</a></li>
 <li>A separate cleanup thread runs the <code>do-cleanup</code> function which will clean up the heartbeat dir and the jars/configs stored locally. <a href="https://github.com/apache/storm/blob/0.7.1/src/clj/org/apache/storm/daemon/nimbus.clj#L577">code</a></li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Local-mode.html b/content/releases/1.2.1/Local-mode.html
index 5149afd..9152f7e 100644
--- a/content/releases/1.2.1/Local-mode.html
+++ b/content/releases/1.2.1/Local-mode.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Local mode simulates a Storm cluster in process and is useful for developing and testing topologies. Running topologies in local mode is similar to running topologies <a href="Running-topologies-on-a-production-cluster.html">on a cluster</a>. </p>
+<div class="documentation-content"><p>Local mode simulates a Storm cluster in process and is useful for developing and testing topologies. Running topologies in local mode is similar to running topologies <a href="Running-topologies-on-a-production-cluster.html">on a cluster</a>. </p>
 
 <p>To create an in-process cluster, simply use the <code>LocalCluster</code> class. For example:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kn">import</span> <span class="nn">org.apache.storm.LocalCluster</span><span class="o">;</span>
@@ -164,7 +164,7 @@
 <li><strong>Config.TOPOLOGY_MAX_TASK_PARALLELISM</strong>: This config puts a ceiling on the number of threads spawned for a single component. Oftentimes production topologies have a lot of parallelism (hundreds of threads), which places an unreasonable load on a machine when trying to test the topology in local mode. This config lets you easily control that parallelism.</li>
 <li><strong>Config.TOPOLOGY_DEBUG</strong>: When this is set to true, Storm will log a message every time a tuple is emitted from any spout or bolt. This is extremely useful for debugging.</li>
 </ol>
-
+</div>
 
 
 	          </div>
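The two settings above can be sketched with plain string keys on an ordinary conf map. The key strings below are my assumed equivalents of the named Config constants — verify them against your Storm version:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a conf map for local-mode testing. The string keys are
// assumed equivalents of the Config constants named in the text.
class LocalModeConfSketch {
    static Map<String, Object> localTestConf() {
        Map<String, Object> conf = new HashMap<>();
        // Cap threads per component so a heavily parallel production
        // topology stays lightweight when run in local mode.
        conf.put("topology.max.task.parallelism", 3);
        // Log every tuple emitted by any spout or bolt.
        conf.put("topology.debug", true);
        return conf;
    }
}
```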
diff --git a/content/releases/1.2.1/Logs.html b/content/releases/1.2.1/Logs.html
index 4d8c3af..314eff2 100644
--- a/content/releases/1.2.1/Logs.html
+++ b/content/releases/1.2.1/Logs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Logs in Storm are essential for tracking the status, operations, error messages and debug information for all the 
+<div class="documentation-content"><p>Logs in Storm are essential for tracking the status, operations, error messages and debug information for all the 
 daemons (e.g., nimbus, supervisor, logviewer, drpc, ui, pacemaker) and topologies&#39; workers.</p>
 
 <h3 id="location-of-the-logs">Location of the Logs</h3>
@@ -171,7 +171,7 @@
 <p>Search in a topology: a user can also search for a string across a certain topology by clicking the magnifying-glass icon at the top right corner of the UI page. The UI then searches all the supervisor nodes in a distributed way to find the matching string in all logs for this topology. The search covers either normal text log files or rolled zip log files, depending on whether the &quot;Search archived logs:&quot; box is checked. The matched results are shown on the UI as URL links, directing the user to the relevant logs on each supervisor node. This feature is very helpful for finding problematic supervisor nodes running the topology.</p>
 
 <p><img src="images/search-a-topology.png" alt="Search in a topology" title="Search in a topology"></p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Maven.html b/content/releases/1.2.1/Maven.html
index 2a9d037..f356085 100644
--- a/content/releases/1.2.1/Maven.html
+++ b/content/releases/1.2.1/Maven.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>To develop topologies, you&#39;ll need the Storm jars on your classpath. You should either include the unpacked jars in the classpath for your project or use Maven to include Storm as a development dependency. Storm is hosted on Maven Central. To include Storm in your project as a development dependency, add the following to your pom.xml:</p>
+<div class="documentation-content"><p>To develop topologies, you&#39;ll need the Storm jars on your classpath. You should either include the unpacked jars in the classpath for your project or use Maven to include Storm as a development dependency. Storm is hosted on Maven Central. To include Storm in your project as a development dependency, add the following to your pom.xml:</p>
 <div class="highlight"><pre><code class="language-xml" data-lang="xml"><span class="nt">&lt;dependency&gt;</span>
   <span class="nt">&lt;groupId&gt;</span>org.apache.storm<span class="nt">&lt;/groupId&gt;</span>
   <span class="nt">&lt;artifactId&gt;</span>storm-core<span class="nt">&lt;/artifactId&gt;</span>
@@ -157,7 +157,7 @@
 <h3 id="developing-storm">Developing Storm</h3>
 
 <p>Please refer to <a href="http://github.com/apache/storm/blob/v1.2.1/DEVELOPER.md">DEVELOPER.md</a> for more details.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Message-passing-implementation.html b/content/releases/1.2.1/Message-passing-implementation.html
index 0efb3f1..fc46bb0 100644
--- a/content/releases/1.2.1/Message-passing-implementation.html
+++ b/content/releases/1.2.1/Message-passing-implementation.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>(Note: this walkthrough is out of date as of 0.8.0. 0.8.0 revamped the message passing infrastructure to be based on the Disruptor)</p>
+<div class="documentation-content"><p>(Note: this walkthrough is out of date as of 0.8.0. 0.8.0 revamped the message passing infrastructure to be based on the Disruptor)</p>
 
 <p>This page walks through how emitting and transferring tuples works in Storm.</p>
 
@@ -186,7 +186,7 @@
 </ul></li>
 </ul></li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Metrics.html b/content/releases/1.2.1/Metrics.html
index 26d2047..94f1e8e 100644
--- a/content/releases/1.2.1/Metrics.html
+++ b/content/releases/1.2.1/Metrics.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm exposes a metrics interface to report summary statistics across the full topology.
+<div class="documentation-content"><p>Storm exposes a metrics interface to report summary statistics across the full topology.
 The numbers you see on the UI come from some of these built in metrics, but are reported through the worker heartbeats instead of through the IMetricsConsumer described below.</p>
 
 <h3 id="metric-types">Metric Types</h3>
@@ -466,7 +466,7 @@
 <li><code>newWorkerEvent</code> is 1 when a worker is first started and 0 all other times.  This can be used to tell when a worker has crashed and is restarted.</li>
 <li><code>startTimeSecs</code> is when the worker started in seconds since the epoch</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Multilang-protocol.html b/content/releases/1.2.1/Multilang-protocol.html
index 3f3accd..5b65343 100644
--- a/content/releases/1.2.1/Multilang-protocol.html
+++ b/content/releases/1.2.1/Multilang-protocol.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains the multilang protocol as of Storm 0.7.1. Versions prior to 0.7.1 used a somewhat different protocol, documented [here](Storm-multi-language-protocol-(versions-0.7.0-and-below).html).</p>
+<div class="documentation-content"><p>This page explains the multilang protocol as of Storm 0.7.1. Versions prior to 0.7.1 used a somewhat different protocol, documented <a href="Storm-multi-language-protocol-(versions-0.7.0-and-below).html">here</a>.</p>
 
 <h1 id="storm-multi-language-protocol">Storm Multi-Language Protocol</h1>
 
@@ -436,7 +436,7 @@
 </code></pre></div>
 <p>When subprocess receives heartbeat tuple, it must send a <code>sync</code> command back to
 ShellBolt.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Pacemaker.html b/content/releases/1.2.1/Pacemaker.html
index 9257f35..7353e9a 100644
--- a/content/releases/1.2.1/Pacemaker.html
+++ b/content/releases/1.2.1/Pacemaker.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="introduction">Introduction</h3>
+<div class="documentation-content"><h3 id="introduction">Introduction</h3>
 
 <p>Pacemaker is a Storm daemon designed to process heartbeats from workers. As Storm is scaled up, ZooKeeper begins to become a bottleneck due to high volumes of writes from workers doing heartbeats. Lots of writes to disk and too much traffic across the network are generated as ZooKeeper tries to maintain consistency.</p>
 
@@ -258,7 +258,7 @@
 On a 270 supervisor cluster, fully scheduled with topologies, Pacemaker resource utilization was 70% of one core and nearly 1GiB of RAM on a machine with 4 <code>Intel(R) Xeon(R) CPU E5530 @ 2.40GHz</code> and 24GiB of RAM.</p>
 
 <p>Pacemaker now supports HA. Multiple Pacemaker instances can be used at once in a Storm cluster to allow massive scalability. Just include the names of the Pacemaker hosts in the pacemaker.servers config, and workers and Nimbus will start communicating with them. They&#39;re fault tolerant as well. The system keeps on working as long as there is at least one Pacemaker left running - provided it can handle the load.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Powered-By.html b/content/releases/1.2.1/Powered-By.html
index b939e4f..eeb9eb2 100644
--- a/content/releases/1.2.1/Powered-By.html
+++ b/content/releases/1.2.1/Powered-By.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Want to be added to this page? Send an email <a href="mailto:nathan.marz@gmail.com">here</a>.</p>
+<div class="documentation-content"><p>Want to be added to this page? Send an email <a href="mailto:nathan.marz@gmail.com">here</a>.</p>
 
 <table>
 
@@ -1169,7 +1169,7 @@
 
 
 </table>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Project-ideas.html b/content/releases/1.2.1/Project-ideas.html
index ee22774..625f451 100644
--- a/content/releases/1.2.1/Project-ideas.html
+++ b/content/releases/1.2.1/Project-ideas.html
@@ -144,12 +144,12 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><strong>DSLs for non-JVM languages:</strong> These DSLs should be all-inclusive and not require any Java for the creation of topologies, spouts, or bolts. Since topologies are <a href="http://thrift.apache.org/">Thrift</a> structs, Nimbus is a Thrift service, and bolts can be written in any language, this is possible.</li>
 <li><strong>Online machine learning algorithms:</strong> Something like <a href="http://mahout.apache.org/">Mahout</a> but for online algorithms</li>
 <li><strong>Suite of performance benchmarks:</strong> These benchmarks should test Storm&#39;s performance on CPU and IO intensive workloads. There should be benchmarks for different classes of applications, such as stream processing (where throughput is the priority) and distributed RPC (where latency is the priority). </li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Rationale.html b/content/releases/1.2.1/Rationale.html
index 2fd316d..6dc60f4 100644
--- a/content/releases/1.2.1/Rationale.html
+++ b/content/releases/1.2.1/Rationale.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The past decade has seen a revolution in data processing. MapReduce, Hadoop, and related technologies have made it possible to store and process data at scales previously unthinkable. Unfortunately, these data processing technologies are not realtime systems, nor are they meant to be. There&#39;s no hack that will turn Hadoop into a realtime system; realtime data processing has a fundamentally different set of requirements than batch processing.</p>
+<div class="documentation-content"><p>The past decade has seen a revolution in data processing. MapReduce, Hadoop, and related technologies have made it possible to store and process data at scales previously unthinkable. Unfortunately, these data processing technologies are not realtime systems, nor are they meant to be. There&#39;s no hack that will turn Hadoop into a realtime system; realtime data processing has a fundamentally different set of requirements than batch processing.</p>
 
 <p>However, realtime data processing at massive scale is becoming more and more of a requirement for businesses. The lack of a &quot;Hadoop of realtime&quot; has become the biggest hole in the data processing ecosystem.</p>
 
@@ -176,7 +176,7 @@
 <li><strong>Fault-tolerant</strong>: If there are faults during execution of your computation, Storm will reassign tasks as necessary. Storm makes sure that a computation can run forever (or until you kill the computation).</li>
 <li><strong>Programming language agnostic</strong>: Robust and scalable realtime processing shouldn&#39;t be limited to a single platform. Storm topologies and processing components can be defined in any language, making Storm accessible to nearly anyone.</li>
 </ol>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Resource_Aware_Scheduler_overview.html b/content/releases/1.2.1/Resource_Aware_Scheduler_overview.html
index 2055f21..8c3a5d1 100644
--- a/content/releases/1.2.1/Resource_Aware_Scheduler_overview.html
+++ b/content/releases/1.2.1/Resource_Aware_Scheduler_overview.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="introduction">Introduction</h1>
+<div class="documentation-content"><h1 id="introduction">Introduction</h1>
 
 <p>The purpose of this document is to provide a description of the Resource Aware Scheduler for the Storm distributed real-time computation system.  This document provides a high-level description of the Resource Aware Scheduler in Storm.  Some of the benefits of using a resource aware scheduler on top of Storm are outlined in the following presentation at Hadoop Summit 2016:</p>
 
@@ -617,7 +617,7 @@
 <td><img src="images/ras_new_strategy_runtime_yahoo.png" alt=""></td>
 </tr>
 </tbody></table>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Running-topologies-on-a-production-cluster.html b/content/releases/1.2.1/Running-topologies-on-a-production-cluster.html
index c49b731..af54a31 100644
--- a/content/releases/1.2.1/Running-topologies-on-a-production-cluster.html
+++ b/content/releases/1.2.1/Running-topologies-on-a-production-cluster.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Running topologies on a production cluster is similar to running in <a href="Local-mode.html">Local mode</a>. Here are the steps:</p>
+<div class="documentation-content"><p>Running topologies on a production cluster is similar to running in <a href="Local-mode.html">Local mode</a>. Here are the steps:</p>
 
 <p>1) Define the topology (Use <a href="javadocs/org/apache/storm/topology/TopologyBuilder.html">TopologyBuilder</a> if defining using Java)</p>
 
@@ -212,7 +212,7 @@
 <p>The best place to monitor a topology is using the Storm UI. The Storm UI provides information about errors happening in tasks and fine-grained stats on the throughput and latency performance of each component of each running topology.</p>
 
 <p>You can also look at the worker logs on the cluster machines.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/SECURITY.html b/content/releases/1.2.1/SECURITY.html
index 8a6978f..9515823 100644
--- a/content/releases/1.2.1/SECURITY.html
+++ b/content/releases/1.2.1/SECURITY.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="running-apache-storm-securely">Running Apache Storm Securely</h1>
+<div class="documentation-content"><h1 id="running-apache-storm-securely">Running Apache Storm Securely</h1>
 
 <p>Apache Storm offers a range of configuration options when trying to secure
 your cluster.  By default all authentication and authorization is disabled but 
@@ -683,7 +683,7 @@
  | storm.zookeeper.topology.auth.payload | A string representing the payload for topology Zookeeper authentication. |</p>
 
 <p>Note: If storm.zookeeper.topology.auth.payload isn&#39;t set, Storm will generate a ZooKeeper secret payload for MD5-digest with the generateZookeeperDigestSecretPayload() method.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/STORM-UI-REST-API.html b/content/releases/1.2.1/STORM-UI-REST-API.html
index 92aca68..12e9159 100644
--- a/content/releases/1.2.1/STORM-UI-REST-API.html
+++ b/content/releases/1.2.1/STORM-UI-REST-API.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The Storm UI daemon provides a REST API that allows you to interact with a Storm cluster, which includes retrieving
+<div class="documentation-content"><p>The Storm UI daemon provides a REST API that allows you to interact with a Storm cluster, which includes retrieving
 metrics data and configuration information as well as management operations such as starting or stopping topologies.</p>
 
 <h1 id="data-format">Data format</h1>
@@ -2936,7 +2936,7 @@
   </span><span class="s2">"error"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Internal Server Error"</span><span class="p">,</span><span class="w">
   </span><span class="s2">"errorMessage"</span><span class="p">:</span><span class="w"> </span><span class="s2">"java.lang.NullPointerException</span><span class="se">\n\t</span><span class="s2">at clojure.core$name.invoke(core.clj:1505)</span><span class="se">\n\t</span><span class="s2">at org.apache.storm.ui.core$component_page.invoke(core.clj:752)</span><span class="se">\n\t</span><span class="s2">at org.apache.storm.ui.core$fn__7766.invoke(core.clj:782)</span><span class="se">\n\t</span><span class="s2">at compojure.core$make_route$fn__5755.invoke(core.clj:93)</span><span class="se">\n\t</span><span class="s2">at compojure.core$if_route$fn__5743.invoke(core.clj:39)</span><span class="se">\n\t</span><span class="s2">at compojure.core$if_method$fn__5736.invoke(core.clj:24)</span><span class="se">\n\t</span><span class="s2">at compojure.core$routing$fn__5761.invoke(core.clj:106)</span><span class="se">\n\t</span><span class="s2">at clojure.core$some.invoke(core.clj:2443)</span><span class="se">\n\t</span><span class="s2">at compojure.core$routing.doInvoke(core.clj:106)</span><span class="se">\n\t</span><span class="s2">at clojure.lang.RestFn.applyTo(RestFn.java:139)</span><span class="se">\n\t</span><span class="s2">at clojure.core$apply.invoke(core.clj:619)</span><span class="se">\n\t</span><span class="s2">at compojure.core$routes$fn__5765.invoke(core.clj:111)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.reload$wrap_reload$fn__6880.invoke(reload.clj:14)</span><span class="se">\n\t</span><span class="s2">at org.apache.storm.ui.core$catch_errors$fn__7800.invoke(core.clj:836)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.keyword_params$wrap_keyword_params$fn__6319.invoke(keyword_params.clj:27)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.nested_params$wrap_nested_params$fn__6358.invoke(nested_params.clj:65)</span><span class="se">\n\t</span><span class="s2">at 
ring.middleware.params$wrap_params$fn__6291.invoke(params.clj:55)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.multipart_params$wrap_multipart_params$fn__6386.invoke(multipart_params.clj:103)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.flash$wrap_flash$fn__6675.invoke(flash.clj:14)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.session$wrap_session$fn__6664.invoke(session.clj:43)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.cookies$wrap_cookies$fn__6595.invoke(cookies.clj:160)</span><span class="se">\n\t</span><span class="s2">at ring.adapter.jetty$proxy_handler$fn__6112.invoke(jetty.clj:16)</span><span class="se">\n\t</span><span class="s2">at ring.adapter.jetty.proxy$org.mortbay.jetty.handler.AbstractHandler$0.handle(Unknown Source)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.Server.handle(Server.java:326)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)</span><span class="se">\n</span><span 
class="s2">"</span><span class="w">
 </span><span class="p">}</span><span class="w">
-</span></code></pre></div>
+</span></code></pre></div></div>
 
 
 	          </div>
diff --git "a/content/releases/1.2.1/Serialization-\050prior-to-0.6.0\051.html" "b/content/releases/1.2.1/Serialization-\050prior-to-0.6.0\051.html"
index dab36c9..8b1b245 100644
--- "a/content/releases/1.2.1/Serialization-\050prior-to-0.6.0\051.html"
+++ "b/content/releases/1.2.1/Serialization-\050prior-to-0.6.0\051.html"
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Tuples can be comprised of objects of any types. Since Storm is a distributed system, it needs to know how to serialize and deserialize objects when they&#39;re passed between tasks. By default Storm can serialize ints, shorts, longs, floats, doubles, bools, bytes, strings, and byte arrays, but if you want to use another type in your tuples, you&#39;ll need to implement a custom serializer.</p>
+<div class="documentation-content"><p>Tuples can be comprised of objects of any type. Since Storm is a distributed system, it needs to know how to serialize and deserialize objects when they&#39;re passed between tasks. By default Storm can serialize ints, shorts, longs, floats, doubles, bools, bytes, strings, and byte arrays, but if you want to use another type in your tuples, you&#39;ll need to implement a custom serializer.</p>
 
 <h3 id="dynamic-typing">Dynamic typing</h3>
 
@@ -188,7 +188,7 @@
 <p>Storm provides helpers for registering serializers in a topology config. The <a href="javadocs/backtype/storm/Config.html">Config</a> class has a method called <code>addSerialization</code> that takes in a serializer class to add to the config.</p>
 
 <p>There&#39;s an advanced config called Config.TOPOLOGY_SKIP_MISSING_SERIALIZATIONS. If you set this to true, Storm will ignore any serializations that are registered but do not have their code available on the classpath. Otherwise, Storm will throw errors when it can&#39;t find a serialization. This is useful if you run many topologies on a cluster that each have different serializations, but you want to declare all the serializations across all topologies in the <code>storm.yaml</code> files.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Serialization.html b/content/releases/1.2.1/Serialization.html
index b52937a..a79aeed 100644
--- a/content/releases/1.2.1/Serialization.html
+++ b/content/releases/1.2.1/Serialization.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page is about how the serialization system in Storm works for versions 0.6.0 and onwards. Storm used a different serialization system prior to 0.6.0 which is documented on <a href="Serialization-(prior-to-0.6.0).html">Serialization (prior to 0.6.0)</a>. </p>
+<div class="documentation-content"><p>This page is about how the serialization system in Storm works for versions 0.6.0 and onwards. Storm used a different serialization system prior to 0.6.0 which is documented on <a href="Serialization-(prior-to-0.6.0).html">Serialization (prior to 0.6.0)</a>. </p>
 
 <p>Tuples can be comprised of objects of any type. Since Storm is a distributed system, it needs to know how to serialize and deserialize objects when they&#39;re passed between tasks.</p>
 
@@ -200,7 +200,7 @@
 <p>When a topology is submitted, a single set of serializations is chosen to be used by all components in the topology for sending messages. This is done by merging the component-specific serializer registrations with the regular set of serialization registrations. If two components define serializers for the same class, one of the serializers is chosen arbitrarily.</p>
 
 <p>To force a serializer for a particular class if there&#39;s a conflict between two component-specific registrations, just define the serializer you want to use in the topology-specific configuration. The topology-specific configuration has precedence over component-specific configurations for serialization registrations.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Serializers.html b/content/releases/1.2.1/Serializers.html
index 200c717..f2d3acb 100644
--- a/content/releases/1.2.1/Serializers.html
+++ b/content/releases/1.2.1/Serializers.html
@@ -144,10 +144,10 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="https://github.com/rapportive-oss/storm-json">storm-json</a>: Simple JSON serializer for Storm</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Setting-up-a-Storm-cluster.html b/content/releases/1.2.1/Setting-up-a-Storm-cluster.html
index 2fcab0c..0592dd3 100644
--- a/content/releases/1.2.1/Setting-up-a-Storm-cluster.html
+++ b/content/releases/1.2.1/Setting-up-a-Storm-cluster.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page outlines the steps for getting a Storm cluster up and running. If you&#39;re on AWS, you should check out the <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> project. <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> completely automates the provisioning, configuration, and installation of Storm clusters on EC2. It also sets up Ganglia for you so you can monitor CPU, disk, and network usage.</p>
+<div class="documentation-content"><p>This page outlines the steps for getting a Storm cluster up and running. If you&#39;re on AWS, you should check out the <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> project. <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> completely automates the provisioning, configuration, and installation of Storm clusters on EC2. It also sets up Ganglia for you so you can monitor CPU, disk, and network usage.</p>
 
 <p>If you run into difficulties with your Storm cluster, first check for a solution in the <a href="Troubleshooting.html">Troubleshooting</a> page. Otherwise, email the mailing list.</p>
 
@@ -246,7 +246,7 @@
 </ol>
 
 <p>As you can see, running the daemons is very straightforward. The daemons will log to the logs/ directory in wherever you extracted the Storm release.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Setting-up-development-environment.html b/content/releases/1.2.1/Setting-up-development-environment.html
index 73bbd95..5e8e70d 100644
--- a/content/releases/1.2.1/Setting-up-development-environment.html
+++ b/content/releases/1.2.1/Setting-up-development-environment.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page outlines what you need to do to get a Storm development environment set up. In summary, the steps are:</p>
+<div class="documentation-content"><p>This page outlines what you need to do to get a Storm development environment set up. In summary, the steps are:</p>
 
 <ol>
 <li>Download a <a href="..//downloads.html">Storm release</a>, unpack it, and put the unpacked <code>bin/</code> directory on your PATH</li>
@@ -171,7 +171,7 @@
 
 <p>The previous step installed the <code>storm</code> client on your machine which is used to communicate with remote Storm clusters. Now all you have to do is tell the client which Storm cluster to talk to. To do this, all you have to do is put the host address of the master in the <code>~/.storm/storm.yaml</code> file. It should look something like this:</p>
 <div class="highlight"><pre><code class="language-" data-lang="">nimbus.seeds: ["123.45.678.890"]
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Spout-implementations.html b/content/releases/1.2.1/Spout-implementations.html
index 64223b1..ad75ae1 100644
--- a/content/releases/1.2.1/Spout-implementations.html
+++ b/content/releases/1.2.1/Spout-implementations.html
@@ -144,14 +144,14 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="https://github.com/nathanmarz/storm-kestrel">storm-kestrel</a>: Adapter to use Kestrel as a spout</li>
 <li><a href="https://github.com/rapportive-oss/storm-amqp-spout">storm-amqp-spout</a>: Adapter to use AMQP source as a spout</li>
 <li><a href="https://github.com/ptgoetz/storm-jms">storm-jms</a>: Adapter to use a JMS source as a spout</li>
 <li><a href="https://github.com/sorenmacbeth/storm-redis-pubsub">storm-redis-pubsub</a>: A spout that subscribes to a Redis pubsub stream</li>
 <li><a href="https://github.com/haitaoyao/storm-beanstalkd-spout">storm-beanstalkd-spout</a>: A spout that subscribes to a beanstalkd queue</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/State-checkpointing.html b/content/releases/1.2.1/State-checkpointing.html
index 458070b..1425498 100644
--- a/content/releases/1.2.1/State-checkpointing.html
+++ b/content/releases/1.2.1/State-checkpointing.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="state-support-in-core-storm">State support in core storm</h1>
+<div class="documentation-content"><h1 id="state-support-in-core-storm">State support in core storm</h1>
 
 <p>Storm core has abstractions for bolts to save and retrieve the state of their operations. There is a default in-memory
 based state implementation and also a Redis backed implementation that provides state persistence.</p>
@@ -419,7 +419,7 @@
 </ul>
 
 <p><code>org.apache.storm:storm-hbase:&lt;storm-version&gt;</code></p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Storm-Scheduler.html b/content/releases/1.2.1/Storm-Scheduler.html
index ca72cc0..805fac2 100644
--- a/content/releases/1.2.1/Storm-Scheduler.html
+++ b/content/releases/1.2.1/Storm-Scheduler.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm now has 4 kinds of built-in schedulers: <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/clj/org/apache/storm/scheduler/DefaultScheduler.clj">DefaultScheduler</a>, <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/clj/org/apache/storm/scheduler/IsolationScheduler.clj">IsolationScheduler</a>, <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/jvm/org/apache/storm/scheduler/multitenant/MultitenantScheduler.java">MultitenantScheduler</a>, <a href="Resource_Aware_Scheduler_overview.html">ResourceAwareScheduler</a>. </p>
+<div class="documentation-content"><p>Storm now has four built-in schedulers: <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/clj/org/apache/storm/scheduler/DefaultScheduler.clj">DefaultScheduler</a>, <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/clj/org/apache/storm/scheduler/IsolationScheduler.clj">IsolationScheduler</a>, <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/jvm/org/apache/storm/scheduler/multitenant/MultitenantScheduler.java">MultitenantScheduler</a>, and <a href="Resource_Aware_Scheduler_overview.html">ResourceAwareScheduler</a>.</p>
 
 <h2 id="pluggable-scheduler">Pluggable scheduler</h2>
 
@@ -163,7 +163,7 @@
 <p>Any topologies submitted to the cluster not listed there will not be isolated. Note that there is no way for a user of Storm to affect their isolation settings – this is only allowed by the administrator of the cluster (this is very much intentional).</p>
 
 <p>The isolation scheduler solves the multi-tenancy problem – avoiding resource contention between topologies – by providing full isolation between topologies. The intention is that &quot;productionized&quot; topologies should be listed in the isolation config, and test or in-development topologies should not. The remaining machines on the cluster serve the dual role of failover for isolated topologies and for running the non-isolated topologies.</p>
-
+</div>
 
 
 	          </div>
diff --git "a/content/releases/1.2.1/Storm-multi-language-protocol-\050versions-0.7.0-and-below\051.html" "b/content/releases/1.2.1/Storm-multi-language-protocol-\050versions-0.7.0-and-below\051.html"
index 1c41348..d9df735 100644
--- "a/content/releases/1.2.1/Storm-multi-language-protocol-\050versions-0.7.0-and-below\051.html"
+++ "b/content/releases/1.2.1/Storm-multi-language-protocol-\050versions-0.7.0-and-below\051.html"
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains the multilang protocol for versions 0.7.0 and below. The protocol changed in version 0.7.1.</p>
+<div class="documentation-content"><p>This page explains the multilang protocol for versions 0.7.0 and below. The protocol changed in version 0.7.1.</p>
 
 <h1 id="storm-multi-language-protocol">Storm Multi-Language Protocol</h1>
 
@@ -253,7 +253,7 @@
 <p>Note: This command is not JSON encoded, it is sent as a simple string.</p>
 
 <p>This lets the parent bolt know that the script has finished processing and is ready for another tuple.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Structure-of-the-codebase.html b/content/releases/1.2.1/Structure-of-the-codebase.html
index f095080..ffe035b 100644
--- a/content/releases/1.2.1/Structure-of-the-codebase.html
+++ b/content/releases/1.2.1/Structure-of-the-codebase.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>There are three distinct layers to Storm&#39;s codebase.</p>
+<div class="documentation-content"><p>There are three distinct layers to Storm&#39;s codebase.</p>
 
 <p>First, Storm was designed from the very beginning to be compatible with multiple languages. Nimbus is a Thrift service and topologies are defined as Thrift structures. The usage of Thrift allows Storm to be used from any language.</p>
 
@@ -287,7 +287,7 @@
 <p><a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/clj/org/apache/storm/util.clj">org.apache.storm.util</a>: Contains generic utility functions used throughout the code base.</p>
 
 <p><a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/clj/org/apache/storm/zookeeper.clj">org.apache.storm.zookeeper</a>: Clojure wrapper around the Zookeeper API and implements some &quot;high-level&quot; stuff like &quot;mkdirs&quot; and &quot;delete-recursive&quot;.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Support-for-non-java-languages.html b/content/releases/1.2.1/Support-for-non-java-languages.html
index ab0c42b..e7bce3a 100644
--- a/content/releases/1.2.1/Support-for-non-java-languages.html
+++ b/content/releases/1.2.1/Support-for-non-java-languages.html
@@ -144,13 +144,13 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="https://github.com/velvia/ScalaStorm">Scala DSL</a></li>
 <li><a href="https://github.com/colinsurprenant/storm-jruby">JRuby DSL</a></li>
 <li><a href="Clojure-DSL.html">Clojure DSL</a></li>
 <li><a href="https://github.com/gphat/io-storm">io-storm</a>: Perl multilang adapter</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Transactional-topologies.html b/content/releases/1.2.1/Transactional-topologies.html
index 37b4863..36b65bf 100644
--- a/content/releases/1.2.1/Transactional-topologies.html
+++ b/content/releases/1.2.1/Transactional-topologies.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p><strong>NOTE</strong>: Transactional topologies have been deprecated -- use the <a href="Trident-tutorial.html">Trident</a> framework instead.</p>
+<div class="documentation-content"><p><strong>NOTE</strong>: Transactional topologies have been deprecated -- use the <a href="Trident-tutorial.html">Trident</a> framework instead.</p>
 
 <hr>
 
@@ -510,7 +510,7 @@
 <li>so it can&#39;t call finishBatch until it&#39;s received all tuples from all subscribed components AND it&#39;s received the commit stream tuple (for committers). This ensures that it can&#39;t prematurely call finishBatch</li>
 </ul></li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Trident-API-Overview.html b/content/releases/1.2.1/Trident-API-Overview.html
index 36dff27..eb5cdf5 100644
--- a/content/releases/1.2.1/Trident-API-Overview.html
+++ b/content/releases/1.2.1/Trident-API-Overview.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The core data model in Trident is the &quot;Stream&quot;, processed as a series of batches. A stream is partitioned among the nodes in the cluster, and operations applied to a stream are applied in parallel across each partition.</p>
+<div class="documentation-content"><p>The core data model in Trident is the &quot;Stream&quot;, processed as a series of batches. A stream is partitioned among the nodes in the cluster, and operations applied to a stream are applied in parallel across each partition.</p>
 
 <p>There are five kinds of operations in Trident:</p>
 
@@ -669,7 +669,7 @@
 <p>You might be wondering – how do you do something like a &quot;windowed join&quot;, where tuples from one side of the join are joined against the last hour of tuples from the other side of the join?</p>
 
 <p>To do this, you would make use of partitionPersist and stateQuery. The last hour of tuples from one side of the join would be stored and rotated in a source of state, keyed by the join field. Then the stateQuery would do lookups by the join field to perform the &quot;join&quot;.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Trident-RAS-API.html b/content/releases/1.2.1/Trident-RAS-API.html
index 428dd6f..d18217c 100644
--- a/content/releases/1.2.1/Trident-RAS-API.html
+++ b/content/releases/1.2.1/Trident-RAS-API.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="trident-ras-api">Trident RAS API</h2>
+<div class="documentation-content"><h2 id="trident-ras-api">Trident RAS API</h2>
 
 <p>The Trident RAS (Resource Aware Scheduler) API provides a mechanism to allow users to specify the resource consumption of a Trident topology. The API looks exactly like the base RAS API, only it is called on Trident Streams instead of Bolts and Spouts.</p>
 
@@ -192,7 +192,7 @@
 <p>Resource declarations may be called after any operation. The operations without explicit resources will get the defaults. If you choose to set resources for only some operations, defaults must be declared, or topology submission will fail.
 Resource declarations have the same <em>boundaries</em> as parallelism hints. They don&#39;t cross any groupings, shufflings, or any other kind of repartitioning.
 Resources are declared per operation, but get combined within boundaries.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Trident-spouts.html b/content/releases/1.2.1/Trident-spouts.html
index d08a745..e0b736d 100644
--- a/content/releases/1.2.1/Trident-spouts.html
+++ b/content/releases/1.2.1/Trident-spouts.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="trident-spouts">Trident spouts</h1>
+<div class="documentation-content"><h1 id="trident-spouts">Trident spouts</h1>
 
 <p>Like in the vanilla Storm API, spouts are the source of streams in a Trident topology. On top of the vanilla Storm spouts, Trident exposes additional APIs for more sophisticated spouts.</p>
 
@@ -182,7 +182,7 @@
 </ol>
 
 <p>And, like mentioned in the beginning of this tutorial, you can use regular IRichSpout&#39;s as well.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Trident-state.html b/content/releases/1.2.1/Trident-state.html
index a174820..2c9e059 100644
--- a/content/releases/1.2.1/Trident-state.html
+++ b/content/releases/1.2.1/Trident-state.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Trident has first-class abstractions for reading from and writing to stateful sources. The state can either be internal to the topology – e.g., kept in-memory and backed by HDFS – or externally stored in a database like Memcached or Cassandra. There&#39;s no difference in the Trident API for either case.</p>
+<div class="documentation-content"><p>Trident has first-class abstractions for reading from and writing to stateful sources. The state can either be internal to the topology – e.g., kept in-memory and backed by HDFS – or externally stored in a database like Memcached or Cassandra. There&#39;s no difference in the Trident API for either case.</p>
 
 <p>Trident manages state in a fault-tolerant way so that state updates are idempotent in the face of retries and failures. This lets you reason about Trident topologies as if each message were processed exactly-once.</p>
 
@@ -415,7 +415,7 @@
 <p>Finally, Trident provides the <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/jvm/org/apache/storm/trident/state/map/SnapshottableMap.java">SnapshottableMap</a> class that turns a MapState into a Snapshottable object, by storing global aggregations into a fixed key.</p>
 
 <p>Take a look at the implementation of <a href="https://github.com/nathanmarz/trident-memcached/blob/master/src/jvm/trident/memcached/MemcachedState.java">MemcachedState</a> to see how all these utilities can be put together to make a high performance MapState implementation. MemcachedState allows you to choose between opaque transactional, transactional, and non-transactional semantics.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Trident-tutorial.html b/content/releases/1.2.1/Trident-tutorial.html
index 4403c50..4d2bbbb 100644
--- a/content/releases/1.2.1/Trident-tutorial.html
+++ b/content/releases/1.2.1/Trident-tutorial.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Trident is a high-level abstraction for doing realtime computing on top of Storm. It allows you to seamlessly intermix high throughput (millions of messages per second), stateful stream processing with low latency distributed querying. If you&#39;re familiar with high level batch processing tools like Pig or Cascading, the concepts of Trident will be very familiar – Trident has joins, aggregations, grouping, functions, and filters. In addition to these, Trident adds primitives for doing stateful, incremental processing on top of any database or persistence store. Trident has consistent, exactly-once semantics, so it is easy to reason about Trident topologies.</p>
+<div class="documentation-content"><p>Trident is a high-level abstraction for doing realtime computing on top of Storm. It allows you to seamlessly intermix high throughput (millions of messages per second), stateful stream processing with low latency distributed querying. If you&#39;re familiar with high level batch processing tools like Pig or Cascading, the concepts of Trident will be very familiar – Trident has joins, aggregations, grouping, functions, and filters. In addition to these, Trident adds primitives for doing stateful, incremental processing on top of any database or persistence store. Trident has consistent, exactly-once semantics, so it is easy to reason about Trident topologies.</p>
 
 <h2 id="illustrative-example">Illustrative example</h2>
 
@@ -356,7 +356,7 @@
 <h2 id="conclusion">Conclusion</h2>
 
 <p>Trident makes realtime computation elegant. You&#39;ve seen how high throughput stream processing, state manipulation, and low-latency querying can be seamlessly intermixed via Trident&#39;s API. Trident lets you express your realtime computations in a natural way while still getting maximal performance.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Troubleshooting.html b/content/releases/1.2.1/Troubleshooting.html
index 721c844..8ed7a9b 100644
--- a/content/releases/1.2.1/Troubleshooting.html
+++ b/content/releases/1.2.1/Troubleshooting.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page lists issues people have run into when using Storm along with their solutions.</p>
+<div class="documentation-content"><p>This page lists issues people have run into when using Storm along with their solutions.</p>
 
 <h3 id="worker-processes-are-crashing-on-startup-with-no-stack-trace">Worker processes are crashing on startup with no stack trace</h3>
 
@@ -279,7 +279,7 @@
 <ul>
 <li>This means that you&#39;re emitting a mutable object as an output tuple. Everything you emit into the output collector must be immutable. What&#39;s happening is that your bolt is modifying the object while it is being serialized to be sent over the network.</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Tutorial.html b/content/releases/1.2.1/Tutorial.html
index ecf28c1..45eb3cd 100644
--- a/content/releases/1.2.1/Tutorial.html
+++ b/content/releases/1.2.1/Tutorial.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>In this tutorial, you&#39;ll learn how to create Storm topologies and deploy them to a Storm cluster. Java will be the main language used, but a few examples will use Python to illustrate Storm&#39;s multi-language capabilities.</p>
+<div class="documentation-content"><p>In this tutorial, you&#39;ll learn how to create Storm topologies and deploy them to a Storm cluster. Java will be the main language used, but a few examples will use Python to illustrate Storm&#39;s multi-language capabilities.</p>
 
 <h2 id="preliminaries">Preliminaries</h2>
 
@@ -428,7 +428,7 @@
 <h2 id="conclusion">Conclusion</h2>
 
 <p>This tutorial gave a broad overview of developing, testing, and deploying Storm topologies. The rest of the documentation dives deeper into all the aspects of using Storm.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Understanding-the-parallelism-of-a-Storm-topology.html b/content/releases/1.2.1/Understanding-the-parallelism-of-a-Storm-topology.html
index d337ef5..b965f89 100644
--- a/content/releases/1.2.1/Understanding-the-parallelism-of-a-Storm-topology.html
+++ b/content/releases/1.2.1/Understanding-the-parallelism-of-a-Storm-topology.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="what-makes-a-running-topology-worker-processes-executors-and-tasks">What makes a running topology: worker processes, executors and tasks</h2>
+<div class="documentation-content"><h2 id="what-makes-a-running-topology-worker-processes-executors-and-tasks">What makes a running topology: worker processes, executors and tasks</h2>
 
 <p>Storm distinguishes between the following three main entities that are used to actually run a topology in a Storm cluster:</p>
 
@@ -274,7 +274,7 @@
 <li><a href="Tutorial.html">Tutorial</a></li>
 <li><a href="javadocs/">Storm API documentation</a>, most notably the class <code>Config</code></li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Using-non-JVM-languages-with-Storm.html b/content/releases/1.2.1/Using-non-JVM-languages-with-Storm.html
index 59f7a38..23253db 100644
--- a/content/releases/1.2.1/Using-non-JVM-languages-with-Storm.html
+++ b/content/releases/1.2.1/Using-non-JVM-languages-with-Storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li>two pieces: creating topologies and implementing spouts and bolts in other languages</li>
 <li>creating topologies in another language is easy since topologies are just thrift structures (link to storm.thrift)</li>
 <li>implementing spouts and bolts in another language is called a &quot;multilang component&quot; or &quot;shelling&quot;
@@ -198,7 +198,7 @@
 <p>Then you can connect to Nimbus using the Thrift API and submit the topology, passing {uploaded-jar-location} into the submitTopology method. For reference, here&#39;s the submitTopology definition:</p>
 <div class="highlight"><pre><code class="language-" data-lang="">void submitTopology(1: string name, 2: string uploadedJarLocation, 3: string jsonConf, 4: StormTopology topology)
     throws (1: AlreadyAliveException e, 2: InvalidTopologyException ite);
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/Windowing.html b/content/releases/1.2.1/Windowing.html
index 68428f2..939177f 100644
--- a/content/releases/1.2.1/Windowing.html
+++ b/content/releases/1.2.1/Windowing.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm core has support for processing a group of tuples that falls within a window. Windows are specified with the 
+<div class="documentation-content"><p>Storm core has support for processing a group of tuples that falls within a window. Windows are specified with the 
 following two parameters,</p>
 
 <ol>
@@ -380,7 +380,7 @@
 
 <p>An example topology <code>SlidingWindowTopology</code> shows how to use the APIs to compute a sliding window sum and a tumbling window 
 average.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/distcache-blobstore.html b/content/releases/1.2.1/distcache-blobstore.html
index 7a03da4..b359881 100644
--- a/content/releases/1.2.1/distcache-blobstore.html
+++ b/content/releases/1.2.1/distcache-blobstore.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="storm-distributed-cache-api">Storm Distributed Cache API</h1>
+<div class="documentation-content"><h1 id="storm-distributed-cache-api">Storm Distributed Cache API</h1>
 
 <p>The distributed cache feature in storm is used to efficiently distribute files
 (or blobs, which is the equivalent terminology for a file in the distributed
@@ -799,7 +799,7 @@
  2: required string session;
  3: optional i64 data_size;
 }
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/dynamic-log-level-settings.html b/content/releases/1.2.1/dynamic-log-level-settings.html
index c26d773..82f8a9b 100644
--- a/content/releases/1.2.1/dynamic-log-level-settings.html
+++ b/content/releases/1.2.1/dynamic-log-level-settings.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>We have added the ability to set log level settings for a running topology using the Storm UI and the Storm CLI. </p>
+<div class="documentation-content"><p>We have added the ability to set log level settings for a running topology using the Storm UI and the Storm CLI. </p>
 
 <p>The log level settings apply the same way as you&#39;d expect from log4j, since all we are doing is telling log4j to set the level of the logger you provide. If you set the log level of a parent logger, the child loggers start using that level (unless the children already have a more restrictive level). A timeout can optionally be provided (except for DEBUG mode, where it’s required in the UI) if workers should reset log levels automatically.</p>
 
@@ -179,7 +179,7 @@
 <p><code>./bin/storm set_log_level my_topology -r ROOT</code></p>
 
 <p>Clears the ROOT logger dynamic log level, resetting it to its original value.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/dynamic-worker-profiling.html b/content/releases/1.2.1/dynamic-worker-profiling.html
index eb939d3..e915903 100644
--- a/content/releases/1.2.1/dynamic-worker-profiling.html
+++ b/content/releases/1.2.1/dynamic-worker-profiling.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>In multi-tenant mode, storm launches long-running JVMs across cluster without sudo access to user. Self-serving of Java heap-dumps, jstacks and java profiling of these JVMs would improve users&#39; ability to analyze and debug issues when monitoring it actively.</p>
+<div class="documentation-content"><p>In multi-tenant mode, Storm launches long-running JVMs across the cluster without sudo access for users. Self-service Java heap dumps, jstacks, and Java profiling of these JVMs improve users&#39; ability to analyze and debug issues while actively monitoring them.</p>
 
 <p>The Storm dynamic profiler lets you take heap dumps, jprofile recordings, or jstacks on demand for a worker JVM running on a stock cluster. It lets users download these dumps from the browser and analyze them with their favorite tools. The UI component page lists the workers for the component along with action buttons. The logviewer lets you download the dumps these actions generate. Please see the screenshots for more information.</p>
 
@@ -171,7 +171,7 @@
 <h2 id="configuration">Configuration</h2>
 
 <p>The &quot;worker.profiler.command&quot; setting can be configured to point to specific pluggable profiler or heap-dump commands. &quot;worker.profiler.enabled&quot; can be set to false if the plugin is not available or the JDK does not support jprofile flight recording, so that the worker JVM options will not include &quot;worker.profiler.childopts&quot;. To use a different profiler plugin, change these configurations.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/flux.html b/content/releases/1.2.1/flux.html
index a3afd83..e43b36a 100644
--- a/content/releases/1.2.1/flux.html
+++ b/content/releases/1.2.1/flux.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>A framework for creating and deploying Apache Storm streaming computations with less friction.</p>
+<div class="documentation-content"><p>A framework for creating and deploying Apache Storm streaming computations with less friction.</p>
 
 <h2 id="definition">Definition</h2>
 
@@ -908,7 +908,7 @@
   <span class="na">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">org.apache.storm.flux.test.TridentTopologySource"</span>
   <span class="c1"># Flux will look for "getTopology", this will override that.</span>
   <span class="na">methodName</span><span class="pi">:</span> <span class="s2">"</span><span class="s">getTopologyWithDifferentMethodName"</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/index.html b/content/releases/1.2.1/index.html
index 860b688..93d1cea 100644
--- a/content/releases/1.2.1/index.html
+++ b/content/releases/1.2.1/index.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<blockquote>
+<div class="documentation-content"><blockquote>
 <h4 id="note">NOTE</h4>
 
 <p>In the latest version, the class packages have been changed from &quot;backtype.storm&quot; to &quot;org.apache.storm&quot; so the topology code compiled with older version won&#39;t run on the Storm 1.0.0 just like that. Backward compatibility is available through following configuration </p>
@@ -286,7 +286,7 @@
 <li><a href="Multilang-protocol.html">Multilang protocol</a> (how to provide support for another language)</li>
 <li><a href="Implementation-docs.html">Implementation docs</a></li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/metrics_v2.html b/content/releases/1.2.1/metrics_v2.html
index 7e1cba5..47f8f10 100644
--- a/content/releases/1.2.1/metrics_v2.html
+++ b/content/releases/1.2.1/metrics_v2.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Apache Storm version 1.2 introduces a new metrics system for reporting
+<div class="documentation-content"><p>Apache Storm version 1.2 introduces a new metrics system for reporting
 internal statistics (e.g. acked, failed, emitted, transferred, disruptor queue metrics, etc.) as well as a 
 new API for user defined metrics.</p>
 
@@ -274,7 +274,7 @@
     <span class="kt">boolean</span> <span class="nf">matches</span><span class="o">(</span><span class="n">String</span> <span class="n">name</span><span class="o">,</span> <span class="n">Metric</span> <span class="n">metric</span><span class="o">);</span>
 
 <span class="o">}</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/nimbus-ha-design.html b/content/releases/1.2.1/nimbus-ha-design.html
index 7bd56b1..4ee5b46 100644
--- a/content/releases/1.2.1/nimbus-ha-design.html
+++ b/content/releases/1.2.1/nimbus-ha-design.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="problem-statement">Problem Statement:</h2>
+<div class="documentation-content"><h2 id="problem-statement">Problem Statement:</h2>
 
 <p>Currently the Storm master, aka nimbus, is a process that runs on a single machine under supervision. In most cases a 
 nimbus failure is transient and it is restarted by the supervisor. However, sometimes when disks fail and networks 
@@ -361,7 +361,7 @@
 <p>Note: Even though all nimbus hosts have watchers on zookeeper to be notified immediately as soon as a new topology is available for code
 download, the callback pretty much never results in a code download. In practice we have observed that the desired replication is only achieved once the background thread runs. 
 So you should expect your topology submission time to be somewhere between 0 and (2 * nimbus.code.sync.freq.secs) for any nimbus.min.replication.count &gt; 1.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/storm-cassandra.html b/content/releases/1.2.1/storm-cassandra.html
index d0f47e4..ec5bc9d 100644
--- a/content/releases/1.2.1/storm-cassandra.html
+++ b/content/releases/1.2.1/storm-cassandra.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="bolt-api-implementation-for-apache-cassandra">Bolt API implementation for Apache Cassandra</h3>
+<div class="documentation-content"><h3 id="bolt-api-implementation-for-apache-cassandra">Bolt API implementation for Apache Cassandra</h3>
 
 <p>This library provides a core Storm bolt on top of Apache Cassandra. It also
 provides a simple DSL to map a Storm <em>Tuple</em> to a Cassandra Query Language <em>Statement</em>.</p>
@@ -373,7 +373,7 @@
         <span class="n">CassandraStateFactory</span> <span class="n">selectWeatherStationStateFactory</span> <span class="o">=</span> <span class="n">getSelectWeatherStationStateFactory</span><span class="o">();</span>
         <span class="n">TridentState</span> <span class="n">selectState</span> <span class="o">=</span> <span class="n">topology</span><span class="o">.</span><span class="na">newStaticState</span><span class="o">(</span><span class="n">selectWeatherStationStateFactory</span><span class="o">);</span>
         <span class="n">stream</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">stateQuery</span><span class="o">(</span><span class="n">selectState</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"weather_station_id"</span><span class="o">),</span> <span class="k">new</span> <span class="n">CassandraQuery</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"name"</span><span class="o">));</span>         
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/storm-elasticsearch.html b/content/releases/1.2.1/storm-elasticsearch.html
index 9477383..3696122 100644
--- a/content/releases/1.2.1/storm-elasticsearch.html
+++ b/content/releases/1.2.1/storm-elasticsearch.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="storm-elasticsearch-bolt-trident-state">Storm Elasticsearch Bolt &amp; Trident State</h1>
+<div class="documentation-content"><h1 id="storm-elasticsearch-bolt-trident-state">Storm Elasticsearch Bolt &amp; Trident State</h1>
 
 <p>EsIndexBolt, EsPercolateBolt and EsState allow users to stream data from Storm into Elasticsearch directly.
   For a detailed description, please refer to the following.</p>
@@ -245,7 +245,7 @@
 <li>Sriharsha Chintalapani (<a href="https://github.com/harshach">@harshach</a>)</li>
 <li>Jungtaek Lim (<a href="https://github.com/HeartSaVioR">@HeartSaVioR</a>)</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/storm-eventhubs.html b/content/releases/1.2.1/storm-eventhubs.html
index dd8e158..4f0ac92 100644
--- a/content/releases/1.2.1/storm-eventhubs.html
+++ b/content/releases/1.2.1/storm-eventhubs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm spout and bolt implementation for Microsoft Azure Eventhubs</p>
+<div class="documentation-content"><p>Storm spout and bolt implementation for Microsoft Azure Eventhubs</p>
 
 <h3 id="build">build</h3>
 <div class="highlight"><pre><code class="language-" data-lang="">mvn clean package
@@ -178,7 +178,7 @@
 
 <h3 id="windows-azure-eventhubs">Windows Azure Eventhubs</h3>
 <div class="highlight"><pre><code class="language-" data-lang="">http://azure.microsoft.com/en-us/services/event-hubs/
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/storm-hbase.html b/content/releases/1.2.1/storm-hbase.html
index 87e2e25..3cb5653 100644
--- a/content/releases/1.2.1/storm-hbase.html
+++ b/content/releases/1.2.1/storm-hbase.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm/Trident integration for <a href="https://hbase.apache.org">Apache HBase</a></p>
+<div class="documentation-content"><p>Storm/Trident integration for <a href="https://hbase.apache.org">Apache HBase</a></p>
 
 <h2 id="usage">Usage</h2>
 
@@ -368,7 +368,7 @@
         <span class="o">}</span>
     <span class="o">}</span>
 <span class="o">}</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/storm-hdfs.html b/content/releases/1.2.1/storm-hdfs.html
index d0c0266..86f3d5c 100644
--- a/content/releases/1.2.1/storm-hdfs.html
+++ b/content/releases/1.2.1/storm-hdfs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm components for interacting with HDFS file systems</p>
+<div class="documentation-content"><p>Storm components for interacting with HDFS file systems</p>
 
 <h2 id="usage">Usage</h2>
 
@@ -469,7 +469,7 @@
 <p>On worker hosts the bolt/trident-state code will use the keytab file with the principal provided in the config to authenticate with the 
 Namenode. This method is a little dangerous as you need to ensure all workers have the keytab file at the same location, and you need
 to remember this as you bring up new hosts in the cluster.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/storm-hive.html b/content/releases/1.2.1/storm-hive.html
index c86291b..e78f9e8 100644
--- a/content/releases/1.2.1/storm-hive.html
+++ b/content/releases/1.2.1/storm-hive.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Hive offers streaming API that allows data to be written continuously into Hive. The incoming data 
+<div class="documentation-content"><p>Hive offers a streaming API that allows data to be written continuously into Hive. The incoming data 
   can be continuously committed in small batches of records into an existing Hive partition or table. Once the data
   is committed it&#39;s immediately visible to all Hive queries. More info on the Hive Streaming API: 
   <a href="https://cwiki.apache.org/confluence/display/Hive/Streaming+Data+Ingest">https://cwiki.apache.org/confluence/display/Hive/Streaming+Data+Ingest</a></p>
@@ -303,7 +303,7 @@
 
    <span class="n">StateFactory</span> <span class="n">factory</span> <span class="o">=</span> <span class="k">new</span> <span class="n">HiveStateFactory</span><span class="o">().</span><span class="na">withOptions</span><span class="o">(</span><span class="n">hiveOptions</span><span class="o">);</span>
    <span class="n">TridentState</span> <span class="n">state</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">partitionPersist</span><span class="o">(</span><span class="n">factory</span><span class="o">,</span> <span class="n">hiveFields</span><span class="o">,</span> <span class="k">new</span> <span class="n">HiveUpdater</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">());</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/storm-jdbc.html b/content/releases/1.2.1/storm-jdbc.html
index 99f7562..2e0f874 100644
--- a/content/releases/1.2.1/storm-jdbc.html
+++ b/content/releases/1.2.1/storm-jdbc.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm/Trident integration for JDBC. This package includes the core bolts and trident states that allows a storm topology
+<div class="documentation-content"><p>Storm/Trident integration for JDBC. This package includes the core bolts and Trident states that allow a Storm topology
 to either insert Storm tuples into a database table or to execute select queries against a database and enrich tuples 
 in a Storm topology.</p>
 
@@ -399,7 +399,7 @@
 <div class="highlight"><pre><code class="language-" data-lang="">select * from user;
 </code></pre></div>
 <p>For trident you can view <code>org.apache.storm.jdbc.topology.UserPersistanceTridentTopology</code>.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/storm-jms-example.html b/content/releases/1.2.1/storm-jms-example.html
index 6a31fda..3920121 100644
--- a/content/releases/1.2.1/storm-jms-example.html
+++ b/content/releases/1.2.1/storm-jms-example.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="example-storm-jms-topology">Example Storm JMS Topology</h2>
+<div class="documentation-content"><h2 id="example-storm-jms-topology">Example Storm JMS Topology</h2>
 
 <p>The storm-jms source code contains an example project (in the &quot;examples&quot; directory) that 
 builds a multi-bolt/multi-spout topology (depicted below) that uses the JMS Spout and JMS Bolt components.</p>
@@ -248,7 +248,7 @@
 DEBUG (backtype.storm.contrib.jms.spout.JmsSpout:251) - JMS Message acked: ID:budreau.home-60117-1321735025796-0:0:1:1:1
 </code></pre></div>
 <p>The topology will run for 2 minutes, then gracefully shut down.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/storm-jms-spring.html b/content/releases/1.2.1/storm-jms-spring.html
index 16e54b9..c18c253 100644
--- a/content/releases/1.2.1/storm-jms-spring.html
+++ b/content/releases/1.2.1/storm-jms-spring.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="connecting-to-jms-using-springs-jms-support">Connecting to JMS Using Spring&#39;s JMS Support</h3>
+<div class="documentation-content"><h3 id="connecting-to-jms-using-springs-jms-support">Connecting to JMS Using Spring&#39;s JMS Support</h3>
 
 <p>Create a Spring applicationContext.xml file that defines one or more destination (topic/queue) beans, as well as a connection factory.</p>
 <div class="highlight"><pre><code class="language-" data-lang=""><span class="cp">&lt;?xml version="1.0" encoding="UTF-8"?&gt;</span>
@@ -163,7 +163,7 @@
         <span class="na">brokerURL=</span><span class="s">"tcp://localhost:61616"</span> <span class="nt">/&gt;</span>
 
 <span class="nt">&lt;/beans&gt;</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/storm-jms.html b/content/releases/1.2.1/storm-jms.html
index 887e058..0cd88e6 100644
--- a/content/releases/1.2.1/storm-jms.html
+++ b/content/releases/1.2.1/storm-jms.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="about-storm-jms">About Storm JMS</h2>
+<div class="documentation-content"><h2 id="about-storm-jms">About Storm JMS</h2>
 
 <p>Storm JMS is a generic framework for integrating JMS messaging within the Storm framework.</p>
 
@@ -169,7 +169,7 @@
 <p><a href="storm-jms-example.html">Example Topology</a></p>
 
 <p><a href="storm-jms-spring.html">Using Spring JMS</a></p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/storm-kafka-client.html b/content/releases/1.2.1/storm-kafka-client.html
index 9644458..e71ffa2 100644
--- a/content/releases/1.2.1/storm-kafka-client.html
+++ b/content/releases/1.2.1/storm-kafka-client.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="storm-apache-kafka-integration-using-the-kafka-client-jar">Storm Apache Kafka integration using the kafka-client jar</h1>
+<div class="documentation-content"><h1 id="storm-apache-kafka-integration-using-the-kafka-client-jar">Storm Apache Kafka integration using the kafka-client jar</h1>
 
 <p>This includes the new Apache Kafka consumer API.</p>
 
@@ -476,7 +476,7 @@
   <span class="o">.</span><span class="na">setTupleTrackingEnforced</span><span class="o">(</span><span class="kc">true</span><span class="o">)</span>
 </code></pre></div>
 <p>Note: This setting has no effect with the AT_LEAST_ONCE processing guarantee, where tuple tracking is required and therefore always enabled.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/storm-kafka.html b/content/releases/1.2.1/storm-kafka.html
index e08e547..4062063 100644
--- a/content/releases/1.2.1/storm-kafka.html
+++ b/content/releases/1.2.1/storm-kafka.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Provides core Storm and Trident spout implementations for consuming data from Apache Kafka 0.8.x.</p>
+<div class="documentation-content"><p>Provides core Storm and Trident spout implementations for consuming data from Apache Kafka 0.8.x.</p>
 
 <h2 id="spouts">Spouts</h2>
 
@@ -498,7 +498,7 @@
 <ul>
 <li>P. Taylor Goetz (<a href="mailto:ptgoetz@apache.org">ptgoetz@apache.org</a>)</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/storm-metrics-profiling-internal-actions.html b/content/releases/1.2.1/storm-metrics-profiling-internal-actions.html
index 6d977ca..ec4add3 100644
--- a/content/releases/1.2.1/storm-metrics-profiling-internal-actions.html
+++ b/content/releases/1.2.1/storm-metrics-profiling-internal-actions.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>With the addition of these metrics, Storm users can collect, view, and analyze the performance of various internal actions.  The actions that are profiled include thrift rpc calls and http quests within the storm daemons. For instance, in the Storm Nimbus daemon, the following thrift calls defined in the Nimbus$Iface are profiled:</p>
+<div class="documentation-content"><p>With the addition of these metrics, Storm users can collect, view, and analyze the performance of various internal actions.  The actions that are profiled include Thrift RPC calls and HTTP requests within the Storm daemons. For instance, in the Storm Nimbus daemon, the following Thrift calls defined in the Nimbus$Iface are profiled:</p>
 
 <ul>
 <li>submitTopology</li>
@@ -211,7 +211,7 @@
 <p>For more information about io.dropwizard.metrics and metrics-clojure packages please reference their original documentation:
 - <a href="https://dropwizard.github.io/metrics/3.1.0/">https://dropwizard.github.io/metrics/3.1.0/</a>
 - <a href="http://metrics-clojure.readthedocs.org/en/latest/">http://metrics-clojure.readthedocs.org/en/latest/</a></p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/storm-mongodb.html b/content/releases/1.2.1/storm-mongodb.html
index 6deafa6..1a3caee 100644
--- a/content/releases/1.2.1/storm-mongodb.html
+++ b/content/releases/1.2.1/storm-mongodb.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm/Trident integration for <a href="https://www.mongodb.org/">MongoDB</a>. This package includes the core bolts and trident states that allows a storm topology to either insert storm tuples in a database collection or to execute update queries against a database collection in a storm topology.</p>
+<div class="documentation-content"><p>Storm/Trident integration for <a href="https://www.mongodb.org/">MongoDB</a>. This package includes the core bolts and trident states that allow a storm topology to either insert storm tuples into a database collection or to execute update queries against a database collection.</p>
 
 <h2 id="insert-into-database">Insert into Database</h2>
 
@@ -298,7 +298,7 @@
 
         <span class="c1">//if a new document should be inserted if there are no matches to the query filter</span>
         <span class="c1">//updateBolt.withUpsert(true);</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/storm-mqtt.html b/content/releases/1.2.1/storm-mqtt.html
index 6de7bf0..2f71f28 100644
--- a/content/releases/1.2.1/storm-mqtt.html
+++ b/content/releases/1.2.1/storm-mqtt.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="about">About</h2>
+<div class="documentation-content"><h2 id="about">About</h2>
 
 <p>MQTT is a lightweight publish/subscribe protocol frequently used in IoT applications.</p>
 
@@ -483,7 +483,7 @@
 <ul>
 <li>P. Taylor Goetz (<a href="mailto:ptgoetz@apache.org">ptgoetz@apache.org</a>)</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/storm-redis.html b/content/releases/1.2.1/storm-redis.html
index 038df9a..cbad490 100644
--- a/content/releases/1.2.1/storm-redis.html
+++ b/content/releases/1.2.1/storm-redis.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm/Trident integration for <a href="http://redis.io/">Redis</a></p>
+<div class="documentation-content"><p>Storm/Trident integration for <a href="http://redis.io/">Redis</a></p>
 
 <p>Storm-redis uses Jedis as its Redis client.</p>
 
@@ -382,7 +382,7 @@
         <span class="n">stream</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">stateQuery</span><span class="o">(</span><span class="n">state</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"word"</span><span class="o">),</span>
                                 <span class="k">new</span> <span class="nf">RedisClusterStateQuerier</span><span class="o">(</span><span class="n">lookupMapper</span><span class="o">),</span>
                                 <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">"columnName"</span><span class="o">,</span><span class="s">"columnValue"</span><span class="o">));</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/storm-solr.html b/content/releases/1.2.1/storm-solr.html
index 65b2527..3f5e133 100644
--- a/content/releases/1.2.1/storm-solr.html
+++ b/content/releases/1.2.1/storm-solr.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm and Trident integration for Apache Solr. This package includes a bolt and a trident state that enable a Storm topology
+<div class="documentation-content"><p>Storm and Trident integration for Apache Solr. This package includes a bolt and a trident state that enable a Storm topology to
 stream the contents of storm tuples to index Solr collections.</p>
 
 <h1 id="index-storm-tuples-into-a-solr-collection">Index Storm tuples into a Solr collection</h1>
@@ -308,7 +308,7 @@
 <p>You can also see the results by opening the Apache Solr UI and pasting the <code>id</code> pattern in the <code>q</code> textbox in the queries page</p>
 
 <p><a href="http://localhost:8983/solr/#/gettingstarted_shard1_replica2/query">http://localhost:8983/solr/#/gettingstarted_shard1_replica2/query</a></p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/storm-sql-example.html b/content/releases/1.2.1/storm-sql-example.html
index 29f249e..280626f 100644
--- a/content/releases/1.2.1/storm-sql-example.html
+++ b/content/releases/1.2.1/storm-sql-example.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page shows how to use Storm SQL by showing the example of processing Apache logs. 
+<div class="documentation-content"><p>This page demonstrates how to use Storm SQL through an example of processing Apache logs. 
 This page is written in a &quot;how-to&quot; style, so you can follow the steps and learn how to utilize Storm SQL step by step. </p>
 
 <h2 id="preparation">Preparation</h2>
@@ -379,7 +379,7 @@
 (You may have noticed that the types of some of the output fields differ from the output table schema.)</p>
 
 <p>Its behavior is subject to change when Storm SQL changes its backend API to core (tuple by tuple, low-level or high-level) one.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/storm-sql-internal.html b/content/releases/1.2.1/storm-sql-internal.html
index 97f809b..959eb6a 100644
--- a/content/releases/1.2.1/storm-sql-internal.html
+++ b/content/releases/1.2.1/storm-sql-internal.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page describes the design and the implementation of the Storm SQL integration.</p>
+<div class="documentation-content"><p>This page describes the design and the implementation of the Storm SQL integration.</p>
 
 <h2 id="overview">Overview</h2>
 
@@ -195,7 +195,7 @@
 (Use <code>--artifacts</code> if your data source JARs are available in a Maven repository, since it handles transitive dependencies.)</p>
 
 <p>Please refer to the <a href="storm-sql.html">Storm SQL integration</a> page for how to do it.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/storm-sql-reference.html b/content/releases/1.2.1/storm-sql-reference.html
index 5221649..e26b0e1 100644
--- a/content/releases/1.2.1/storm-sql-reference.html
+++ b/content/releases/1.2.1/storm-sql-reference.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm SQL uses Apache Calcite to parse and evaluate the SQL statements. 
+<div class="documentation-content"><p>Storm SQL uses Apache Calcite to parse and evaluate the SQL statements. 
 Storm SQL also adopts the Rex compiler from Calcite, so Storm SQL is expected to handle the SQL dialect recognized by Calcite&#39;s default SQL parser. </p>
 
 <p>This page is based on the Calcite SQL reference on its website; it removes the areas Storm SQL doesn&#39;t support and adds the areas Storm SQL does support.</p>
@@ -2101,7 +2101,7 @@
 
 <p>Also, hdfs configuration files should be provided.
 You can put the <code>core-site.xml</code> and <code>hdfs-site.xml</code> into the <code>conf</code> directory which is in Storm installation directory.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/storm-sql.html b/content/releases/1.2.1/storm-sql.html
index 42effbd..a161fc9 100644
--- a/content/releases/1.2.1/storm-sql.html
+++ b/content/releases/1.2.1/storm-sql.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The Storm SQL integration allows users to run SQL queries over streaming data in Storm. Not only the SQL interface allows faster development cycles on streaming analytics, but also opens up the opportunities to unify batch data processing like <a href="///hive.apache.org">Apache Hive</a> and real-time streaming data analytics.</p>
+<div class="documentation-content"><p>The Storm SQL integration allows users to run SQL queries over streaming data in Storm. Not only does the SQL interface allow faster development cycles on streaming analytics, but it also opens up opportunities to unify batch data processing like <a href="//hive.apache.org">Apache Hive</a> and real-time streaming data analytics.</p>
 
 <p>At a very high level StormSQL compiles the SQL queries to <a href="Trident-API-Overview.html">Trident</a> topologies and executes them in Storm clusters. This document provides information on how to use StormSQL as an end user. For people interested in more details of the design and implementation of StormSQL, please refer to <a href="storm-sql-internal.html">this</a> page.</p>
 
@@ -284,7 +284,7 @@
 <li>Windowing is yet to be implemented.</li>
 <li>Aggregation and join are not supported (waiting for <code>Streaming SQL</code> to be matured)</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/1.2.1/windows-users-guide.html b/content/releases/1.2.1/windows-users-guide.html
index bd83020..752551f 100644
--- a/content/releases/1.2.1/windows-users-guide.html
+++ b/content/releases/1.2.1/windows-users-guide.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page guides how to set up environment on Windows for Apache Storm.</p>
+<div class="documentation-content"><p>This page explains how to set up an environment for Apache Storm on Windows.</p>
 
 <h2 id="symbolic-link">Symbolic Link</h2>
 
@@ -172,7 +172,7 @@
 on Nimbus and all of the Supervisor nodes.  This will also disable features that require symlinks.  Currently this is only downloading
 dependent blobs, but may change in the future.  Some topologies may rely on symbolic links to resources in the current working directory of the worker that are
 created as a convenience, so it is not a 100% backwards-compatible change.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Acking-framework-implementation.html b/content/releases/2.0.0-SNAPSHOT/Acking-framework-implementation.html
index 28ce798..b56ed67 100644
--- a/content/releases/2.0.0-SNAPSHOT/Acking-framework-implementation.html
+++ b/content/releases/2.0.0-SNAPSHOT/Acking-framework-implementation.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p><a href="http://github.com/apache/storm/blob/master/storm-client/src/jvm/org/apache/storm/daemon/Acker.java">Storm&#39;s acker</a> tracks completion of each tupletree with a checksum hash: each time a tuple is sent, its value is XORed into the checksum, and each time a tuple is acked its value is XORed in again. If all tuples have been successfully acked, the checksum will be zero (the odds that the checksum will be zero otherwise are vanishingly small).</p>
+<div class="documentation-content"><p><a href="http://github.com/apache/storm/blob/master/storm-client/src/jvm/org/apache/storm/daemon/Acker.java">Storm&#39;s acker</a> tracks completion of each tuple tree with a checksum hash: each time a tuple is sent, its value is XORed into the checksum, and each time a tuple is acked its value is XORed in again. If all tuples have been successfully acked, the checksum will be zero (the odds that the checksum will be zero otherwise are vanishingly small).</p>
 
 <p>You can read a bit more about the <a href="Guaranteeing-message-processing.html#what-is-storms-reliability-api">reliability mechanism</a> elsewhere on the wiki -- this explains the internal details.</p>
 
@@ -180,7 +180,7 @@
 <p>Internally, it holds several HashMaps (&#39;buckets&#39;) of its own, each holding a cohort of records that will expire at the same time.  Let&#39;s call the longest-lived bucket death row, and the most recent the nursery. Whenever a value is <code>.put()</code> to the RotatingMap, it is relocated to the nursery -- and removed from any other bucket it might have been in (effectively resetting its death clock).</p>
 
 <p>Whenever its owner calls <code>.rotate()</code>, the RotatingMap advances each cohort one step further towards expiration. (Typically, Storm objects call rotate on every receipt of a system tick stream tuple.) If there are any key-value pairs in the former death row bucket, the RotatingMap invokes a callback (given in the constructor) for each key-value pair, letting its owner take appropriate action (e.g., failing a tuple).</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Classpath-handling.html b/content/releases/2.0.0-SNAPSHOT/Classpath-handling.html
index dfc097f..e46ef9e 100644
--- a/content/releases/2.0.0-SNAPSHOT/Classpath-handling.html
+++ b/content/releases/2.0.0-SNAPSHOT/Classpath-handling.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="storm-is-an-application-container">Storm is an Application Container</h3>
+<div class="documentation-content"><h3 id="storm-is-an-application-container">Storm is an Application Container</h3>
 
 <p>Storm provides an application container environment, a la Apache Tomcat, which creates potential for classpath conflicts between Storm and your application.  The most common way of using Storm involves submitting an &quot;uber JAR&quot; containing your application code with all of its dependencies bundled in, and then Storm distributes this JAR to Worker nodes.  Then Storm runs your application within a Storm process called a <code>Worker</code> -- thus the JVM&#39;s classpath contains the dependencies of your JAR as well as whatever dependencies the Worker itself has.  So careful handling of classpaths and dependencies is critical for the correct functioning of Storm.</p>
 
@@ -173,7 +173,7 @@
 <p>When the <code>storm.py</code> script launches a <code>java</code> command, it first constructs the classpath from the optional settings mentioned above, as well as including some default locations such as the <code>${STORM_DIR}/</code>, <code>${STORM_DIR}/lib/</code>, <code>${STORM_DIR}/extlib/</code> and <code>${STORM_DIR}/extlib-daemon/</code> directories.  In past releases, Storm would enumerate all JARs in those directories and then explicitly add all of those JARs into the <code>-cp</code> / <code>--classpath</code> argument to the launched <code>java</code> commands.  As such, the classpath would get so long that the <code>java</code> commands could breach the Linux Kernel process table limit of 4096 bytes for recording commands.  That led to truncated commands in <code>ps</code> output, making it hard to operate Storm clusters because you could not easily differentiate the processes nor easily see from <code>ps</code> which port a worker is listening to.</p>
 
 <p>After Storm dropped support for Java 5, this classpath expansion was no longer necessary, because Java 6 supports classpath wildcards. Classpath wildcards allow you to specify a directory ending with a <code>*</code> element, such as <code>foo/bar/*</code>, and the JVM will automatically expand the classpath to include all <code>.jar</code> files in the wildcard directory.  As of <a href="https://issues.apache.org/jira/browse/STORM-2191">STORM-2191</a> Storm just uses classpath wildcards instead of explicitly listing all JARs, thereby shortening all of the commands and making operating Storm clusters a bit easier.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Clojure-DSL.html b/content/releases/2.0.0-SNAPSHOT/Clojure-DSL.html
index 7f61ec9..a2bb1b6 100644
--- a/content/releases/2.0.0-SNAPSHOT/Clojure-DSL.html
+++ b/content/releases/2.0.0-SNAPSHOT/Clojure-DSL.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm offers a Clojure DSL through the storm-clojure package for defining spouts, bolts, and topologies. The Clojure DSL has access to everything the Java API exposes, so if you&#39;re a Clojure user you can code Storm topologies without touching Java at all. The Clojure DSL is defined in the source in the <a href="http://github.com/apache/storm/blob/master/storm-clojure/src/clj/org/apache/storm/clojure.clj">org.apache.storm.clojure</a> namespace.</p>
+<div class="documentation-content"><p>Storm offers a Clojure DSL through the storm-clojure package for defining spouts, bolts, and topologies. The Clojure DSL has access to everything the Java API exposes, so if you&#39;re a Clojure user you can code Storm topologies without touching Java at all. The Clojure DSL is defined in the source in the <a href="http://github.com/apache/storm/blob/master/storm-clojure/src/clj/org/apache/storm/clojure.clj">org.apache.storm.clojure</a> namespace.</p>
 
 <p>This page outlines all the pieces of the Clojure DSL, including:</p>
 
@@ -371,7 +371,7 @@
 <h3 id="testing-topologies">Testing topologies</h3>
 
 <p><a href="http://www.pixelmachine.org/2011/12/17/Testing-Storm-Topologies.html">This blog post</a> and its <a href="http://www.pixelmachine.org/2011/12/21/Testing-Storm-Topologies-Part-2.html">follow-up</a> give a good overview of Storm&#39;s powerful built-in facilities for testing topologies in Clojure.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Command-line-client.html b/content/releases/2.0.0-SNAPSHOT/Command-line-client.html
index cf0921e..492218b 100644
--- a/content/releases/2.0.0-SNAPSHOT/Command-line-client.html
+++ b/content/releases/2.0.0-SNAPSHOT/Command-line-client.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page describes all the commands that are possible with the &quot;storm&quot; command line client. To learn how to set up your &quot;storm&quot; client to talk to a remote cluster, follow the instructions in <a href="Setting-up-development-environment.html">Setting up development environment</a>. See <a href="Classpath-handling.html">Classpath handling</a> for details on using external libraries in these commands.</p>
+<div class="documentation-content"><p>This page describes all the commands that are possible with the &quot;storm&quot; command line client. To learn how to set up your &quot;storm&quot; client to talk to a remote cluster, follow the instructions in <a href="Setting-up-development-environment.html">Setting up development environment</a>. See <a href="Classpath-handling.html">Classpath handling</a> for details on using external libraries in these commands.</p>
 
 <p>These commands are:</p>
 
@@ -455,7 +455,7 @@
 <p>Syntax: <code>storm help [command]</code></p>
 
 <p>Print one help message or list of available commands</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Common-patterns.html b/content/releases/2.0.0-SNAPSHOT/Common-patterns.html
index 7d9a9a0..a6ecbe3 100644
--- a/content/releases/2.0.0-SNAPSHOT/Common-patterns.html
+++ b/content/releases/2.0.0-SNAPSHOT/Common-patterns.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page lists a variety of common patterns in Storm topologies.</p>
+<div class="documentation-content"><p>This page lists a variety of common patterns in Storm topologies.</p>
 
 <ol>
 <li>Batching</li>
@@ -212,7 +212,7 @@
 <p><code>KeyedFairBolt</code> also wraps the bolt containing your logic and makes sure your topology processes multiple DRPC invocations at the same time, instead of doing them serially one at a time.</p>
 
 <p>See <a href="Distributed-RPC.html">Distributed RPC</a> for more details.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Concepts.html b/content/releases/2.0.0-SNAPSHOT/Concepts.html
index d237d5e..f70d2bb 100644
--- a/content/releases/2.0.0-SNAPSHOT/Concepts.html
+++ b/content/releases/2.0.0-SNAPSHOT/Concepts.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page lists the main concepts of Storm and links to resources where you can find more information. The concepts discussed are:</p>
+<div class="documentation-content"><p>This page lists the main concepts of Storm and links to resources where you can find more information. The concepts discussed are:</p>
 
 <ol>
 <li>Topologies</li>
@@ -272,7 +272,7 @@
 <h3 id="performance-tuning">Performance Tuning</h3>
 
 <p>Refer to the <a href="Performance.html">performance tuning guide</a></p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Configuration.html b/content/releases/2.0.0-SNAPSHOT/Configuration.html
index 99ccea6..5a304cd 100644
--- a/content/releases/2.0.0-SNAPSHOT/Configuration.html
+++ b/content/releases/2.0.0-SNAPSHOT/Configuration.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm has a variety of configurations for tweaking the behavior of nimbus, supervisors, and running topologies. Some configurations are system configurations and cannot be modified on a topology by topology basis, whereas other configurations can be modified per topology. </p>
+<div class="documentation-content"><p>Storm has a variety of configurations for tweaking the behavior of nimbus, supervisors, and running topologies. Some configurations are system configurations and cannot be modified on a topology-by-topology basis, whereas other configurations can be modified per topology. </p>
 
 <p>Every configuration has a default value defined in <a href="http://github.com/apache/storm/blob/master/conf/defaults.yaml">defaults.yaml</a> in the Storm codebase. You can override these configurations by defining a storm.yaml in the classpath of Nimbus and the supervisors. Finally, you can define a topology-specific configuration that you submit along with your topology when using <a href="javadocs/org/apache/storm/StormSubmitter.html">StormSubmitter</a>. However, the topology-specific configuration can only override configs prefixed with &quot;TOPOLOGY&quot;.</p>
 
@@ -187,7 +187,7 @@
 <li><a href="Running-topologies-on-a-production-cluster.html">Running topologies on a production cluster</a>: lists useful configurations when running topologies on a cluster</li>
 <li><a href="Local-mode.html">Local mode</a>: lists useful configurations when using local mode</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Contributing-to-Storm.html b/content/releases/2.0.0-SNAPSHOT/Contributing-to-Storm.html
index b854616..0dbb066 100644
--- a/content/releases/2.0.0-SNAPSHOT/Contributing-to-Storm.html
+++ b/content/releases/2.0.0-SNAPSHOT/Contributing-to-Storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="getting-started-with-contributing">Getting started with contributing</h3>
+<div class="documentation-content"><h3 id="getting-started-with-contributing">Getting started with contributing</h3>
 
 <p>Some of the issues on the <a href="https://issues.apache.org/jira/browse/STORM">issue tracker</a> are marked with the <a href="https://issues.apache.org/jira/browse/STORM-2891?jql=project%20%3D%20STORM%20AND%20status%20%3D%20Open%20AND%20labels%20in%20(newbie%2C%20%22newbie%2B%2B%22)">&quot;Newbie&quot;</a> label. If you&#39;re interested in contributing to Storm but don&#39;t know where to begin, these are good issues to start with. These issues are a great way to get your feet wet with learning the codebase because they require learning about only an isolated portion of the codebase and are a relatively small amount of work.</p>
 
@@ -172,7 +172,7 @@
 <h3 id="contributing-documentation">Contributing documentation</h3>
 
 <p>Documentation contributions are very welcome! The best way to send contributions is as emails through the mailing list.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Creating-a-new-Storm-project.html b/content/releases/2.0.0-SNAPSHOT/Creating-a-new-Storm-project.html
index 378a98d..7044ca2 100644
--- a/content/releases/2.0.0-SNAPSHOT/Creating-a-new-Storm-project.html
+++ b/content/releases/2.0.0-SNAPSHOT/Creating-a-new-Storm-project.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page outlines how to set up a Storm project for development. The steps are:</p>
+<div class="documentation-content"><p>This page outlines how to set up a Storm project for development. The steps are:</p>
 
 <ol>
 <li>Add Storm jars to classpath</li>
@@ -166,7 +166,7 @@
 <p>For more information on writing topologies in other languages, see <a href="Using-non-JVM-languages-with-Storm.html">Using non-JVM languages with Storm</a>.</p>
 
 <p>To test that everything is working in Eclipse, you should now be able to <code>Run</code> the <code>WordCountTopology.java</code> file. You will see messages being emitted at the console for 10 seconds.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/DSLs-and-multilang-adapters.html b/content/releases/2.0.0-SNAPSHOT/DSLs-and-multilang-adapters.html
index 7c83cba..981d652 100644
--- a/content/releases/2.0.0-SNAPSHOT/DSLs-and-multilang-adapters.html
+++ b/content/releases/2.0.0-SNAPSHOT/DSLs-and-multilang-adapters.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="Clojure-DSL.html">Clojure DSL</a></li>
 <li><a href="https://github.com/velvia/ScalaStorm">Scala DSL</a></li>
 <li><a href="https://github.com/colinsurprenant/redstorm">JRuby DSL</a></li>
@@ -152,7 +152,7 @@
 <li><a href="https://github.com/dan-blanchard/io-storm">io-storm</a>: Perl multilang adapter</li>
 <li><a href="https://github.com/Prolucid/FsShelter">FsShelter</a>: F# DSL and runtime with protobuf multilang</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Daemon-Fault-Tolerance.html b/content/releases/2.0.0-SNAPSHOT/Daemon-Fault-Tolerance.html
index f6e10ce..eb937ae 100644
--- a/content/releases/2.0.0-SNAPSHOT/Daemon-Fault-Tolerance.html
+++ b/content/releases/2.0.0-SNAPSHOT/Daemon-Fault-Tolerance.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm has several different daemon processes.  Nimbus that schedules workers, supervisors that launch and kill workers, the log viewer that gives access to logs, and the UI that shows the status of a cluster.</p>
+<div class="documentation-content"><p>Storm has several different daemon processes: Nimbus, which schedules workers; supervisors, which launch and kill workers; the log viewer, which gives access to logs; and the UI, which shows the status of a cluster.</p>
 
 <h2 id="what-happens-when-a-worker-dies">What happens when a worker dies?</h2>
 
@@ -169,7 +169,7 @@
 <h2 id="how-does-storm-guarantee-data-processing">How does Storm guarantee data processing?</h2>
 
 <p>Storm provides mechanisms to guarantee data processing even if nodes die or messages are lost. See <a href="Guaranteeing-message-processing.html">Guaranteeing message processing</a> for the details.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Defining-a-non-jvm-language-dsl-for-storm.html b/content/releases/2.0.0-SNAPSHOT/Defining-a-non-jvm-language-dsl-for-storm.html
index 51633f3..8fa36fd 100644
--- a/content/releases/2.0.0-SNAPSHOT/Defining-a-non-jvm-language-dsl-for-storm.html
+++ b/content/releases/2.0.0-SNAPSHOT/Defining-a-non-jvm-language-dsl-for-storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The right place to start to learn how to make a non-JVM DSL for Storm is <a href="http://github.com/apache/storm/blob/master/storm-client/src/storm.thrift">storm-client/src/storm.thrift</a>. Since Storm topologies are just Thrift structures, and Nimbus is a Thrift daemon, you can create and submit topologies in any language.</p>
+<div class="documentation-content"><p>The right place to start to learn how to make a non-JVM DSL for Storm is <a href="http://github.com/apache/storm/blob/master/storm-client/src/storm.thrift">storm-client/src/storm.thrift</a>. Since Storm topologies are just Thrift structures, and Nimbus is a Thrift daemon, you can create and submit topologies in any language.</p>
 
 <p>When you create the Thrift structs for spouts and bolts, the code for the spout or bolt is specified in the ComponentObject struct:</p>
 <div class="highlight"><pre><code class="language-" data-lang="">union ComponentObject {
@@ -165,7 +165,7 @@
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kt">void</span> <span class="nf">submitTopology</span><span class="o">(</span><span class="mi">1</span><span class="o">:</span> <span class="n">string</span> <span class="n">name</span><span class="o">,</span> <span class="mi">2</span><span class="o">:</span> <span class="n">string</span> <span class="n">uploadedJarLocation</span><span class="o">,</span> <span class="mi">3</span><span class="o">:</span> <span class="n">string</span> <span class="n">jsonConf</span><span class="o">,</span> <span class="mi">4</span><span class="o">:</span> <span class="n">StormTopology</span> <span class="n">topology</span><span class="o">)</span> <span class="kd">throws</span> <span class="o">(</span><span class="mi">1</span><span class="o">:</span> <span class="n">AlreadyAliveException</span> <span class="n">e</span><span class="o">,</span> <span class="mi">2</span><span class="o">:</span> <span class="n">InvalidTopologyException</span> <span class="n">ite</span><span class="o">);</span>
 </code></pre></div>
 <p>Finally, one of the key things to do in a non-JVM DSL is make it easy to define the entire topology in one file (the bolts, spouts, and the definition of the topology).</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Distributed-RPC.html b/content/releases/2.0.0-SNAPSHOT/Distributed-RPC.html
index bd26e03..a2a2198 100644
--- a/content/releases/2.0.0-SNAPSHOT/Distributed-RPC.html
+++ b/content/releases/2.0.0-SNAPSHOT/Distributed-RPC.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The idea behind distributed RPC (DRPC) is to parallelize the computation of really intense functions on the fly using Storm. The Storm topology takes in as input a stream of function arguments, and it emits an output stream of the results for each of those function calls. </p>
+<div class="documentation-content"><p>The idea behind distributed RPC (DRPC) is to parallelize the computation of computationally intensive functions on the fly using Storm. The Storm topology takes as input a stream of function arguments and emits an output stream of the results for each of those function calls. </p>
 
 <p>DRPC is not so much a feature of Storm as it is a pattern expressed from Storm&#39;s primitives of streams, spouts, bolts, and topologies. DRPC could have been packaged as a separate library from Storm, but it&#39;s so useful that it&#39;s bundled with Storm.</p>
 
@@ -347,7 +347,7 @@
 <li>KeyedFairBolt for weaving the processing of multiple requests at the same time</li>
 <li>How to use <code>CoordinatedBolt</code> directly</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Eventlogging.html b/content/releases/2.0.0-SNAPSHOT/Eventlogging.html
index a6e5945..e6564ce 100644
--- a/content/releases/2.0.0-SNAPSHOT/Eventlogging.html
+++ b/content/releases/2.0.0-SNAPSHOT/Eventlogging.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="introduction">Introduction</h1>
+<div class="documentation-content"><h1 id="introduction">Introduction</h1>
 
 <p>Topology event inspector provides the ability to view the tuples as it flows through different stages in a storm topology.
 This could be useful for inspecting the tuples emitted at a spout or a bolt in the topology pipeline while the topology is running, without stopping or redeploying the topology. The normal flow of tuples from the spouts to the bolts is not affected by turning on event logging.</p>
@@ -269,7 +269,7 @@
 
 <p>Please keep in mind that EventLoggerBolt is just a kind of Bolt, so whole throughput of the topology will go down when registered event loggers cannot keep up handling incoming events, so you may want to take care of the Bolt like normal Bolt.
 One of idea to avoid this is making your implementation of IEventLogger as <code>non-blocking</code> fashion.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/FAQ.html b/content/releases/2.0.0-SNAPSHOT/FAQ.html
index e2639e0..66a996e 100644
--- a/content/releases/2.0.0-SNAPSHOT/FAQ.html
+++ b/content/releases/2.0.0-SNAPSHOT/FAQ.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="best-practices">Best Practices</h2>
+<div class="documentation-content"><h2 id="best-practices">Best Practices</h2>
 
 <h3 id="what-rules-of-thumb-can-you-give-me-for-configuring-storm-trident">What rules of thumb can you give me for configuring Storm+Trident?</h3>
 
@@ -276,7 +276,7 @@
 <li>When possible, make your process incremental: each value that comes in makes the answer more and more true. A Trident ReducerAggregator is an operator that takes a prior result and a set of new records and returns a new result. This lets the result be cached and serialized to a datastore; if a server drops off line for a day and then comes back with a full day&#39;s worth of data in a rush, the old results will be calmly retrieved and updated.</li>
 <li>Lambda architecture: Record all events into an archival store (S3, HBase, HDFS) on receipt. in the fast layer, once the time window is clear, process the bucket to get an actionable answer, and ignore everything older than the time window. Periodically run a global aggregation to calculate a &quot;correct&quot; answer.</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Fault-tolerance.html b/content/releases/2.0.0-SNAPSHOT/Fault-tolerance.html
index f691c2c..adcfd30 100644
--- a/content/releases/2.0.0-SNAPSHOT/Fault-tolerance.html
+++ b/content/releases/2.0.0-SNAPSHOT/Fault-tolerance.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains the design details of Storm that make it a fault-tolerant system.</p>
+<div class="documentation-content"><p>This page explains the design details of Storm that make it a fault-tolerant system.</p>
 
 <h2 id="what-happens-when-a-worker-dies">What happens when a worker dies?</h2>
 
@@ -169,7 +169,7 @@
 <h2 id="how-does-storm-guarantee-data-processing">How does Storm guarantee data processing?</h2>
 
 <p>Storm provides mechanisms to guarantee data processing even if nodes die or messages are lost. See <a href="Guaranteeing-message-processing.html">Guaranteeing message processing</a> for the details.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Guaranteeing-message-processing.html b/content/releases/2.0.0-SNAPSHOT/Guaranteeing-message-processing.html
index 503abfc..655cf8c 100644
--- a/content/releases/2.0.0-SNAPSHOT/Guaranteeing-message-processing.html
+++ b/content/releases/2.0.0-SNAPSHOT/Guaranteeing-message-processing.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm offers several different levels of guaranteed message processing, including best effort, at least once, and exactly once through <a href="Trident-tutorial.html">Trident</a>.
+<div class="documentation-content"><p>Storm offers several different levels of guaranteed message processing, including best effort, at least once, and exactly once through <a href="Trident-tutorial.html">Trident</a>.
 This page describes how Storm can guarantee at least once processing.</p>
 
 <h3 id="what-does-it-mean-for-a-message-to-be-fully-processed">What does it mean for a message to be &quot;fully processed&quot;?</h3>
@@ -301,7 +301,7 @@
 <p>The second way is to remove reliability on a message by message basis. You can turn off tracking for an individual spout tuple by omitting a message id in the <code>SpoutOutputCollector.emit</code> method.</p>
 
 <p>Finally, if you don&#39;t care if a particular subset of the tuples downstream in the topology fail to be processed, you can emit them as unanchored tuples. Since they&#39;re not anchored to any spout tuples, they won&#39;t cause any spout tuples to fail if they aren&#39;t acked.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Hooks.html b/content/releases/2.0.0-SNAPSHOT/Hooks.html
index 20ac5c7..7b71432 100644
--- a/content/releases/2.0.0-SNAPSHOT/Hooks.html
+++ b/content/releases/2.0.0-SNAPSHOT/Hooks.html
@@ -144,13 +144,13 @@
 
 <p class="post-meta"></p>
 
-<p>Storm provides hooks with which you can insert custom code to run on any number of events within Storm. You create a hook by extending the <a href="javadocs/org/apache/storm/hooks/BaseTaskHook.html">BaseTaskHook</a> class and overriding the appropriate method for the event you want to catch. There are two ways to register your hook:</p>
+<div class="documentation-content"><p>Storm provides hooks with which you can insert custom code to run on any number of events within Storm. You create a hook by extending the <a href="javadocs/org/apache/storm/hooks/BaseTaskHook.html">BaseTaskHook</a> class and overriding the appropriate method for the event you want to catch. There are two ways to register your hook:</p>
 
 <ol>
 <li>In the open method of your spout or prepare method of your bolt using the <a href="javadocs/org/apache/storm/task/TopologyContext.html#addTaskHook">TopologyContext</a> method.</li>
 <li>Through the Storm configuration using the <a href="javadocs/org/apache/storm/Config.html#TOPOLOGY_AUTO_TASK_HOOKS">&quot;topology.auto.task.hooks&quot;</a> config. These hooks are automatically registered in every spout or bolt, and are useful for doing things like integrating with a custom monitoring system.</li>
 </ol>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/IConfigLoader.html b/content/releases/2.0.0-SNAPSHOT/IConfigLoader.html
index 366d728..e9de91f 100644
--- a/content/releases/2.0.0-SNAPSHOT/IConfigLoader.html
+++ b/content/releases/2.0.0-SNAPSHOT/IConfigLoader.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="introduction">Introduction</h3>
+<div class="documentation-content"><h3 id="introduction">Introduction</h3>
 
 <p>IConfigLoader is an interface designed to allow dynamic loading of scheduler resource constraints. Currently, the MultiTenant scheduler uses this interface to dynamically load the number of isolated nodes a given user has been guaranteed, and the ResoureAwareScheduler uses the interface to dynamically load per user resource guarantees.</p>
 
@@ -195,7 +195,7 @@
 <li>scheduler.config.loader.polltime.secs: Currently only used in <code>ArtifactoryConfigLoader</code>. It is the frequency at which the plugin will call out to artifactory instead of returning the most recently cached result. The default is 600 seconds.</li>
 <li>scheduler.config.loader.artifactory.base.directory: Only used in <code>ArtifactoryConfigLoader</code>. It is the part of the uri, configurable in Artifactory, which represents the top of the directory tree. It defaults to &quot;/artifactory&quot;.</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Implementation-docs.html b/content/releases/2.0.0-SNAPSHOT/Implementation-docs.html
index 6d9dae9..d5d6d11 100644
--- a/content/releases/2.0.0-SNAPSHOT/Implementation-docs.html
+++ b/content/releases/2.0.0-SNAPSHOT/Implementation-docs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This section of the wiki is dedicated to explaining how Storm is implemented. You should have a good grasp of how to use Storm before reading these sections. </p>
+<div class="documentation-content"><p>This section of the wiki is dedicated to explaining how Storm is implemented. You should have a good grasp of how to use Storm before reading these sections. </p>
 
 <ul>
 <li><a href="Structure-of-the-codebase.html">Structure of the codebase</a></li>
@@ -155,7 +155,7 @@
 <li><a href="nimbus-ha-design.html">Nimbus HA</a></li>
 <li><a href="storm-sql-internal.html">Storm SQL</a></li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Installing-native-dependencies.html b/content/releases/2.0.0-SNAPSHOT/Installing-native-dependencies.html
index fb68946..abfce3d 100644
--- a/content/releases/2.0.0-SNAPSHOT/Installing-native-dependencies.html
+++ b/content/releases/2.0.0-SNAPSHOT/Installing-native-dependencies.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The native dependencies are only needed on actual Storm clusters. When running Storm in local mode, Storm uses a pure Java messaging system so that you don&#39;t need to install native dependencies on your development machine.</p>
+<div class="documentation-content"><p>The native dependencies are only needed on actual Storm clusters. When running Storm in local mode, Storm uses a pure Java messaging system so that you don&#39;t need to install native dependencies on your development machine.</p>
 
 <p>Installing ZeroMQ and JZMQ is usually straightforward. Sometimes, however, people run into issues with autoconf and get strange errors. If you run into any issues, please email the <a href="http://groups.google.com/group/storm-user">Storm mailing list</a> or come get help in the #storm-user room on freenode. </p>
 
@@ -175,7 +175,7 @@
 </ol>
 
 <p>If you run into any errors when running <code>./configure</code>, <a href="http://stackoverflow.com/questions/3522248/how-do-i-compile-jzmq-for-zeromq-on-osx">this thread</a> may provide a solution.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Joins.html b/content/releases/2.0.0-SNAPSHOT/Joins.html
index 809ab78..96b615f 100644
--- a/content/releases/2.0.0-SNAPSHOT/Joins.html
+++ b/content/releases/2.0.0-SNAPSHOT/Joins.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm core supports joining multiple data streams into one with the help of <code>JoinBolt</code>.
+<div class="documentation-content"><p>Storm core supports joining multiple data streams into one with the help of <code>JoinBolt</code>.
 <code>JoinBolt</code> is a Windowed bolt, i.e. it waits for the configured window duration to match up the
 tuples among the streams being joined. This helps align the streams within a Window boundary.</p>
 
@@ -272,7 +272,7 @@
 <li>Lastly, keep the window size to the minimum value necessary for solving the problem at hand.</li>
 </ul></li>
 </ol>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Kestrel-and-Storm.html b/content/releases/2.0.0-SNAPSHOT/Kestrel-and-Storm.html
index 6658e34..6167aec 100644
--- a/content/releases/2.0.0-SNAPSHOT/Kestrel-and-Storm.html
+++ b/content/releases/2.0.0-SNAPSHOT/Kestrel-and-Storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains how to use Storm to consume items from a Kestrel cluster.</p>
+<div class="documentation-content"><p>This page explains how to use Storm to consume items from a Kestrel cluster.</p>
 
 <h2 id="preliminaries">Preliminaries</h2>
 
@@ -334,7 +334,7 @@
 Now execute the program to add items to the queue and launch the Storm topology. The order in which you launch the programs is of no importance.
 
 If you run the topology with TOPOLOGY_DEBUG you should see tuples being emitted in the topology.
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Lifecycle-of-a-topology.html b/content/releases/2.0.0-SNAPSHOT/Lifecycle-of-a-topology.html
index eebe42d..d2765a6 100644
--- a/content/releases/2.0.0-SNAPSHOT/Lifecycle-of-a-topology.html
+++ b/content/releases/2.0.0-SNAPSHOT/Lifecycle-of-a-topology.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>(<strong>NOTE</strong>: this page is based on the 0.7.1 code; many things have changed since then, including a split between tasks and executors, and a reorganization of the code under <code>storm-client/src</code> rather than <code>src/</code>.)</p>
+<div class="documentation-content"><p>(<strong>NOTE</strong>: this page is based on the 0.7.1 code; many things have changed since then, including a split between tasks and executors, and a reorganization of the code under <code>storm-client/src</code> rather than <code>src/</code>.)</p>
 
 <p>This page explains in detail the lifecycle of a topology from running the &quot;storm jar&quot; command to uploading the topology to Nimbus to the supervisors starting/stopping workers to workers and tasks setting themselves up. It also explains how Nimbus monitors topologies and how topologies are shutdown when they are killed.</p>
 
@@ -261,7 +261,7 @@
 <li>Removing a topology cleans out the assignment and static information from ZK <a href="https://github.com/apache/storm/blob/0.7.1/src/clj/org/apache/storm/daemon/nimbus.clj#L116">code</a></li>
 <li>A separate cleanup thread runs the <code>do-cleanup</code> function which will clean up the heartbeat dir and the jars/configs stored locally. <a href="https://github.com/apache/storm/blob/0.7.1/src/clj/org/apache/storm/daemon/nimbus.clj#L577">code</a></li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Local-mode.html b/content/releases/2.0.0-SNAPSHOT/Local-mode.html
index fef6c8b..f306942 100644
--- a/content/releases/2.0.0-SNAPSHOT/Local-mode.html
+++ b/content/releases/2.0.0-SNAPSHOT/Local-mode.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Local mode simulates a Storm cluster in process and is useful for developing and testing topologies. Running topologies in local mode is similar to running topologies <a href="Running-topologies-on-a-production-cluster.html">on a cluster</a>.</p>
+<div class="documentation-content"><p>Local mode simulates a Storm cluster in process and is useful for developing and testing topologies. Running topologies in local mode is similar to running topologies <a href="Running-topologies-on-a-production-cluster.html">on a cluster</a>.</p>
 
 <p>To run a topology in local mode you have two options.  The most common option is to run your topology with <code>storm local</code> instead of <code>storm jar</code></p>
 
@@ -213,7 +213,7 @@
 
 <p>These, like all other configs, can be set on the command line when launching your toplogy with the <code>-c</code> flag.  The flag is of the form <code>-c &lt;conf_name&gt;=&lt;JSON_VALUE&gt;</code>  so to enable debugging when launching your topology in local mode you could run</p>
 <div class="highlight"><pre><code class="language-" data-lang="">storm local topology.jar &lt;MY_MAIN_CLASS&gt; -c topology.debug=true
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Logs.html b/content/releases/2.0.0-SNAPSHOT/Logs.html
index 4d4e70f..ad0e965 100644
--- a/content/releases/2.0.0-SNAPSHOT/Logs.html
+++ b/content/releases/2.0.0-SNAPSHOT/Logs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Logs in Storm are essential for tracking the status, operations, error messages and debug information for all the 
+<div class="documentation-content"><p>Logs in Storm are essential for tracking the status, operations, error messages and debug information for all the 
 daemons (e.g., nimbus, supervisor, logviewer, drpc, ui, pacemaker) and topologies&#39; workers.</p>
 
 <h3 id="location-of-the-logs">Location of the Logs</h3>
@@ -171,7 +171,7 @@
 <p>Search in a topology: a user can also search a string for a certain topology by clicking the icon of magnifying lens at the top right corner of the UI page. This means the UI will try to search on all the supervisor nodes in a distributed way to find the matched string in all logs for this topology. The search can happen for either normal text log files or rolled zip log files by checking/unchecking the &quot;Search archived logs:&quot; box. Then the matched results can be shown on the UI with url links, directing the user to the certain logs on each supervisor node. This powerful feature is very helpful for users to find certain problematic supervisor nodes running this topology.</p>
 
 <p><img src="images/search-a-topology.png" alt="Search in a topology" title="Search in a topology"></p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Maven.html b/content/releases/2.0.0-SNAPSHOT/Maven.html
index ce7b7a2..a4be378 100644
--- a/content/releases/2.0.0-SNAPSHOT/Maven.html
+++ b/content/releases/2.0.0-SNAPSHOT/Maven.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>To develop topologies, you&#39;ll need the Storm jars on your classpath. You should either include the unpacked jars in the classpath for your project or use Maven to include Storm as a development dependency. Storm is hosted on Maven Central. To include Storm in your project as a development dependency, add the following to your pom.xml:</p>
+<div class="documentation-content"><p>To develop topologies, you&#39;ll need the Storm jars on your classpath. You should either include the unpacked jars in the classpath for your project or use Maven to include Storm as a development dependency. Storm is hosted on Maven Central. To include Storm in your project as a development dependency, add the following to your pom.xml:</p>
 <div class="highlight"><pre><code class="language-xml" data-lang="xml"><span class="nt">&lt;dependency&gt;</span>
   <span class="nt">&lt;groupId&gt;</span>org.apache.storm<span class="nt">&lt;/groupId&gt;</span>
   <span class="nt">&lt;artifactId&gt;</span>storm-client<span class="nt">&lt;/artifactId&gt;</span>
@@ -157,7 +157,7 @@
 <h3 id="developing-storm">Developing Storm</h3>
 
 <p>Please refer to <a href="http://github.com/apache/storm/blob/master/DEVELOPER.md">DEVELOPER.md</a> for more details.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Message-passing-implementation.html b/content/releases/2.0.0-SNAPSHOT/Message-passing-implementation.html
index a598345..3bc6c66 100644
--- a/content/releases/2.0.0-SNAPSHOT/Message-passing-implementation.html
+++ b/content/releases/2.0.0-SNAPSHOT/Message-passing-implementation.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>(Note: this walkthrough is out of date as of 0.8.0. 0.8.0 revamped the message passing infrastructure to be based on the Disruptor)</p>
+<div class="documentation-content"><p>(Note: this walkthrough is out of date as of 0.8.0. 0.8.0 revamped the message passing infrastructure to be based on the Disruptor)</p>
 
 <p>This page walks through how emitting and transferring tuples works in Storm.</p>
 
@@ -186,7 +186,7 @@
 </ul></li>
 </ul></li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Metrics.html b/content/releases/2.0.0-SNAPSHOT/Metrics.html
index 576c702..248755b 100644
--- a/content/releases/2.0.0-SNAPSHOT/Metrics.html
+++ b/content/releases/2.0.0-SNAPSHOT/Metrics.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm exposes a metrics interface to report summary statistics across the full topology.
+<div class="documentation-content"><p>Storm exposes a metrics interface to report summary statistics across the full topology.
 The numbers you see on the UI come from some of these built in metrics, but are reported through the worker heartbeats instead of through the IMetricsConsumer described below.</p>
 
 <h3 id="metric-types">Metric Types</h3>
@@ -474,7 +474,7 @@
 <li><code>newWorkerEvent</code> is 1 when a worker is first started and 0 all other times.  This can be used to tell when a worker has crashed and is restarted.</li>
 <li><code>startTimeSecs</code> is when the worker started in seconds since the epoch</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Multilang-protocol.html b/content/releases/2.0.0-SNAPSHOT/Multilang-protocol.html
index aebc4c4..91f34d4 100644
--- a/content/releases/2.0.0-SNAPSHOT/Multilang-protocol.html
+++ b/content/releases/2.0.0-SNAPSHOT/Multilang-protocol.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains the multilang protocol as of Storm 0.7.1. Versions prior to 0.7.1 used a somewhat different protocol, documented [here](Storm-multi-language-protocol-(versions-0.7.0-and-below).html).</p>
+<div class="documentation-content"><p>This page explains the multilang protocol as of Storm 0.7.1. Versions prior to 0.7.1 used a somewhat different protocol, documented <a href="Storm-multi-language-protocol-(versions-0.7.0-and-below).html">here</a>.</p>
 
 <h1 id="storm-multi-language-protocol">Storm Multi-Language Protocol</h1>
 
@@ -436,7 +436,7 @@
 </code></pre></div>
 <p>When subprocess receives heartbeat tuple, it must send a <code>sync</code> command back to
 ShellBolt.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Pacemaker.html b/content/releases/2.0.0-SNAPSHOT/Pacemaker.html
index 0412e35..4011335 100644
--- a/content/releases/2.0.0-SNAPSHOT/Pacemaker.html
+++ b/content/releases/2.0.0-SNAPSHOT/Pacemaker.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="introduction">Introduction</h3>
+<div class="documentation-content"><h3 id="introduction">Introduction</h3>
 
 <p>Pacemaker is a storm daemon designed to process heartbeats from workers. As Storm is scaled up, ZooKeeper begins to become a bottleneck due to high volumes of writes from workers doing heartbeats. Lots of writes to disk and too much traffic across the network is generated as ZooKeeper tries to maintain consistency.</p>
 
@@ -253,7 +253,7 @@
 On a 270 supervisor cluster, fully scheduled with topologies, Pacemaker resource utilization was 70% of one core and nearly 1GiB of RAM on a machine with 4 <code>Intel(R) Xeon(R) CPU E5530 @ 2.40GHz</code> and 24GiB of RAM.</p>
 
 <p>Pacemaker now supports HA. Multiple Pacemaker instances can be used at once in a storm cluster to allow massive scalability. Just include the names of the Pacemaker hosts in the pacemaker.servers config and workers and Nimbus will start communicating with them. They&#39;re fault tolerant as well. The system keeps on working as long as there is at least one pacemaker left running - provided it can handle the load.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Performance.html b/content/releases/2.0.0-SNAPSHOT/Performance.html
index 2224555..ba1089d 100644
--- a/content/releases/2.0.0-SNAPSHOT/Performance.html
+++ b/content/releases/2.0.0-SNAPSHOT/Performance.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Latency, throughput and resource consumption are the three key dimensions involved in performance tuning.
+<div class="documentation-content"><p>Latency, throughput and resource consumption are the three key dimensions involved in performance tuning.
 In the following sections we discuss the settings that can used to tune along these dimension and understand the trade-offs.</p>
 
 <p>It is important to understand that these settings can vary depending on the topology, the type of hardware and the number of hosts used by the topology.</p>
@@ -311,7 +311,7 @@
 core for executors that are not likely to saturate the CPU.</p>
 
 <p>The <em>system bolt</em> generally processes very few messages per second, and so requires very little cpu (typically less than 10% of a physical core).</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Powered-By.html b/content/releases/2.0.0-SNAPSHOT/Powered-By.html
index f7851d4..06e6898 100644
--- a/content/releases/2.0.0-SNAPSHOT/Powered-By.html
+++ b/content/releases/2.0.0-SNAPSHOT/Powered-By.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Want to be added to this page? Send an email <a href="mailto:nathan.marz@gmail.com">here</a>.</p>
+<div class="documentation-content"><p>Want to be added to this page? Send an email <a href="mailto:nathan.marz@gmail.com">here</a>.</p>
 
 <table>
 
@@ -1179,7 +1179,7 @@
 
 
 </table>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Project-ideas.html b/content/releases/2.0.0-SNAPSHOT/Project-ideas.html
index 0149c2a..b6a3e04 100644
--- a/content/releases/2.0.0-SNAPSHOT/Project-ideas.html
+++ b/content/releases/2.0.0-SNAPSHOT/Project-ideas.html
@@ -144,12 +144,12 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><strong>DSLs for non-JVM languages:</strong> These DSL&#39;s should be all-inclusive and not require any Java for the creation of topologies, spouts, or bolts. Since topologies are <a href="http://thrift.apache.org/">Thrift</a> structs, Nimbus is a Thrift service, and bolts can be written in any language, this is possible.</li>
 <li><strong>Online machine learning algorithms:</strong> Something like <a href="http://mahout.apache.org/">Mahout</a> but for online algorithms</li>
 <li><strong>Suite of performance benchmarks:</strong> These benchmarks should test Storm&#39;s performance on CPU and IO intensive workloads. There should be benchmarks for different classes of applications, such as stream processing (where throughput is the priority) and distributed RPC (where latency is the priority). </li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Rationale.html b/content/releases/2.0.0-SNAPSHOT/Rationale.html
index 6afc3b6..9005e8d 100644
--- a/content/releases/2.0.0-SNAPSHOT/Rationale.html
+++ b/content/releases/2.0.0-SNAPSHOT/Rationale.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The past decade has seen a revolution in data processing. MapReduce, Hadoop, and related technologies have made it possible to store and process data at scales previously unthinkable. Unfortunately, these data processing technologies are not realtime systems, nor are they meant to be. There&#39;s no hack that will turn Hadoop into a realtime system; realtime data processing has a fundamentally different set of requirements than batch processing.</p>
+<div class="documentation-content"><p>The past decade has seen a revolution in data processing. MapReduce, Hadoop, and related technologies have made it possible to store and process data at scales previously unthinkable. Unfortunately, these data processing technologies are not realtime systems, nor are they meant to be. There&#39;s no hack that will turn Hadoop into a realtime system; realtime data processing has a fundamentally different set of requirements than batch processing.</p>
 
 <p>However, realtime data processing at massive scale is becoming more and more of a requirement for businesses. The lack of a &quot;Hadoop of realtime&quot; has become the biggest hole in the data processing ecosystem.</p>
 
@@ -176,7 +176,7 @@
 <li><strong>Fault-tolerant</strong>: If there are faults during execution of your computation, Storm will reassign tasks as necessary. Storm makes sure that a computation can run forever (or until you kill the computation).</li>
 <li><strong>Programming language agnostic</strong>: Robust and scalable realtime processing shouldn&#39;t be limited to a single platform. Storm topologies and processing components can be defined in any language, making Storm accessible to nearly anyone.</li>
 </ol>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Resource_Aware_Scheduler_overview.html b/content/releases/2.0.0-SNAPSHOT/Resource_Aware_Scheduler_overview.html
index 814b5e2..a1a4bff 100644
--- a/content/releases/2.0.0-SNAPSHOT/Resource_Aware_Scheduler_overview.html
+++ b/content/releases/2.0.0-SNAPSHOT/Resource_Aware_Scheduler_overview.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="introduction">Introduction</h1>
+<div class="documentation-content"><h1 id="introduction">Introduction</h1>
 
 <p>The purpose of this document is to provide a description of the Resource Aware Scheduler for the Storm distributed real-time computation system.  This document will provide you with both a high level description of the resource aware scheduler in Storm.  Some of the benefits are using a resource aware scheduler on top of Storm is outlined in the following presentation at Hadoop Summit 2016:</p>
 
@@ -691,7 +691,7 @@
 <td><img src="images/ras_new_strategy_runtime_yahoo.png" alt=""></td>
 </tr>
 </tbody></table>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Running-topologies-on-a-production-cluster.html b/content/releases/2.0.0-SNAPSHOT/Running-topologies-on-a-production-cluster.html
index 0cfa1a1..e978c7c 100644
--- a/content/releases/2.0.0-SNAPSHOT/Running-topologies-on-a-production-cluster.html
+++ b/content/releases/2.0.0-SNAPSHOT/Running-topologies-on-a-production-cluster.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Running topologies on a production cluster is similar to running in <a href="Local-mode.html">Local mode</a>. Here are the steps:</p>
+<div class="documentation-content"><p>Running topologies on a production cluster is similar to running in <a href="Local-mode.html">Local mode</a>. Here are the steps:</p>
 
 <p>1) Define the topology (Use <a href="javadocs/org/apache/storm/topology/TopologyBuilder.html">TopologyBuilder</a> if defining using Java)</p>
 
@@ -212,7 +212,7 @@
 <p>The best place to monitor a topology is using the Storm UI. The Storm UI provides information about errors happening in tasks and fine-grained stats on the throughput and latency performance of each component of each running topology.</p>
 
 <p>You can also look at the worker logs on the cluster machines.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/SECURITY.html b/content/releases/2.0.0-SNAPSHOT/SECURITY.html
index 4ceb90e..6770736 100644
--- a/content/releases/2.0.0-SNAPSHOT/SECURITY.html
+++ b/content/releases/2.0.0-SNAPSHOT/SECURITY.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="running-apache-storm-securely">Running Apache Storm Securely</h1>
+<div class="documentation-content"><h1 id="running-apache-storm-securely">Running Apache Storm Securely</h1>
 
 <p>Apache Storm offers a range of configuration options when trying to secure
 your cluster.  By default, all authentication and authorization are disabled, but 
@@ -709,7 +709,7 @@
  | storm.zookeeper.topology.auth.payload | A string representing the payload for topology Zookeeper authentication. |</p>
 
 <p>Note: If storm.zookeeper.topology.auth.payload isn&#39;t set, Storm will generate a ZooKeeper secret payload for MD5-digest with the generateZookeeperDigestSecretPayload() method.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/STORM-UI-REST-API.html b/content/releases/2.0.0-SNAPSHOT/STORM-UI-REST-API.html
index f014c10..e196203 100644
--- a/content/releases/2.0.0-SNAPSHOT/STORM-UI-REST-API.html
+++ b/content/releases/2.0.0-SNAPSHOT/STORM-UI-REST-API.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The Storm UI daemon provides a REST API that allows you to interact with a Storm cluster, which includes retrieving
+<div class="documentation-content"><p>The Storm UI daemon provides a REST API that allows you to interact with a Storm cluster, which includes retrieving
 metrics data and configuration information as well as management operations such as starting or stopping topologies.</p>
 
 <h1 id="data-format">Data format</h1>
@@ -3118,7 +3118,7 @@
 <h3 id="drpc-func-get">/drpc/:func (GET)</h3>
 
 <p>In some rare cases <code>:args</code> may not be needed by the DRPC command.  If no <code>:args</code> section is given in the DRPC request, an empty string <code>&quot;&quot;</code> will be used for the arguments.</p>
-
+</div>
 
 
 	          </div>
diff --git "a/content/releases/2.0.0-SNAPSHOT/Serialization-\050prior-to-0.6.0\051.html" "b/content/releases/2.0.0-SNAPSHOT/Serialization-\050prior-to-0.6.0\051.html"
index a7a1baf..56a6d2f 100644
--- "a/content/releases/2.0.0-SNAPSHOT/Serialization-\050prior-to-0.6.0\051.html"
+++ "b/content/releases/2.0.0-SNAPSHOT/Serialization-\050prior-to-0.6.0\051.html"
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Tuples can be comprised of objects of any types. Since Storm is a distributed system, it needs to know how to serialize and deserialize objects when they&#39;re passed between tasks. By default Storm can serialize ints, shorts, longs, floats, doubles, bools, bytes, strings, and byte arrays, but if you want to use another type in your tuples, you&#39;ll need to implement a custom serializer.</p>
+<div class="documentation-content"><p>Tuples can be comprised of objects of any type. Since Storm is a distributed system, it needs to know how to serialize and deserialize objects when they&#39;re passed between tasks. By default Storm can serialize ints, shorts, longs, floats, doubles, bools, bytes, strings, and byte arrays, but if you want to use another type in your tuples, you&#39;ll need to implement a custom serializer.</p>
 
 <h3 id="dynamic-typing">Dynamic typing</h3>
 
@@ -188,7 +188,7 @@
 <p>Storm provides helpers for registering serializers in a topology config. The <a href="javadocs/backtype/storm/Config.html">Config</a> class has a method called <code>addSerialization</code> that takes in a serializer class to add to the config.</p>
 
 <p>There&#39;s an advanced config called Config.TOPOLOGY_SKIP_MISSING_SERIALIZATIONS. If you set this to true, Storm will ignore any serializations that are registered but do not have their code available on the classpath. Otherwise, Storm will throw errors when it can&#39;t find a serialization. This is useful if you run many topologies on a cluster that each have different serializations, but you want to declare all the serializations across all topologies in the <code>storm.yaml</code> files.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Serialization.html b/content/releases/2.0.0-SNAPSHOT/Serialization.html
index ba39709..40fad17 100644
--- a/content/releases/2.0.0-SNAPSHOT/Serialization.html
+++ b/content/releases/2.0.0-SNAPSHOT/Serialization.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page is about how the serialization system in Storm works for versions 0.6.0 and onwards. Storm used a different serialization system prior to 0.6.0 which is documented on <a href="Serialization-(prior-to-0.6.0).html">Serialization (prior to 0.6.0)</a>. </p>
+<div class="documentation-content"><p>This page is about how the serialization system in Storm works for versions 0.6.0 and onwards. Storm used a different serialization system prior to 0.6.0 which is documented on <a href="Serialization-(prior-to-0.6.0).html">Serialization (prior to 0.6.0)</a>. </p>
 
 <p>Tuples can be comprised of objects of any type. Since Storm is a distributed system, it needs to know how to serialize and deserialize objects when they&#39;re passed between tasks.</p>
 
@@ -206,7 +206,7 @@
 <p>When a topology is submitted, a single set of serializations is chosen to be used by all components in the topology for sending messages. This is done by merging the component-specific serializer registrations with the regular set of serialization registrations. If two components define serializers for the same class, one of the serializers is chosen arbitrarily.</p>
 
 <p>To force a serializer for a particular class if there&#39;s a conflict between two component-specific registrations, just define the serializer you want to use in the topology-specific configuration. The topology-specific configuration has precedence over component-specific configurations for serialization registrations.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Serializers.html b/content/releases/2.0.0-SNAPSHOT/Serializers.html
index 43c4c5c..46a75c0 100644
--- a/content/releases/2.0.0-SNAPSHOT/Serializers.html
+++ b/content/releases/2.0.0-SNAPSHOT/Serializers.html
@@ -144,10 +144,10 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="https://github.com/rapportive-oss/storm-json">storm-json</a>: Simple JSON serializer for Storm</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Setting-up-a-Storm-cluster.html b/content/releases/2.0.0-SNAPSHOT/Setting-up-a-Storm-cluster.html
index b3cb6a0..a3892eb 100644
--- a/content/releases/2.0.0-SNAPSHOT/Setting-up-a-Storm-cluster.html
+++ b/content/releases/2.0.0-SNAPSHOT/Setting-up-a-Storm-cluster.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page outlines the steps for getting a Storm cluster up and running. If you&#39;re on AWS, you should check out the <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> project. <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> completely automates the provisioning, configuration, and installation of Storm clusters on EC2. It also sets up Ganglia for you so you can monitor CPU, disk, and network usage.</p>
+<div class="documentation-content"><p>This page outlines the steps for getting a Storm cluster up and running. If you&#39;re on AWS, you should check out the <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> project. <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> completely automates the provisioning, configuration, and installation of Storm clusters on EC2. It also sets up Ganglia for you so you can monitor CPU, disk, and network usage.</p>
 
 <p>If you run into difficulties with your Storm cluster, first check whether a solution is in the <a href="Troubleshooting.html">Troubleshooting</a> page. Otherwise, email the mailing list.</p>
 
@@ -260,7 +260,7 @@
 <p>DRPC optionally offers a REST API as well.  To enable it, set the config <code>drpc.http.port</code> to the port you want it to run on before launching the DRPC server. See the <a href="STORM-UI-REST-API.html">REST documentation</a> for more information on how to use it.</p>
 
 <p>It also supports SSL by setting <code>drpc.https.port</code> along with the keystore and optional truststore similar to how you would configure the UI.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Setting-up-development-environment.html b/content/releases/2.0.0-SNAPSHOT/Setting-up-development-environment.html
index 3ebcf1c..0823b69 100644
--- a/content/releases/2.0.0-SNAPSHOT/Setting-up-development-environment.html
+++ b/content/releases/2.0.0-SNAPSHOT/Setting-up-development-environment.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page outlines what you need to do to get a Storm development environment set up. In summary, the steps are:</p>
+<div class="documentation-content"><p>This page outlines what you need to do to get a Storm development environment set up. In summary, the steps are:</p>
 
 <ol>
 <li>Download a <a href="..//downloads.html">Storm release</a> , unpack it, and put the unpacked <code>bin/</code> directory on your PATH</li>
@@ -171,7 +171,7 @@
 
 <p>The previous step installed the <code>storm</code> client on your machine, which is used to communicate with remote Storm clusters. Now all you have to do is tell the client which Storm cluster to talk to by putting the host address of the master in the <code>~/.storm/storm.yaml</code> file. It should look something like this:</p>
 <div class="highlight"><pre><code class="language-" data-lang="">nimbus.seeds: ["123.45.678.890"]
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Spout-implementations.html b/content/releases/2.0.0-SNAPSHOT/Spout-implementations.html
index 1a7fe46..8aebe85 100644
--- a/content/releases/2.0.0-SNAPSHOT/Spout-implementations.html
+++ b/content/releases/2.0.0-SNAPSHOT/Spout-implementations.html
@@ -144,14 +144,14 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="https://github.com/nathanmarz/storm-kestrel">storm-kestrel</a>: Adapter to use Kestrel as a spout</li>
 <li><a href="https://github.com/rapportive-oss/storm-amqp-spout">storm-amqp-spout</a>: Adapter to use AMQP source as a spout</li>
 <li><a href="https://github.com/ptgoetz/storm-jms">storm-jms</a>: Adapter to use a JMS source as a spout</li>
 <li><a href="https://github.com/sorenmacbeth/storm-redis-pubsub">storm-redis-pubsub</a>: A spout that subscribes to a Redis pubsub stream</li>
 <li><a href="https://github.com/haitaoyao/storm-beanstalkd-spout">storm-beanstalkd-spout</a>: A spout that subscribes to a beanstalkd queue</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/State-checkpointing.html b/content/releases/2.0.0-SNAPSHOT/State-checkpointing.html
index 5a22519..f28830c 100644
--- a/content/releases/2.0.0-SNAPSHOT/State-checkpointing.html
+++ b/content/releases/2.0.0-SNAPSHOT/State-checkpointing.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="state-support-in-core-storm">State support in core storm</h1>
+<div class="documentation-content"><h1 id="state-support-in-core-storm">State support in core storm</h1>
 
 <p>Storm core has abstractions for bolts to save and retrieve the state of their operations. There is a default in-memory
 based state implementation and also a Redis backed implementation that provides state persistence.</p>
@@ -421,7 +421,7 @@
 </ul>
 
 <p><code>org.apache.storm:storm-hbase:&lt;storm-version&gt;</code></p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Storm-Scheduler.html b/content/releases/2.0.0-SNAPSHOT/Storm-Scheduler.html
index 278dc3e..feadece 100644
--- a/content/releases/2.0.0-SNAPSHOT/Storm-Scheduler.html
+++ b/content/releases/2.0.0-SNAPSHOT/Storm-Scheduler.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm now has 4 kinds of built-in schedulers: <a href="http://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/scheduler/DefaultScheduler.java">DefaultScheduler</a>, <a href="http://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/scheduler/IsolationScheduler.java">IsolationScheduler</a>, <a href="http://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/scheduler/multitenant/MultitenantScheduler.java">MultitenantScheduler</a>, <a href="Resource_Aware_Scheduler_overview.html">ResourceAwareScheduler</a>. </p>
+<div class="documentation-content"><p>Storm now has 4 kinds of built-in schedulers: <a href="http://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/scheduler/DefaultScheduler.java">DefaultScheduler</a>, <a href="http://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/scheduler/IsolationScheduler.java">IsolationScheduler</a>, <a href="http://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/scheduler/multitenant/MultitenantScheduler.java">MultitenantScheduler</a>, <a href="Resource_Aware_Scheduler_overview.html">ResourceAwareScheduler</a>. </p>
 
 <h2 id="pluggable-scheduler">Pluggable scheduler</h2>
 
@@ -163,7 +163,7 @@
 <p>Any topologies submitted to the cluster not listed there will not be isolated. Note that there is no way for a user of Storm to affect their isolation settings – this is only allowed by the administrator of the cluster (this is very much intentional).</p>
 
 <p>The isolation scheduler solves the multi-tenancy problem – avoiding resource contention between topologies – by providing full isolation between topologies. The intention is that &quot;productionized&quot; topologies should be listed in the isolation config, and test or in-development topologies should not. The remaining machines on the cluster serve the dual role of failover for isolated topologies and for running the non-isolated topologies.</p>
-
+</div>
 
 
 	          </div>
diff --git "a/content/releases/2.0.0-SNAPSHOT/Storm-multi-language-protocol-\050versions-0.7.0-and-below\051.html" "b/content/releases/2.0.0-SNAPSHOT/Storm-multi-language-protocol-\050versions-0.7.0-and-below\051.html"
index 53222cd..30c07b4 100644
--- "a/content/releases/2.0.0-SNAPSHOT/Storm-multi-language-protocol-\050versions-0.7.0-and-below\051.html"
+++ "b/content/releases/2.0.0-SNAPSHOT/Storm-multi-language-protocol-\050versions-0.7.0-and-below\051.html"
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains the multilang protocol for versions 0.7.0 and below. The protocol changed in version 0.7.1.</p>
+<div class="documentation-content"><p>This page explains the multilang protocol for versions 0.7.0 and below. The protocol changed in version 0.7.1.</p>
 
 <h1 id="storm-multi-language-protocol">Storm Multi-Language Protocol</h1>
 
@@ -253,7 +253,7 @@
 <p>Note: This command is not JSON encoded; it is sent as a simple string.</p>
 
 <p>This lets the parent bolt know that the script has finished processing and is ready for another tuple.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Stream-API.html b/content/releases/2.0.0-SNAPSHOT/Stream-API.html
index 630b79e..082c1bb 100644
--- a/content/releases/2.0.0-SNAPSHOT/Stream-API.html
+++ b/content/releases/2.0.0-SNAPSHOT/Stream-API.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="#concepts">Concepts</a>
 
 <ul>
@@ -565,7 +565,7 @@
   <span class="n">StormSubmitter</span><span class="o">.</span><span class="na">submitTopologyWithProgressBar</span><span class="o">(</span><span class="s">"topology-name"</span><span class="o">,</span> <span class="n">config</span><span class="o">,</span> <span class="n">builder</span><span class="o">.</span><span class="na">build</span><span class="o">());</span>
 </code></pre></div>
 <p>More examples are available under <a href="../examples/storm-starter/src/jvm/org/apache/storm/starter/streams">storm-starter</a> which will help you get started.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Structure-of-the-codebase.html b/content/releases/2.0.0-SNAPSHOT/Structure-of-the-codebase.html
index 157e82b..4a5be6e 100644
--- a/content/releases/2.0.0-SNAPSHOT/Structure-of-the-codebase.html
+++ b/content/releases/2.0.0-SNAPSHOT/Structure-of-the-codebase.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>There are three distinct layers to Storm&#39;s codebase.</p>
+<div class="documentation-content"><p>There are three distinct layers to Storm&#39;s codebase.</p>
 
 <p>First, Storm was designed from the very beginning to be compatible with multiple languages. Nimbus is a Thrift service and topologies are defined as Thrift structures. The usage of Thrift allows Storm to be used from any language.</p>
 
@@ -276,7 +276,7 @@
 <p><a href="http://github.com/apache/storm/blob/master/storm-clojure/src/clj/org/apache/storm/testing.clj">org.apache.storm.testing</a>: Implementation of facilities used to test Storm topologies. Includes time simulation, <code>complete-topology</code> for running a fixed set of tuples through a topology and capturing the output, tracker topologies for having fine grained control over detecting when a cluster is &quot;idle&quot;, and other utilities.</p>
 
 <p><a href="http://github.com/apache/storm/blob/master/storm-core/src/clj/org/apache/storm/ui">org.apache.storm.ui.*</a>: Implementation of Storm UI. Completely independent from rest of code base and uses the Nimbus Thrift API to get data.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Support-for-non-java-languages.html b/content/releases/2.0.0-SNAPSHOT/Support-for-non-java-languages.html
index a105966..49ada0a 100644
--- a/content/releases/2.0.0-SNAPSHOT/Support-for-non-java-languages.html
+++ b/content/releases/2.0.0-SNAPSHOT/Support-for-non-java-languages.html
@@ -144,13 +144,13 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="https://github.com/velvia/ScalaStorm">Scala DSL</a></li>
 <li><a href="https://github.com/colinsurprenant/storm-jruby">JRuby DSL</a></li>
 <li><a href="Clojure-DSL.html">Clojure DSL</a></li>
 <li><a href="https://github.com/gphat/io-storm">io-storm</a>: Perl multilang adapter</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Transactional-topologies.html b/content/releases/2.0.0-SNAPSHOT/Transactional-topologies.html
index 02f5400..83708e3 100644
--- a/content/releases/2.0.0-SNAPSHOT/Transactional-topologies.html
+++ b/content/releases/2.0.0-SNAPSHOT/Transactional-topologies.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p><strong>NOTE</strong>: Transactional topologies have been deprecated -- use the <a href="Trident-tutorial.html">Trident</a> framework instead.</p>
+<div class="documentation-content"><p><strong>NOTE</strong>: Transactional topologies have been deprecated -- use the <a href="Trident-tutorial.html">Trident</a> framework instead.</p>
 
 <hr>
 
@@ -510,7 +510,7 @@
 <li>so it can&#39;t call finishBatch until it&#39;s received all tuples from all subscribed components AND it&#39;s received the commit stream tuple (for committers). This ensures that it can&#39;t prematurely call finishBatch</li>
 </ul></li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Trident-API-Overview.html b/content/releases/2.0.0-SNAPSHOT/Trident-API-Overview.html
index f150428..4b74eff 100644
--- a/content/releases/2.0.0-SNAPSHOT/Trident-API-Overview.html
+++ b/content/releases/2.0.0-SNAPSHOT/Trident-API-Overview.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The core data model in Trident is the &quot;Stream&quot;, processed as a series of batches. A stream is partitioned among the nodes in the cluster, and operations applied to a stream are applied in parallel across each partition.</p>
+<div class="documentation-content"><p>The core data model in Trident is the &quot;Stream&quot;, processed as a series of batches. A stream is partitioned among the nodes in the cluster, and operations applied to a stream are applied in parallel across each partition.</p>
 
 <p>There are five kinds of operations in Trident:</p>
 
@@ -669,7 +669,7 @@
 <p>You might be wondering – how do you do something like a &quot;windowed join&quot;, where tuples from one side of the join are joined against the last hour of tuples from the other side of the join?</p>
 
 <p>To do this, you would make use of partitionPersist and stateQuery. The last hour of tuples from one side of the join would be stored and rotated in a source of state, keyed by the join field. Then the stateQuery would do lookups by the join field to perform the &quot;join&quot;.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Trident-RAS-API.html b/content/releases/2.0.0-SNAPSHOT/Trident-RAS-API.html
index c48ea3f..8a29333 100644
--- a/content/releases/2.0.0-SNAPSHOT/Trident-RAS-API.html
+++ b/content/releases/2.0.0-SNAPSHOT/Trident-RAS-API.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="trident-ras-api">Trident RAS API</h2>
+<div class="documentation-content"><h2 id="trident-ras-api">Trident RAS API</h2>
 
 <p>The Trident RAS (Resource Aware Scheduler) API provides a mechanism to allow users to specify the resource consumption of a Trident topology. The API looks exactly like the base RAS API, only it is called on Trident Streams instead of Bolts and Spouts.</p>
 
@@ -192,7 +192,7 @@
 <p>Resource declarations may be called after any operation. The operations without explicit resources will get the defaults. If you choose to set resources for only some operations, defaults must be declared, or topology submission will fail.
 Resource declarations have the same <em>boundaries</em> as parallelism hints. They don&#39;t cross any groupings, shufflings, or any other kind of repartitioning.
 Resources are declared per operation, but get combined within boundaries.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Trident-spouts.html b/content/releases/2.0.0-SNAPSHOT/Trident-spouts.html
index 75ef12b..5ee9589 100644
--- a/content/releases/2.0.0-SNAPSHOT/Trident-spouts.html
+++ b/content/releases/2.0.0-SNAPSHOT/Trident-spouts.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="trident-spouts">Trident spouts</h1>
+<div class="documentation-content"><h1 id="trident-spouts">Trident spouts</h1>
 
 <p>Like in the vanilla Storm API, spouts are the source of streams in a Trident topology. On top of the vanilla Storm spouts, Trident exposes additional APIs for more sophisticated spouts.</p>
 
@@ -182,7 +182,7 @@
 </ol>
 
 <p>And, as mentioned at the beginning of this tutorial, you can use regular IRichSpouts as well.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Trident-state.html b/content/releases/2.0.0-SNAPSHOT/Trident-state.html
index 05479b6..dd816e1 100644
--- a/content/releases/2.0.0-SNAPSHOT/Trident-state.html
+++ b/content/releases/2.0.0-SNAPSHOT/Trident-state.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Trident has first-class abstractions for reading from and writing to stateful sources. The state can either be internal to the topology – e.g., kept in-memory and backed by HDFS – or externally stored in a database like Memcached or Cassandra. There&#39;s no difference in the Trident API for either case.</p>
+<div class="documentation-content"><p>Trident has first-class abstractions for reading from and writing to stateful sources. The state can either be internal to the topology – e.g., kept in-memory and backed by HDFS – or externally stored in a database like Memcached or Cassandra. There&#39;s no difference in the Trident API for either case.</p>
 
 <p>Trident manages state in a fault-tolerant way so that state updates are idempotent in the face of retries and failures. This lets you reason about Trident topologies as if each message were processed exactly-once.</p>
 
@@ -415,7 +415,7 @@
 <p>Finally, Trident provides the <a href="http://github.com/apache/storm/blob/master/storm-client/src/jvm/org/apache/storm/trident/state/map/SnapshottableMap.java">SnapshottableMap</a> class that turns a MapState into a Snapshottable object, by storing global aggregations into a fixed key.</p>
 
 <p>Take a look at the implementation of <a href="https://github.com/nathanmarz/trident-memcached/blob/master/src/jvm/trident/memcached/MemcachedState.java">MemcachedState</a> to see how all these utilities can be put together to make a high performance MapState implementation. MemcachedState allows you to choose between opaque transactional, transactional, and non-transactional semantics.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Trident-tutorial.html b/content/releases/2.0.0-SNAPSHOT/Trident-tutorial.html
index 54c0c45..010901c 100644
--- a/content/releases/2.0.0-SNAPSHOT/Trident-tutorial.html
+++ b/content/releases/2.0.0-SNAPSHOT/Trident-tutorial.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Trident is a high-level abstraction for doing realtime computing on top of Storm. It allows you to seamlessly intermix high throughput (millions of messages per second), stateful stream processing with low latency distributed querying. If you&#39;re familiar with high level batch processing tools like Pig or Cascading, the concepts of Trident will be very familiar – Trident has joins, aggregations, grouping, functions, and filters. In addition to these, Trident adds primitives for doing stateful, incremental processing on top of any database or persistence store. Trident has consistent, exactly-once semantics, so it is easy to reason about Trident topologies.</p>
+<div class="documentation-content"><p>Trident is a high-level abstraction for doing realtime computing on top of Storm. It allows you to seamlessly intermix high throughput (millions of messages per second), stateful stream processing with low latency distributed querying. If you&#39;re familiar with high level batch processing tools like Pig or Cascading, the concepts of Trident will be very familiar – Trident has joins, aggregations, grouping, functions, and filters. In addition to these, Trident adds primitives for doing stateful, incremental processing on top of any database or persistence store. Trident has consistent, exactly-once semantics, so it is easy to reason about Trident topologies.</p>
 
 <h2 id="illustrative-example">Illustrative example</h2>
 
@@ -356,7 +356,7 @@
 <h2 id="conclusion">Conclusion</h2>
 
 <p>Trident makes realtime computation elegant. You&#39;ve seen how high throughput stream processing, state manipulation, and low-latency querying can be seamlessly intermixed via Trident&#39;s API. Trident lets you express your realtime computations in a natural way while still getting maximal performance.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Troubleshooting.html b/content/releases/2.0.0-SNAPSHOT/Troubleshooting.html
index 0064fe9..cbd7795 100644
--- a/content/releases/2.0.0-SNAPSHOT/Troubleshooting.html
+++ b/content/releases/2.0.0-SNAPSHOT/Troubleshooting.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page lists issues people have run into when using Storm along with their solutions.</p>
+<div class="documentation-content"><p>This page lists issues people have run into when using Storm along with their solutions.</p>
 
 <h3 id="worker-processes-are-crashing-on-startup-with-no-stack-trace">Worker processes are crashing on startup with no stack trace</h3>
 
@@ -279,7 +279,7 @@
 <ul>
 <li>This means that you&#39;re emitting a mutable object as an output tuple. Everything you emit into the output collector must be immutable. What&#39;s happening is that your bolt is modifying the object while it is being serialized to be sent over the network.</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Tutorial.html b/content/releases/2.0.0-SNAPSHOT/Tutorial.html
index e742464..f173466 100644
--- a/content/releases/2.0.0-SNAPSHOT/Tutorial.html
+++ b/content/releases/2.0.0-SNAPSHOT/Tutorial.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>In this tutorial, you&#39;ll learn how to create Storm topologies and deploy them to a Storm cluster. Java will be the main language used, but a few examples will use Python to illustrate Storm&#39;s multi-language capabilities.</p>
+<div class="documentation-content"><p>In this tutorial, you&#39;ll learn how to create Storm topologies and deploy them to a Storm cluster. Java will be the main language used, but a few examples will use Python to illustrate Storm&#39;s multi-language capabilities.</p>
 
 <h2 id="preliminaries">Preliminaries</h2>
 
@@ -402,7 +402,7 @@
 <h2 id="conclusion">Conclusion</h2>
 
 <p>This tutorial gave a broad overview of developing, testing, and deploying Storm topologies. The rest of the documentation dives deeper into all the aspects of using Storm.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Understanding-the-parallelism-of-a-Storm-topology.html b/content/releases/2.0.0-SNAPSHOT/Understanding-the-parallelism-of-a-Storm-topology.html
index a63e6b1..76054be 100644
--- a/content/releases/2.0.0-SNAPSHOT/Understanding-the-parallelism-of-a-Storm-topology.html
+++ b/content/releases/2.0.0-SNAPSHOT/Understanding-the-parallelism-of-a-Storm-topology.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="what-makes-a-running-topology-worker-processes-executors-and-tasks">What makes a running topology: worker processes, executors and tasks</h2>
+<div class="documentation-content"><h2 id="what-makes-a-running-topology-worker-processes-executors-and-tasks">What makes a running topology: worker processes, executors and tasks</h2>
 
 <p>Storm distinguishes between the following three main entities that are used to actually run a topology in a Storm cluster:</p>
 
@@ -274,7 +274,7 @@
 <li><a href="Tutorial.html">Tutorial</a></li>
 <li><a href="javadocs/">Storm API documentation</a>, most notably the class <code>Config</code></li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Using-non-JVM-languages-with-Storm.html b/content/releases/2.0.0-SNAPSHOT/Using-non-JVM-languages-with-Storm.html
index e58522d..91972de 100644
--- a/content/releases/2.0.0-SNAPSHOT/Using-non-JVM-languages-with-Storm.html
+++ b/content/releases/2.0.0-SNAPSHOT/Using-non-JVM-languages-with-Storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li>two pieces: creating topologies and implementing spouts and bolts in other languages</li>
 <li>creating topologies in another language is easy since topologies are just thrift structures (link to storm.thrift)</li>
 <li>implementing spouts and bolts in another language is called a &quot;multilang components&quot; or &quot;shelling&quot;
@@ -198,7 +198,7 @@
 <p>Then you can connect to Nimbus using the Thrift API and submit the topology, passing {uploaded-jar-location} into the submitTopology method. For reference, here&#39;s the submitTopology definition:</p>
 <div class="highlight"><pre><code class="language-" data-lang="">void submitTopology(1: string name, 2: string uploadedJarLocation, 3: string jsonConf, 4: StormTopology topology)
     throws (1: AlreadyAliveException e, 2: InvalidTopologyException ite);
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/Windowing.html b/content/releases/2.0.0-SNAPSHOT/Windowing.html
index 4d6c90b..24fd4ef 100644
--- a/content/releases/2.0.0-SNAPSHOT/Windowing.html
+++ b/content/releases/2.0.0-SNAPSHOT/Windowing.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm core has support for processing a group of tuples that falls within a window. Windows are specified with the 
+<div class="documentation-content"><p>Storm core has support for processing a group of tuples that falls within a window. Windows are specified with the 
 following two parameters:</p>
 
 <ol>
@@ -479,7 +479,7 @@
 
 <p>For more details take a look at the sample topology in storm-starter <a href="../examples/storm-starter/src/jvm/org/apache/storm/starter/PersistentWindowingTopology.java">PersistentWindowingTopology</a>
 which will help you get started.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/cgroups_in_storm.html b/content/releases/2.0.0-SNAPSHOT/cgroups_in_storm.html
index e243ecb..a0d4b23 100644
--- a/content/releases/2.0.0-SNAPSHOT/cgroups_in_storm.html
+++ b/content/releases/2.0.0-SNAPSHOT/cgroups_in_storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="cgroups-in-storm">CGroups in Storm</h1>
+<div class="documentation-content"><h1 id="cgroups-in-storm">CGroups in Storm</h1>
 
 <p>CGroups are used by Storm to limit the resource usage of workers to guarantee fairness and QOS.  </p>
 
@@ -315,7 +315,7 @@
 <h2 id="future-work">Future Work</h2>
 
 <p>There is a lot of ongoing work on adding elasticity to Storm.  Eventually we hope to be able to do all of the above analysis for you and grow/shrink your topology on demand.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/distcache-blobstore.html b/content/releases/2.0.0-SNAPSHOT/distcache-blobstore.html
index a1769be..9af143f 100644
--- a/content/releases/2.0.0-SNAPSHOT/distcache-blobstore.html
+++ b/content/releases/2.0.0-SNAPSHOT/distcache-blobstore.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="storm-distributed-cache-api">Storm Distributed Cache API</h1>
+<div class="documentation-content"><h1 id="storm-distributed-cache-api">Storm Distributed Cache API</h1>
 
 <p>The distributed cache feature in storm is used to efficiently distribute files
 (or blobs, which is the equivalent terminology for a file in the distributed
@@ -799,7 +799,7 @@
  2: required string session;
  3: optional i64 data_size;
 }
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/dynamic-log-level-settings.html b/content/releases/2.0.0-SNAPSHOT/dynamic-log-level-settings.html
index edb2513..73baee2 100644
--- a/content/releases/2.0.0-SNAPSHOT/dynamic-log-level-settings.html
+++ b/content/releases/2.0.0-SNAPSHOT/dynamic-log-level-settings.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>We have added the ability to set log level settings for a running topology using the Storm UI and the Storm CLI. </p>
+<div class="documentation-content"><p>We have added the ability to set log level settings for a running topology using the Storm UI and the Storm CLI. </p>
 
 <p>The log level settings apply the same way as you&#39;d expect from log4j, as all we are doing is telling log4j to set the level of the logger you provide. If you set the log level of a parent logger, the child loggers start using that level (unless they already have a more restrictive level). A timeout can optionally be provided (except for DEBUG mode, where it’s required in the UI) if workers should reset log levels automatically.</p>
 
@@ -179,7 +179,7 @@
 <p><code>./bin/storm set_log_level my_topology -r ROOT</code></p>
 
 <p>Clears the ROOT logger dynamic log level, resetting it to its original value.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/dynamic-worker-profiling.html b/content/releases/2.0.0-SNAPSHOT/dynamic-worker-profiling.html
index fb18c59..9f9eb4a 100644
--- a/content/releases/2.0.0-SNAPSHOT/dynamic-worker-profiling.html
+++ b/content/releases/2.0.0-SNAPSHOT/dynamic-worker-profiling.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>In multi-tenant mode, Storm launches long-running JVMs across the cluster without giving users sudo access. Self-service Java heap dumps, jstacks and Java profiling of these JVMs improve users&#39; ability to analyze and debug issues while actively monitoring them.</p>
+<div class="documentation-content"><p>In multi-tenant mode, Storm launches long-running JVMs across the cluster without giving users sudo access. Self-service Java heap dumps, jstacks and Java profiling of these JVMs improve users&#39; ability to analyze and debug issues while actively monitoring them.</p>
 
 <p>The Storm dynamic profiler lets you dynamically take heap dumps, jprofile recordings or jstacks for a worker JVM running on a stock cluster. It lets users download these dumps from the browser and analyze them with their favorite tools. The UI component page lists the workers for the component along with action buttons. The logviewer lets you download the dumps generated by these actions. Please see the screenshots for more information.</p>
 
@@ -171,7 +171,7 @@
 <h2 id="configuration">Configuration</h2>
 
 <p>The &quot;worker.profiler.command&quot; setting can be configured to point to a specific pluggable profiler or heap-dump command. &quot;worker.profiler.enabled&quot; can be set to false if the plugin is not available or the JDK does not support Jprofile flight recording, so that the worker JVM options will not include &quot;worker.profiler.childopts&quot;. To use a different profiler plugin, change these configurations.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/flux.html b/content/releases/2.0.0-SNAPSHOT/flux.html
index 3de3d98..879519c 100644
--- a/content/releases/2.0.0-SNAPSHOT/flux.html
+++ b/content/releases/2.0.0-SNAPSHOT/flux.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>A framework for creating and deploying Apache Storm streaming computations with less friction.</p>
+<div class="documentation-content"><p>A framework for creating and deploying Apache Storm streaming computations with less friction.</p>
 
 <h2 id="definition">Definition</h2>
 
@@ -886,7 +886,7 @@
   <span class="na">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">org.apache.storm.flux.test.TridentTopologySource"</span>
   <span class="c1"># Flux will look for "getTopology", this will override that.</span>
   <span class="na">methodName</span><span class="pi">:</span> <span class="s2">"</span><span class="s">getTopologyWithDifferentMethodName"</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/index.html b/content/releases/2.0.0-SNAPSHOT/index.html
index 18c8d5d..5ee043b 100644
--- a/content/releases/2.0.0-SNAPSHOT/index.html
+++ b/content/releases/2.0.0-SNAPSHOT/index.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="basics-of-storm">Basics of Storm</h3>
+<div class="documentation-content"><h3 id="basics-of-storm">Basics of Storm</h3>
 
 <ul>
 <li><a href="javadocs/index.html">Javadoc</a></li>
@@ -289,7 +289,7 @@
 <li><a href="Implementation-docs.html">Implementation docs</a></li>
 <li><a href="storm-metricstore.html">Storm Metricstore</a></li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/metrics_v2.html b/content/releases/2.0.0-SNAPSHOT/metrics_v2.html
index 04c988a..d345b35 100644
--- a/content/releases/2.0.0-SNAPSHOT/metrics_v2.html
+++ b/content/releases/2.0.0-SNAPSHOT/metrics_v2.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Apache Storm version 1.2 introduced a new metrics system for reporting
+<div class="documentation-content"><p>Apache Storm version 1.2 introduced a new metrics system for reporting
 internal statistics (e.g. acked, failed, emitted, transferred, disruptor queue metrics, etc.) as well as a 
 new API for user defined metrics.</p>
 
@@ -274,7 +274,7 @@
     <span class="kt">boolean</span> <span class="nf">matches</span><span class="o">(</span><span class="n">String</span> <span class="n">name</span><span class="o">,</span> <span class="n">Metric</span> <span class="n">metric</span><span class="o">);</span>
 
 <span class="o">}</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/nimbus-ha-design.html b/content/releases/2.0.0-SNAPSHOT/nimbus-ha-design.html
index 0310222..17a9a28 100644
--- a/content/releases/2.0.0-SNAPSHOT/nimbus-ha-design.html
+++ b/content/releases/2.0.0-SNAPSHOT/nimbus-ha-design.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="problem-statement">Problem Statement:</h2>
+<div class="documentation-content"><h2 id="problem-statement">Problem Statement:</h2>
 
 <p>Currently the Storm master, aka Nimbus, is a process that runs on a single machine under supervision. In most cases a 
 Nimbus failure is transient and it is restarted by the supervisor. However, sometimes when disks fail and networks 
@@ -361,7 +361,7 @@
 <p>Note: even though all Nimbus hosts have ZooKeeper watchers so they are notified as soon as a new topology is available for code
 download, the callback rarely results in a code download. In practice we have observed that the desired replication is only achieved once the background thread runs. 
 So you should expect your topology submission time to be somewhere between 0 and (2 * nimbus.code.sync.freq.secs) for any nimbus.min.replication.count &gt; 1.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-cassandra.html b/content/releases/2.0.0-SNAPSHOT/storm-cassandra.html
index 162bff7..76a35fb 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-cassandra.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-cassandra.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="bolt-api-implementation-for-apache-cassandra">Bolt API implementation for Apache Cassandra</h3>
+<div class="documentation-content"><h3 id="bolt-api-implementation-for-apache-cassandra">Bolt API implementation for Apache Cassandra</h3>
 
 <p>This library provides a core Storm bolt on top of Apache Cassandra.
 It provides a simple DSL to map a Storm <em>Tuple</em> to a Cassandra Query Language <em>Statement</em>.</p>
@@ -373,7 +373,7 @@
         <span class="n">CassandraStateFactory</span> <span class="n">selectWeatherStationStateFactory</span> <span class="o">=</span> <span class="n">getSelectWeatherStationStateFactory</span><span class="o">();</span>
         <span class="n">TridentState</span> <span class="n">selectState</span> <span class="o">=</span> <span class="n">topology</span><span class="o">.</span><span class="na">newStaticState</span><span class="o">(</span><span class="n">selectWeatherStationStateFactory</span><span class="o">);</span>
         <span class="n">stream</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">stateQuery</span><span class="o">(</span><span class="n">selectState</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"weather_station_id"</span><span class="o">),</span> <span class="k">new</span> <span class="n">CassandraQuery</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"name"</span><span class="o">));</span>         
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-elasticsearch.html b/content/releases/2.0.0-SNAPSHOT/storm-elasticsearch.html
index 26027ed..bba5870 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-elasticsearch.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-elasticsearch.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="storm-elasticsearch-bolt-trident-state">Storm Elasticsearch Bolt &amp; Trident State</h1>
+<div class="documentation-content"><h1 id="storm-elasticsearch-bolt-trident-state">Storm Elasticsearch Bolt &amp; Trident State</h1>
 
 <p>EsIndexBolt, EsPercolateBolt and EsState allow users to stream data from Storm into Elasticsearch directly.
   For a detailed description, please refer to the following.</p>
@@ -245,7 +245,7 @@
 <li>Sriharsha Chintalapani (<a href="https://github.com/harshach">@harshach</a>)</li>
 <li>Jungtaek Lim (<a href="https://github.com/HeartSaVioR">@HeartSaVioR</a>)</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-eventhubs.html b/content/releases/2.0.0-SNAPSHOT/storm-eventhubs.html
index 7f7720d..4fe5c12 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-eventhubs.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-eventhubs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm spout and bolt implementations for Microsoft Azure Eventhubs</p>
+<div class="documentation-content"><p>Storm spout and bolt implementations for Microsoft Azure Eventhubs</p>
 
 <h3 id="build">build</h3>
 <div class="highlight"><pre><code class="language-" data-lang="">mvn clean package
@@ -178,7 +178,7 @@
 
 <h3 id="windows-azure-eventhubs">Windows Azure Eventhubs</h3>
 <div class="highlight"><pre><code class="language-" data-lang="">http://azure.microsoft.com/en-us/services/event-hubs/
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-hbase.html b/content/releases/2.0.0-SNAPSHOT/storm-hbase.html
index b88d9bc..8337c96 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-hbase.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-hbase.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm/Trident integration for <a href="https://hbase.apache.org">Apache HBase</a></p>
+<div class="documentation-content"><p>Storm/Trident integration for <a href="https://hbase.apache.org">Apache HBase</a></p>
 
 <h2 id="usage">Usage</h2>
 
@@ -395,7 +395,7 @@
         <span class="n">StormSubmitter</span><span class="o">.</span><span class="na">submitTopology</span><span class="o">(</span><span class="n">topoName</span><span class="o">,</span> <span class="n">config</span><span class="o">,</span> <span class="n">builder</span><span class="o">.</span><span class="na">createTopology</span><span class="o">());</span>
     <span class="o">}</span>
 <span class="o">}</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-hdfs.html b/content/releases/2.0.0-SNAPSHOT/storm-hdfs.html
index 6219a43..bbe6c8c 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-hdfs.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-hdfs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm components for interacting with HDFS file systems</p>
+<div class="documentation-content"><p>Storm components for interacting with HDFS file systems</p>
 
 <h1 id="hdfs-bolt">HDFS Bolt</h1>
 
@@ -745,7 +745,7 @@
 </tbody></table>
 
 <hr>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-hive.html b/content/releases/2.0.0-SNAPSHOT/storm-hive.html
index 0b17b56..9f1ab32 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-hive.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-hive.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Hive offers a streaming API that allows data to be written continuously into Hive. The incoming data 
+<div class="documentation-content"><p>Hive offers a streaming API that allows data to be written continuously into Hive. The incoming data 
  can be committed continuously, in small batches of records, into an existing Hive partition or table. Once the data
  is committed it is immediately visible to all Hive queries. More info on the Hive Streaming API: 
  <a href="https://cwiki.apache.org/confluence/display/Hive/Streaming+Data+Ingest">https://cwiki.apache.org/confluence/display/Hive/Streaming+Data+Ingest</a></p>
@@ -303,7 +303,7 @@
 
    <span class="n">StateFactory</span> <span class="n">factory</span> <span class="o">=</span> <span class="k">new</span> <span class="n">HiveStateFactory</span><span class="o">().</span><span class="na">withOptions</span><span class="o">(</span><span class="n">hiveOptions</span><span class="o">);</span>
    <span class="n">TridentState</span> <span class="n">state</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">partitionPersist</span><span class="o">(</span><span class="n">factory</span><span class="o">,</span> <span class="n">hiveFields</span><span class="o">,</span> <span class="k">new</span> <span class="n">HiveUpdater</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">());</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-jdbc.html b/content/releases/2.0.0-SNAPSHOT/storm-jdbc.html
index 38f940c..60a6072 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-jdbc.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-jdbc.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm/Trident integration for JDBC. This package includes the core bolts and trident states that allow a storm topology
+<div class="documentation-content"><p>Storm/Trident integration for JDBC. This package includes the core bolts and trident states that allow a storm topology
 to either insert storm tuples in a database table or to execute select queries against a database and enrich tuples 
 in a storm topology.</p>
 
@@ -399,7 +399,7 @@
 <div class="highlight"><pre><code class="language-" data-lang="">select * from user;
 </code></pre></div>
 <p>For trident you can view <code>org.apache.storm.jdbc.topology.UserPersistanceTridentTopology</code>.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-jms-example.html b/content/releases/2.0.0-SNAPSHOT/storm-jms-example.html
index 71148a3..dff0444 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-jms-example.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-jms-example.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="example-storm-jms-topology">Example Storm JMS Topology</h2>
+<div class="documentation-content"><h2 id="example-storm-jms-topology">Example Storm JMS Topology</h2>
 
 <p>The storm-jms source code contains an example project (in the &quot;examples&quot; directory) 
 that builds a multi-bolt/multi-spout topology (depicted below) using the JMS Spout and JMS Bolt components.</p>
@@ -248,7 +248,7 @@
 DEBUG (backtype.storm.contrib.jms.spout.JmsSpout:251) - JMS Message acked: ID:budreau.home-60117-1321735025796-0:0:1:1:1
 </code></pre></div>
 <p>The topology will run for 2 minutes, then gracefully shut down.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-jms-spring.html b/content/releases/2.0.0-SNAPSHOT/storm-jms-spring.html
index 1156492..9867bea 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-jms-spring.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-jms-spring.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="connecting-to-jms-using-springs-jms-support">Connecting to JMS Using Spring&#39;s JMS Support</h3>
+<div class="documentation-content"><h3 id="connecting-to-jms-using-springs-jms-support">Connecting to JMS Using Spring&#39;s JMS Support</h3>
 
 <p>Create a Spring applicationContext.xml file that defines one or more destination (topic/queue) beans, as well as a connection factory.</p>
 <div class="highlight"><pre><code class="language-" data-lang=""><span class="cp">&lt;?xml version="1.0" encoding="UTF-8"?&gt;</span>
@@ -163,7 +163,7 @@
         <span class="na">brokerURL=</span><span class="s">"tcp://localhost:61616"</span> <span class="nt">/&gt;</span>
 
 <span class="nt">&lt;/beans&gt;</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-jms.html b/content/releases/2.0.0-SNAPSHOT/storm-jms.html
index c32754d..d89c265 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-jms.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-jms.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="about-storm-jms">About Storm JMS</h2>
+<div class="documentation-content"><h2 id="about-storm-jms">About Storm JMS</h2>
 
 <p>Storm JMS is a generic framework for integrating JMS messaging within the Storm framework.</p>
 
@@ -169,7 +169,7 @@
 <p><a href="storm-jms-example.html">Example Topology</a></p>
 
 <p><a href="storm-jms-spring.html">Using Spring JMS</a></p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-kafka-client.html b/content/releases/2.0.0-SNAPSHOT/storm-kafka-client.html
index da69dea..8d34721 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-kafka-client.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-kafka-client.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="storm-apache-kafka-integration-using-the-kafka-client-jar">Storm Apache Kafka integration using the kafka-client jar</h1>
+<div class="documentation-content"><h1 id="storm-apache-kafka-integration-using-the-kafka-client-jar">Storm Apache Kafka integration using the kafka-client jar</h1>
 
 <p>This includes the new Apache Kafka consumer API.</p>
 
@@ -530,7 +530,7 @@
 <td><code>UNCOMMITTED_LATEST</code></td>
 </tr>
 </tbody></table>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-kafka.html b/content/releases/2.0.0-SNAPSHOT/storm-kafka.html
index f55d072..ce5ad77 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-kafka.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-kafka.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Provides core Storm and Trident spout implementations for consuming data from Apache Kafka 0.8.x.</p>
+<div class="documentation-content"><p>Provides core Storm and Trident spout implementations for consuming data from Apache Kafka 0.8.x.</p>
 
 <h2 id="spouts">Spouts</h2>
 
@@ -504,7 +504,7 @@
 <ul>
 <li>P. Taylor Goetz (<a href="mailto:ptgoetz@apache.org">ptgoetz@apache.org</a>)</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-metrics-profiling-internal-actions.html b/content/releases/2.0.0-SNAPSHOT/storm-metrics-profiling-internal-actions.html
index e0b7868..5f19b6a 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-metrics-profiling-internal-actions.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-metrics-profiling-internal-actions.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="storm-metrics-for-profiling-various-storm-internal-actions">Storm Metrics for Profiling Various Storm Internal Actions</h1>
+<div class="documentation-content"><h1 id="storm-metrics-for-profiling-various-storm-internal-actions">Storm Metrics for Profiling Various Storm Internal Actions</h1>
 
 <p>With the addition of these metrics, Storm users can collect, view, and analyze the performance of various internal actions.  The actions that are profiled include Thrift RPC calls and HTTP requests within the Storm daemons. For instance, in the Storm Nimbus daemon, the following thrift calls defined in the Nimbus$Iface are profiled:</p>
 
@@ -213,7 +213,7 @@
 <p>For more information about io.dropwizard.metrics and metrics-clojure packages please reference their original documentation:
 - <a href="https://dropwizard.github.io/metrics/3.1.0/">https://dropwizard.github.io/metrics/3.1.0/</a>
 - <a href="http://metrics-clojure.readthedocs.org/en/latest/">http://metrics-clojure.readthedocs.org/en/latest/</a></p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-metricstore.html b/content/releases/2.0.0-SNAPSHOT/storm-metricstore.html
index 1897532..94fa6d0 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-metricstore.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-metricstore.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>A metric store (<a href="http://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/metricstore/MetricStore.java"><code>MetricStore</code></a>) interface was added 
+<div class="documentation-content"><p>A metric store (<a href="http://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/metricstore/MetricStore.java"><code>MetricStore</code></a>) interface was added 
 to Nimbus to allow storing metric information (<a href="http://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/metricstore/Metric.java"><code>Metric</code></a>) 
 to a database.  The default implementation 
 (<a href="http://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/metricstore/rocksdb/RocksDbStore.java"><code>RocksDbStore</code></a>) uses RocksDB, 
@@ -331,7 +331,7 @@
 <td>The sum of the metric values</td>
 </tr>
 </tbody></table>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-mongodb.html b/content/releases/2.0.0-SNAPSHOT/storm-mongodb.html
index 6ce63ed..bc74d28 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-mongodb.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-mongodb.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm/Trident integration for <a href="https://www.mongodb.org/">MongoDB</a>. This package includes the core bolts and trident states that allow a storm topology to either insert storm tuples into a database collection or to execute update queries against a database collection.</p>
+<div class="documentation-content"><p>Storm/Trident integration for <a href="https://www.mongodb.org/">MongoDB</a>. This package includes the core bolts and trident states that allow a storm topology to either insert storm tuples into a database collection or to execute update queries against a database collection.</p>
 
 <h2 id="insert-into-database">Insert into Database</h2>
 
@@ -417,7 +417,7 @@
 
         <span class="n">stream</span><span class="o">.</span><span class="na">stateQuery</span><span class="o">(</span><span class="n">state</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"word"</span><span class="o">),</span> <span class="k">new</span> <span class="n">MapGet</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"sum"</span><span class="o">))</span>
                 <span class="o">.</span><span class="na">each</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"word"</span><span class="o">,</span> <span class="s">"sum"</span><span class="o">),</span> <span class="k">new</span> <span class="n">PrintFunction</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">());</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-mqtt.html b/content/releases/2.0.0-SNAPSHOT/storm-mqtt.html
index cf0f260..d68cc5e 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-mqtt.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-mqtt.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="about">About</h2>
+<div class="documentation-content"><h2 id="about">About</h2>
 
 <p>MQTT is a lightweight publish/subscribe protocol frequently used in IoT applications.</p>
 
@@ -483,7 +483,7 @@
 <ul>
 <li>P. Taylor Goetz (<a href="mailto:ptgoetz@apache.org">ptgoetz@apache.org</a>)</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-redis.html b/content/releases/2.0.0-SNAPSHOT/storm-redis.html
index e043595..9f88575 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-redis.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-redis.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm/Trident integration for <a href="http://redis.io/">Redis</a></p>
+<div class="documentation-content"><p>Storm/Trident integration for <a href="http://redis.io/">Redis</a></p>
 
 <p>Storm-redis uses Jedis as its Redis client.</p>
 
@@ -382,7 +382,7 @@
         <span class="n">stream</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">stateQuery</span><span class="o">(</span><span class="n">state</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"word"</span><span class="o">),</span>
                                 <span class="k">new</span> <span class="nf">RedisClusterStateQuerier</span><span class="o">(</span><span class="n">lookupMapper</span><span class="o">),</span>
                                 <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">"columnName"</span><span class="o">,</span><span class="s">"columnValue"</span><span class="o">));</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-solr.html b/content/releases/2.0.0-SNAPSHOT/storm-solr.html
index 90d52e2..0a32c3c 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-solr.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-solr.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm and Trident integration for Apache Solr. This package includes a bolt and a trident state that enable a Storm topology
+<div class="documentation-content"><p>Storm and Trident integration for Apache Solr. This package includes a bolt and a trident state that enable a Storm topology
 to stream the contents of storm tuples to index Solr collections.</p>
 
 <h1 id="index-storm-tuples-into-a-solr-collection">Index Storm tuples into a Solr collection</h1>
@@ -308,7 +308,7 @@
 <p>You can also see the results by opening the Apache Solr UI and pasting the <code>id</code> pattern in the <code>q</code> textbox in the queries page</p>
 
 <p><a href="http://localhost:8983/solr/#/gettingstarted_shard1_replica2/query">http://localhost:8983/solr/#/gettingstarted_shard1_replica2/query</a></p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-sql-example.html b/content/releases/2.0.0-SNAPSHOT/storm-sql-example.html
index 43fa1db..74214d3 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-sql-example.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-sql-example.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page shows how to use Storm SQL through an example of processing Apache logs. 
+<div class="documentation-content"><p>This page shows how to use Storm SQL through an example of processing Apache logs. 
 It is written in a &quot;how-to&quot; style, so you can follow the steps and learn how to utilize Storm SQL step by step. </p>
 
 <h2 id="preparation">Preparation</h2>
@@ -379,7 +379,7 @@
 (You may have noticed that the types of some output fields differ from the output table schema.)</p>
 
 <p>This behavior is subject to change when Storm SQL switches its backend API to a core one (tuple-by-tuple, low-level or high-level).</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-sql-internal.html b/content/releases/2.0.0-SNAPSHOT/storm-sql-internal.html
index 9d32886..5eb132b 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-sql-internal.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-sql-internal.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page describes the design and the implementation of the Storm SQL integration.</p>
+<div class="documentation-content"><p>This page describes the design and the implementation of the Storm SQL integration.</p>
 
 <h2 id="overview">Overview</h2>
 
@@ -195,7 +195,7 @@
 (Use <code>--artifacts</code> if your data source JARs are available in a Maven repository, since it handles transitive dependencies.)</p>
 
 <p>Please refer to the <a href="storm-sql.html">Storm SQL integration</a> page for how to do it.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-sql-reference.html b/content/releases/2.0.0-SNAPSHOT/storm-sql-reference.html
index d4da8ec..3ba85bc 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-sql-reference.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-sql-reference.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm SQL uses Apache Calcite to parse and evaluate the SQL statements. 
+<div class="documentation-content"><p>Storm SQL uses Apache Calcite to parse and evaluate the SQL statements. 
 Storm SQL also adopts the Rex compiler from Calcite, so it is expected to handle the SQL dialect recognized by Calcite&#39;s default SQL parser. </p>
 
 <p>This page is based on the Calcite SQL reference on its website; it removes the areas Storm SQL doesn&#39;t support and adds the areas Storm SQL does.</p>
@@ -2101,7 +2101,7 @@
 
 <p>Also, HDFS configuration files should be provided.
 You can put <code>core-site.xml</code> and <code>hdfs-site.xml</code> into the <code>conf</code> directory in the Storm installation directory.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-sql.html b/content/releases/2.0.0-SNAPSHOT/storm-sql.html
index 403b662..cfec1ae 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-sql.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-sql.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The Storm SQL integration allows users to run SQL queries over streaming data in Storm. Not only the SQL interface allows faster development cycles on streaming analytics, but also opens up the opportunities to unify batch data processing like <a href="///hive.apache.org">Apache Hive</a> and real-time streaming data analytics.</p>
+<div class="documentation-content"><p>The Storm SQL integration allows users to run SQL queries over streaming data in Storm. The SQL interface not only enables faster development cycles for streaming analytics, but also opens up opportunities to unify streaming analytics with batch data processing systems like <a href="https://hive.apache.org">Apache Hive</a>.</p>
 
 <p>At a very high level, StormSQL compiles the SQL queries to <a href="Trident-API-Overview.html">Trident</a> topologies and executes them in Storm clusters. This document describes how to use StormSQL as an end user. If you are interested in more details on the design and implementation of StormSQL, please refer to <a href="storm-sql-internal.html">this</a> page.</p>
 
@@ -284,7 +284,7 @@
 <li>Windowing is yet to be implemented.</li>
 <li>Aggregation and join are not supported (waiting for <code>Streaming SQL</code> to be matured)</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/2.0.0-SNAPSHOT/windows-users-guide.html b/content/releases/2.0.0-SNAPSHOT/windows-users-guide.html
index 9292700..4e03824 100644
--- a/content/releases/2.0.0-SNAPSHOT/windows-users-guide.html
+++ b/content/releases/2.0.0-SNAPSHOT/windows-users-guide.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page guides how to set up environment on Windows for Apache Storm.</p>
+<div class="documentation-content"><p>This page explains how to set up an environment for Apache Storm on Windows.</p>
 
 <h2 id="symbolic-link">Symbolic Link</h2>
 
@@ -172,7 +172,7 @@
 on Nimbus and all of the Supervisor nodes.  This will also disable features that require symlinks.  Currently this is only downloading
 dependent blobs, but that may change in the future.  Some topologies may rely on symbolic links to resources in the current working directory of the worker that are
 created as a convenience, so it is not a 100% backwards-compatible change.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Acking-framework-implementation.html b/content/releases/current/Acking-framework-implementation.html
index a9108de..28ec8bc 100644
--- a/content/releases/current/Acking-framework-implementation.html
+++ b/content/releases/current/Acking-framework-implementation.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p><a href="https://github.com/apache/incubator-storm/blob/46c3ba7/storm-core/src/clj/backtype/storm/daemon/acker.clj#L28">Storm&#39;s acker</a> tracks completion of each tupletree with a checksum hash: each time a tuple is sent, its value is XORed into the checksum, and each time a tuple is acked its value is XORed in again. If all tuples have been successfully acked, the checksum will be zero (the odds that the checksum will be zero otherwise are vanishingly small).</p>
+<div class="documentation-content"><p><a href="https://github.com/apache/incubator-storm/blob/46c3ba7/storm-core/src/clj/backtype/storm/daemon/acker.clj#L28">Storm&#39;s acker</a> tracks completion of each tuple tree with a checksum hash: each time a tuple is sent, its value is XORed into the checksum, and each time a tuple is acked its value is XORed in again. If all tuples have been successfully acked, the checksum will be zero (the odds that the checksum will be zero otherwise are vanishingly small).</p>
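The XOR bookkeeping described above can be sketched in a few lines. This is a toy model (the class and method names are made up, and Storm's actual acker is implemented in Clojure): each tuple in the tree gets a random 64-bit id, which is XORed into the checksum once on emit and once on ack, so a fully acked tree always drives the checksum back to zero.

```java
import java.util.Random;

// Toy model of the acker's XOR checksum; not Storm's actual acker code.
public class AckerChecksumDemo {
    // XOR each tuple id into the checksum on emit and again on ack;
    // returns the final checksum, which is zero iff every tuple was acked.
    static long checksumAfterEmitAndAck(int numTuples) {
        Random rand = new Random(42);
        long checksum = 0L;
        long[] ids = new long[numTuples];
        for (int i = 0; i < numTuples; i++) {   // "emit" each tuple
            ids[i] = rand.nextLong();
            checksum ^= ids[i];
        }
        for (long id : ids) {                   // "ack" them, in any order
            checksum ^= id;
        }
        return checksum;
    }

    public static void main(String[] args) {
        System.out.println(checksumAfterEmitAndAck(5)); // prints 0
    }
}
```

Because XOR is commutative and associative, acks can arrive in any order; any id XORed in exactly twice cancels out.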
 
 <p>You can read a bit more about the <a href="Guaranteeing-message-processing.html#what-is-storms-reliability-api">reliability mechanism</a> elsewhere on the wiki -- this explains the internal details.</p>
 
@@ -180,7 +180,7 @@
 <p>Internally, it holds several HashMaps (&#39;buckets&#39;) of its own, each holding a cohort of records that will expire at the same time.  Let&#39;s call the longest-lived bucket death row, and the most recent the nursery. Whenever a value is <code>.put()</code> to the RotatingMap, it is relocated to the nursery -- and removed from any other bucket it might have been in (effectively resetting its death clock).</p>
 
 <p>Whenever its owner calls <code>.rotate()</code>, the RotatingMap advances each cohort one step further towards expiration. (Typically, Storm objects call rotate on every receipt of a system tick stream tuple.) If there are any key-value pairs in the former death row bucket, the RotatingMap invokes a callback (given in the constructor) for each key-value pair, letting its owner take appropriate action (e.g., failing a tuple).</p>
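The bucket mechanics described in the two paragraphs above can be sketched as follows. This is a simplified, hypothetical sketch (the class name is made up; the real implementation lives in Storm's utils package): `put()` always lands in the newest bucket and removes the key from older buckets, and `rotate()` expires the oldest bucket through a callback.

```java
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
import java.util.function.BiConsumer;

// Minimal sketch of a RotatingMap: a list of buckets where the first bucket
// is the "nursery" and the last is "death row".
public class RotatingMapSketch<K, V> {
    private final LinkedList<HashMap<K, V>> buckets = new LinkedList<>();
    private final BiConsumer<K, V> expireCallback;

    public RotatingMapSketch(int numBuckets, BiConsumer<K, V> expireCallback) {
        for (int i = 0; i < numBuckets; i++) buckets.addFirst(new HashMap<>());
        this.expireCallback = expireCallback;
    }

    // Put into the nursery, removing the key from any older bucket
    // (effectively resetting the entry's death clock).
    public void put(K key, V value) {
        for (HashMap<K, V> bucket : buckets) bucket.remove(key);
        buckets.getFirst().put(key, value);
    }

    // Advance every cohort one step toward expiration; entries in the
    // former death-row bucket are handed to the expiration callback.
    public void rotate() {
        HashMap<K, V> deathRow = buckets.removeLast();
        for (Map.Entry<K, V> e : deathRow.entrySet())
            expireCallback.accept(e.getKey(), e.getValue());
        buckets.addFirst(new HashMap<>());
    }
}
```

With two buckets, a key expires on the second rotate after its most recent `put()`, which is how tick-driven rotation turns bucket count into an approximate timeout.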
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Classpath-handling.html b/content/releases/current/Classpath-handling.html
index f68b86b..634a5ee 100644
--- a/content/releases/current/Classpath-handling.html
+++ b/content/releases/current/Classpath-handling.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="storm-is-an-application-container">Storm is an Application Container</h3>
+<div class="documentation-content"><h3 id="storm-is-an-application-container">Storm is an Application Container</h3>
 
 <p>Storm provides an application container environment, a la Apache Tomcat, which creates potential for classpath conflicts between Storm and your application.  The most common way of using Storm involves submitting an &quot;uber JAR&quot; containing your application code with all of its dependencies bundled in, and then Storm distributes this JAR to Worker nodes.  Then Storm runs your application within a Storm process called a <code>Worker</code> -- thus the JVM&#39;s classpath contains the dependencies of your JAR as well as whatever dependencies the Worker itself has.  So careful handling of classpaths and dependencies is critical for the correct functioning of Storm.</p>
 
@@ -173,7 +173,7 @@
 <p>When the <code>storm.py</code> script launches a <code>java</code> command, it first constructs the classpath from the optional settings mentioned above, as well as including some default locations such as the <code>${STORM_DIR}/</code>, <code>${STORM_DIR}/lib/</code>, <code>${STORM_DIR}/extlib/</code> and <code>${STORM_DIR}/extlib-daemon/</code> directories.  In past releases, Storm would enumerate all JARs in those directories and then explicitly add all of those JARs into the <code>-cp</code> / <code>--classpath</code> argument to the launched <code>java</code> commands.  As such, the classpath would get so long that the <code>java</code> commands could breach the Linux Kernel process table limit of 4096 bytes for recording commands.  That led to truncated commands in <code>ps</code> output, making it hard to operate Storm clusters because you could not easily differentiate the processes nor easily see from <code>ps</code> which port a worker is listening to.</p>
 
 <p>After Storm dropped support for Java 5, this classpath expansion was no longer necessary, because Java 6 supports classpath wildcards. Classpath wildcards allow you to specify a directory ending with a <code>*</code> element, such as <code>foo/bar/*</code>, and the JVM will automatically expand the classpath to include all <code>.jar</code> files in the wildcard directory.  As of <a href="https://issues.apache.org/jira/browse/STORM-2191">STORM-2191</a> Storm just uses classpath wildcards instead of explicitly listing all JARs, thereby shortening all of the commands and making operating Storm clusters a bit easier.</p>
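The difference between the old explicit enumeration and the Java 6+ wildcard approach can be seen in a short shell sketch. The directory layout here is made up purely for illustration, standing in for something like `${STORM_DIR}/lib/`:

```shell
# Hypothetical stand-in for ${STORM_DIR}/lib (paths are made up):
mkdir -p /tmp/cp-demo/lib
touch /tmp/cp-demo/lib/a.jar /tmp/cp-demo/lib/b.jar

# Old approach: enumerate every JAR explicitly. The classpath grows with
# the number of JARs and can push the command past the kernel's limit.
EXPLICIT_CP=$(ls /tmp/cp-demo/lib/*.jar | tr '\n' ':')
echo "explicit: $EXPLICIT_CP"

# Java 6+ approach: a single wildcard entry. The JVM expands it to all
# .jar files in that directory at startup (quote it so the shell doesn't).
WILDCARD_CP="/tmp/cp-demo/lib/*"
echo "wildcard: $WILDCARD_CP"
# java -cp "$WILDCARD_CP" SomeMainClass   # illustrative only
```

The wildcard entry stays the same length no matter how many JARs the directory holds, which is what keeps the launched commands readable in `ps` output.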
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Clojure-DSL.html b/content/releases/current/Clojure-DSL.html
index 89fa383..fd2616a 100644
--- a/content/releases/current/Clojure-DSL.html
+++ b/content/releases/current/Clojure-DSL.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm comes with a Clojure DSL for defining spouts, bolts, and topologies. The Clojure DSL has access to everything the Java API exposes, so if you&#39;re a Clojure user you can code Storm topologies without touching Java at all. The Clojure DSL is defined in the source in the <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/clj/org/apache/storm/clojure.clj">org.apache.storm.clojure</a> namespace.</p>
+<div class="documentation-content"><p>Storm comes with a Clojure DSL for defining spouts, bolts, and topologies. The Clojure DSL has access to everything the Java API exposes, so if you&#39;re a Clojure user you can code Storm topologies without touching Java at all. The Clojure DSL is defined in the source in the <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/clj/org/apache/storm/clojure.clj">org.apache.storm.clojure</a> namespace.</p>
 
 <p>This page outlines all the pieces of the Clojure DSL, including:</p>
 
@@ -371,7 +371,7 @@
 <h3 id="testing-topologies">Testing topologies</h3>
 
 <p><a href="http://www.pixelmachine.org/2011/12/17/Testing-Storm-Topologies.html">This blog post</a> and its <a href="http://www.pixelmachine.org/2011/12/21/Testing-Storm-Topologies-Part-2.html">follow-up</a> give a good overview of Storm&#39;s powerful built-in facilities for testing topologies in Clojure.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Command-line-client.html b/content/releases/current/Command-line-client.html
index 19e9671..b651b35 100644
--- a/content/releases/current/Command-line-client.html
+++ b/content/releases/current/Command-line-client.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page describes all the commands that are possible with the &quot;storm&quot; command line client. To learn how to set up your &quot;storm&quot; client to talk to a remote cluster, follow the instructions in <a href="Setting-up-development-environment.html">Setting up development environment</a>. See <a href="Classpath-handling.html">Classpath handling</a> for details on using external libraries in these commands.</p>
+<div class="documentation-content"><p>This page describes all the commands that are possible with the &quot;storm&quot; command line client. To learn how to set up your &quot;storm&quot; client to talk to a remote cluster, follow the instructions in <a href="Setting-up-development-environment.html">Setting up development environment</a>. See <a href="Classpath-handling.html">Classpath handling</a> for details on using external libraries in these commands.</p>
 
 <p>These commands are:</p>
 
@@ -423,7 +423,7 @@
 <p>Syntax: <code>storm help [command]</code></p>
 
 <p>Prints the help message for a given command, or lists the available commands.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Common-patterns.html b/content/releases/current/Common-patterns.html
index 5460965..5333dd7 100644
--- a/content/releases/current/Common-patterns.html
+++ b/content/releases/current/Common-patterns.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page lists a variety of common patterns in Storm topologies.</p>
+<div class="documentation-content"><p>This page lists a variety of common patterns in Storm topologies.</p>
 
 <ol>
 <li>Batching</li>
@@ -212,7 +212,7 @@
 <p><code>KeyedFairBolt</code> also wraps the bolt containing your logic and makes sure your topology processes multiple DRPC invocations at the same time, instead of doing them serially one at a time.</p>
 
 <p>See <a href="Distributed-RPC.html">Distributed RPC</a> for more details.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Concepts.html b/content/releases/current/Concepts.html
index 0c5ea0d..bfd8b7a 100644
--- a/content/releases/current/Concepts.html
+++ b/content/releases/current/Concepts.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page lists the main concepts of Storm and links to resources where you can find more information. The concepts discussed are:</p>
+<div class="documentation-content"><p>This page lists the main concepts of Storm and links to resources where you can find more information. The concepts discussed are:</p>
 
 <ol>
 <li>Topologies</li>
@@ -268,7 +268,7 @@
 <ul>
 <li><a href="javadocs/org/apache/storm/Config.html#TOPOLOGY_WORKERS">Config.TOPOLOGY_WORKERS</a>: this config sets the number of workers to allocate for executing the topology</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Configuration.html b/content/releases/current/Configuration.html
index fcee36e..6f300d9 100644
--- a/content/releases/current/Configuration.html
+++ b/content/releases/current/Configuration.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm has a variety of configurations for tweaking the behavior of nimbus, supervisors, and running topologies. Some configurations are system configurations and cannot be modified on a topology by topology basis, whereas other configurations can be modified per topology. </p>
+<div class="documentation-content"><p>Storm has a variety of configurations for tweaking the behavior of nimbus, supervisors, and running topologies. Some configurations are system configurations and cannot be modified on a topology-by-topology basis, whereas other configurations can be modified per topology. </p>
 
 <p>Every configuration has a default value defined in <a href="http://github.com/apache/storm/blob/v1.2.1/conf/defaults.yaml">defaults.yaml</a> in the Storm codebase. You can override these configurations by defining a storm.yaml in the classpath of Nimbus and the supervisors. Finally, you can define a topology-specific configuration that you submit along with your topology when using <a href="javadocs/org/apache/storm/StormSubmitter.html">StormSubmitter</a>. However, the topology-specific configuration can only override configs prefixed with &quot;TOPOLOGY&quot;.</p>
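The layering described above (defaults.yaml, then storm.yaml, then the topology-specific config, where only <code>topology.</code>-prefixed keys may be overridden by the last layer) can be modeled in a few lines. This is a toy model with made-up method names, not Storm's actual config-loading code:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of Storm's config precedence; not Storm's implementation.
public class ConfigPrecedenceDemo {
    static Map<String, Object> effectiveConfig(Map<String, Object> defaults,
                                               Map<String, Object> clusterYaml,
                                               Map<String, Object> topologyConf) {
        Map<String, Object> conf = new HashMap<>(defaults);
        conf.putAll(clusterYaml);                  // storm.yaml overrides defaults.yaml
        topologyConf.forEach((key, value) -> {
            if (key.startsWith("topology."))       // only topology.* keys may win
                conf.put(key, value);
        });
        return conf;
    }

    public static void main(String[] args) {
        Map<String, Object> defaults = Map.of("topology.workers", 1, "nimbus.seeds", "a");
        Map<String, Object> cluster = Map.of("topology.workers", 2);
        Map<String, Object> topo = Map.of("topology.workers", 4, "nimbus.seeds", "b");
        System.out.println(effectiveConfig(defaults, cluster, topo));
    }
}
```

In the example, the topology config successfully bumps <code>topology.workers</code> to 4, while its attempt to override the non-topology key is ignored.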
 
@@ -175,7 +175,7 @@
 <li><a href="Running-topologies-on-a-production-cluster.html">Running topologies on a production cluster</a>: lists useful configurations when running topologies on a cluster</li>
 <li><a href="Local-mode.html">Local mode</a>: lists useful configurations when using local mode</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Contributing-to-Storm.html b/content/releases/current/Contributing-to-Storm.html
index 8badb1c..9fa0bdb 100644
--- a/content/releases/current/Contributing-to-Storm.html
+++ b/content/releases/current/Contributing-to-Storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="getting-started-with-contributing">Getting started with contributing</h3>
+<div class="documentation-content"><h3 id="getting-started-with-contributing">Getting started with contributing</h3>
 
 <p>Some of the issues on the <a href="https://issues.apache.org/jira/browse/STORM">issue tracker</a> are marked with the <a href="https://issues.apache.org/jira/browse/STORM-2891?jql=project%20%3D%20STORM%20AND%20status%20%3D%20Open%20AND%20labels%20in%20(newbie%2C%20%22newbie%2B%2B%22)">&quot;Newbie&quot;</a> label. If you&#39;re interested in contributing to Storm but don&#39;t know where to begin, these are good issues to start with. These issues are a great way to get your feet wet with learning the codebase because they require learning about only an isolated portion of the codebase and are a relatively small amount of work.</p>
 
@@ -172,7 +172,7 @@
 <h3 id="contributing-documentation">Contributing documentation</h3>
 
 <p>Documentation contributions are very welcome! The best way to send contributions is as emails through the mailing list.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Creating-a-new-Storm-project.html b/content/releases/current/Creating-a-new-Storm-project.html
index e679958..9dc8638 100644
--- a/content/releases/current/Creating-a-new-Storm-project.html
+++ b/content/releases/current/Creating-a-new-Storm-project.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page outlines how to set up a Storm project for development. The steps are:</p>
+<div class="documentation-content"><p>This page outlines how to set up a Storm project for development. The steps are:</p>
 
 <ol>
 <li>Add Storm jars to classpath</li>
@@ -166,7 +166,7 @@
 <p>For more information on writing topologies in other languages, see <a href="Using-non-JVM-languages-with-Storm.html">Using non-JVM languages with Storm</a>.</p>
 
 <p>To test that everything is working in Eclipse, you should now be able to <code>Run</code> the <code>WordCountTopology.java</code> file. You will see messages being emitted at the console for 10 seconds.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/DSLs-and-multilang-adapters.html b/content/releases/current/DSLs-and-multilang-adapters.html
index 8be8db5..7f10518 100644
--- a/content/releases/current/DSLs-and-multilang-adapters.html
+++ b/content/releases/current/DSLs-and-multilang-adapters.html
@@ -144,14 +144,14 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="https://github.com/velvia/ScalaStorm">Scala DSL</a></li>
 <li><a href="https://github.com/colinsurprenant/redstorm">JRuby DSL</a></li>
 <li><a href="Clojure-DSL.html">Clojure DSL</a></li>
 <li><a href="https://github.com/tomdz/storm-esper">Storm/Esper integration</a>: Streaming SQL on top of Storm</li>
 <li><a href="https://github.com/dan-blanchard/io-storm">io-storm</a>: Perl multilang adapter</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Daemon-Fault-Tolerance.html b/content/releases/current/Daemon-Fault-Tolerance.html
index 565e12c..8981fb0 100644
--- a/content/releases/current/Daemon-Fault-Tolerance.html
+++ b/content/releases/current/Daemon-Fault-Tolerance.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm has several different daemon processes.  Nimbus that schedules workers, supervisors that launch and kill workers, the log viewer that gives access to logs, and the UI that shows the status of a cluster.</p>
+<div class="documentation-content"><p>Storm has several different daemon processes: Nimbus, which schedules workers; supervisors, which launch and kill workers; the log viewer, which gives access to logs; and the UI, which shows the status of a cluster.</p>
 
 <h2 id="what-happens-when-a-worker-dies">What happens when a worker dies?</h2>
 
@@ -169,7 +169,7 @@
 <h2 id="how-does-storm-guarantee-data-processing">How does Storm guarantee data processing?</h2>
 
 <p>Storm provides mechanisms to guarantee data processing even if nodes die or messages are lost. See <a href="Guaranteeing-message-processing.html">Guaranteeing message processing</a> for the details.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Defining-a-non-jvm-language-dsl-for-storm.html b/content/releases/current/Defining-a-non-jvm-language-dsl-for-storm.html
index c3fde21..38f9395 100644
--- a/content/releases/current/Defining-a-non-jvm-language-dsl-for-storm.html
+++ b/content/releases/current/Defining-a-non-jvm-language-dsl-for-storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The right place to start to learn how to make a non-JVM DSL for Storm is <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/storm.thrift">storm-core/src/storm.thrift</a>. Since Storm topologies are just Thrift structures, and Nimbus is a Thrift daemon, you can create and submit topologies in any language.</p>
+<div class="documentation-content"><p>The right place to start to learn how to make a non-JVM DSL for Storm is <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/storm.thrift">storm-core/src/storm.thrift</a>. Since Storm topologies are just Thrift structures, and Nimbus is a Thrift daemon, you can create and submit topologies in any language.</p>
 
 <p>When you create the Thrift structs for spouts and bolts, the code for the spout or bolt is specified in the ComponentObject struct:</p>
 <div class="highlight"><pre><code class="language-" data-lang="">union ComponentObject {
@@ -165,7 +165,7 @@
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kt">void</span> <span class="nf">submitTopology</span><span class="o">(</span><span class="mi">1</span><span class="o">:</span> <span class="n">string</span> <span class="n">name</span><span class="o">,</span> <span class="mi">2</span><span class="o">:</span> <span class="n">string</span> <span class="n">uploadedJarLocation</span><span class="o">,</span> <span class="mi">3</span><span class="o">:</span> <span class="n">string</span> <span class="n">jsonConf</span><span class="o">,</span> <span class="mi">4</span><span class="o">:</span> <span class="n">StormTopology</span> <span class="n">topology</span><span class="o">)</span> <span class="kd">throws</span> <span class="o">(</span><span class="mi">1</span><span class="o">:</span> <span class="n">AlreadyAliveException</span> <span class="n">e</span><span class="o">,</span> <span class="mi">2</span><span class="o">:</span> <span class="n">InvalidTopologyException</span> <span class="n">ite</span><span class="o">);</span>
 </code></pre></div>
 <p>Finally, one of the key things to do in a non-JVM DSL is make it easy to define the entire topology in one file (the bolts, spouts, and the definition of the topology).</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Distributed-RPC.html b/content/releases/current/Distributed-RPC.html
index 73e2569..2baa19b 100644
--- a/content/releases/current/Distributed-RPC.html
+++ b/content/releases/current/Distributed-RPC.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The idea behind distributed RPC (DRPC) is to parallelize the computation of really intense functions on the fly using Storm. The Storm topology takes in as input a stream of function arguments, and it emits an output stream of the results for each of those function calls. </p>
+<div class="documentation-content"><p>The idea behind distributed RPC (DRPC) is to parallelize the computation of really intense functions on the fly using Storm. The Storm topology takes in as input a stream of function arguments, and it emits an output stream of the results for each of those function calls. </p>
 
 <p>DRPC is not so much a feature of Storm as it is a pattern expressed from Storm&#39;s primitives of streams, spouts, bolts, and topologies. DRPC could have been packaged as a separate library from Storm, but it&#39;s so useful that it&#39;s bundled with Storm.</p>
 
@@ -330,7 +330,7 @@
 <li>KeyedFairBolt for weaving the processing of multiple requests at the same time</li>
 <li>How to use <code>CoordinatedBolt</code> directly</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Eventlogging.html b/content/releases/current/Eventlogging.html
index 8d9a05f..4557c1b 100644
--- a/content/releases/current/Eventlogging.html
+++ b/content/releases/current/Eventlogging.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="introduction">Introduction</h1>
+<div class="documentation-content"><h1 id="introduction">Introduction</h1>
 
 <p>The topology event inspector provides the ability to view tuples as they flow through the different stages of a storm topology.
 This can be useful for inspecting the tuples emitted at a spout or a bolt in the topology pipeline while the topology is running, without stopping or redeploying the topology. The normal flow of tuples from the spouts to the bolts is not affected by turning on event logging.</p>
@@ -269,7 +269,7 @@
 
 <p>Please keep in mind that the EventLoggerBolt is just a kind of Bolt, so the whole throughput of the topology will go down when the registered event loggers cannot keep up with incoming events; take the same care with it as with a normal Bolt.
 One way to avoid this is to make your implementation of IEventLogger <code>non-blocking</code>.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/FAQ.html b/content/releases/current/FAQ.html
index 81e8d50..562ee8d 100644
--- a/content/releases/current/FAQ.html
+++ b/content/releases/current/FAQ.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="best-practices">Best Practices</h2>
+<div class="documentation-content"><h2 id="best-practices">Best Practices</h2>
 
 <h3 id="what-rules-of-thumb-can-you-give-me-for-configuring-storm-trident">What rules of thumb can you give me for configuring Storm+Trident?</h3>
 
@@ -276,7 +276,7 @@
 <li>When possible, make your process incremental: each value that comes in makes the answer more and more true. A Trident ReducerAggregator is an operator that takes a prior result and a set of new records and returns a new result. This lets the result be cached and serialized to a datastore; if a server drops offline for a day and then comes back with a full day&#39;s worth of data in a rush, the old results will be calmly retrieved and updated.</li>
 <li>Lambda architecture: Record all events into an archival store (S3, HBase, HDFS) on receipt. In the fast layer, once the time window is clear, process the bucket to get an actionable answer, and ignore everything older than the time window. Periodically run a global aggregation to calculate a &quot;correct&quot; answer.</li>
 </ul>
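The "prior result plus new records yields a new result" idea behind a ReducerAggregator can be sketched without any Storm dependencies. This is a simplified analogue (class and method names are made up, and the real Trident interface operates on tuples, not plain integers):

```java
import java.util.List;

// Simplified analogue of the incremental-aggregation pattern: the cached
// prior result plus a batch of new records produces the new result, so a
// late-arriving batch simply updates what is already stored.
public class IncrementalSum {
    static long update(long prior, List<Integer> newRecords) {
        long result = prior;
        for (int record : newRecords)
            result += record;   // each record makes the answer "more true"
        return result;
    }

    public static void main(String[] args) {
        long total = update(0L, List.of(3, 4)); // first batch arrives
        total = update(total, List.of(5));      // a late batch just updates the cache
        System.out.println(total);              // prints 12
    }
}
```

Because the operator only needs the prior result and the new batch, the result can live in a datastore between invocations, which is exactly what makes the recovery scenario in the bullet above cheap.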
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Fault-tolerance.html b/content/releases/current/Fault-tolerance.html
index bf71b1a..61cbf6b 100644
--- a/content/releases/current/Fault-tolerance.html
+++ b/content/releases/current/Fault-tolerance.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains the design details of Storm that make it a fault-tolerant system.</p>
+<div class="documentation-content"><p>This page explains the design details of Storm that make it a fault-tolerant system.</p>
 
 <h2 id="what-happens-when-a-worker-dies">What happens when a worker dies?</h2>
 
@@ -169,7 +169,7 @@
 <h2 id="how-does-storm-guarantee-data-processing">How does Storm guarantee data processing?</h2>
 
 <p>Storm provides mechanisms to guarantee data processing even if nodes die or messages are lost. See <a href="Guaranteeing-message-processing.html">Guaranteeing message processing</a> for the details.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Guaranteeing-message-processing.html b/content/releases/current/Guaranteeing-message-processing.html
index fe6aadc..e7a81c4 100644
--- a/content/releases/current/Guaranteeing-message-processing.html
+++ b/content/releases/current/Guaranteeing-message-processing.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm offers several different levels of guaranteed message processing, including best effort, at least once, and exactly once through <a href="Trident-tutorial.html">Trident</a>.
+<div class="documentation-content"><p>Storm offers several different levels of guaranteed message processing, including best effort, at least once, and exactly once through <a href="Trident-tutorial.html">Trident</a>.
 This page describes how Storm can guarantee at least once processing.</p>
 
 <h3 id="what-does-it-mean-for-a-message-to-be-fully-processed">What does it mean for a message to be &quot;fully processed&quot;?</h3>
@@ -301,7 +301,7 @@
 <p>The second way is to remove reliability on a message by message basis. You can turn off tracking for an individual spout tuple by omitting a message id in the <code>SpoutOutputCollector.emit</code> method.</p>
 
 <p>Finally, if you don&#39;t care if a particular subset of the tuples downstream in the topology fail to be processed, you can emit them as unanchored tuples. Since they&#39;re not anchored to any spout tuples, they won&#39;t cause any spout tuples to fail if they aren&#39;t acked.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Hooks.html b/content/releases/current/Hooks.html
index 138481a..67e52d3 100644
--- a/content/releases/current/Hooks.html
+++ b/content/releases/current/Hooks.html
@@ -144,13 +144,13 @@
 
 <p class="post-meta"></p>
 
-<p>Storm provides hooks with which you can insert custom code to run on any number of events within Storm. You create a hook by extending the <a href="javadocs/org/apache/storm/hooks/BaseTaskHook.html">BaseTaskHook</a> class and overriding the appropriate method for the event you want to catch. There are two ways to register your hook:</p>
+<div class="documentation-content"><p>Storm provides hooks with which you can insert custom code to run on any number of events within Storm. You create a hook by extending the <a href="javadocs/org/apache/storm/hooks/BaseTaskHook.html">BaseTaskHook</a> class and overriding the appropriate method for the event you want to catch. There are two ways to register your hook:</p>
 
 <ol>
 <li>In the open method of your spout or the prepare method of your bolt, using the <a href="javadocs/org/apache/storm/task/TopologyContext.html#addTaskHook">TopologyContext#addTaskHook</a> method.</li>
 <li>Through the Storm configuration using the <a href="javadocs/org/apache/storm/Config.html#TOPOLOGY_AUTO_TASK_HOOKS">&quot;topology.auto.task.hooks&quot;</a> config. These hooks are automatically registered in every spout or bolt, and are useful for doing things like integrating with a custom monitoring system.</li>
 </ol>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Implementation-docs.html b/content/releases/current/Implementation-docs.html
index 6dcbf6a..e522728 100644
--- a/content/releases/current/Implementation-docs.html
+++ b/content/releases/current/Implementation-docs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This section of the wiki is dedicated to explaining how Storm is implemented. You should have a good grasp of how to use Storm before reading these sections. </p>
+<div class="documentation-content"><p>This section of the wiki is dedicated to explaining how Storm is implemented. You should have a good grasp of how to use Storm before reading these sections. </p>
 
 <ul>
 <li><a href="Structure-of-the-codebase.html">Structure of the codebase</a></li>
@@ -154,7 +154,7 @@
 <li><a href="nimbus-ha-design.html">Nimbus HA</a></li>
 <li><a href="storm-sql-internal.html">Storm SQL</a></li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Installing-native-dependencies.html b/content/releases/current/Installing-native-dependencies.html
index 1371936..b7fee03 100644
--- a/content/releases/current/Installing-native-dependencies.html
+++ b/content/releases/current/Installing-native-dependencies.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The native dependencies are only needed on actual Storm clusters. When running Storm in local mode, Storm uses a pure Java messaging system so that you don&#39;t need to install native dependencies on your development machine.</p>
+<div class="documentation-content"><p>The native dependencies are only needed on actual Storm clusters. When running Storm in local mode, Storm uses a pure Java messaging system so that you don&#39;t need to install native dependencies on your development machine.</p>
 
 <p>Installing ZeroMQ and JZMQ is usually straightforward. Sometimes, however, people run into issues with autoconf and get strange errors. If you run into any issues, please email the <a href="http://groups.google.com/group/storm-user">Storm mailing list</a> or come get help in the #storm-user room on freenode. </p>
 
@@ -175,7 +175,7 @@
 </ol>
 
 <p>If you run into any errors when running <code>./configure</code>, <a href="http://stackoverflow.com/questions/3522248/how-do-i-compile-jzmq-for-zeromq-on-osx">this thread</a> may provide a solution.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Joins.html b/content/releases/current/Joins.html
index b95e985..410e45a 100644
--- a/content/releases/current/Joins.html
+++ b/content/releases/current/Joins.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm core supports joining multiple data streams into one with the help of <code>JoinBolt</code>.
+<div class="documentation-content"><p>Storm core supports joining multiple data streams into one with the help of <code>JoinBolt</code>.
 <code>JoinBolt</code> is a Windowed bolt, i.e. it waits for the configured window duration to match up the
 tuples among the streams being joined. This helps align the streams within a Window boundary.</p>
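Conceptually, within each window a join bolt buffers the tuples arriving on each stream and then matches them on the join field. A toy sketch of one window's worth of matching (this is not `JoinBolt`'s implementation, just the idea):

```python
# Toy inner join over one window of buffered tuples, matched on `key`.
from collections import defaultdict

def window_join(stream_a, stream_b, key):
    """Inner-join two lists of dicts (one window's tuples) on `key`."""
    index = defaultdict(list)
    for t in stream_a:
        index[t[key]].append(t)
    joined = []
    for t in stream_b:
        for match in index.get(t[key], []):
            joined.append({**match, **t})  # merge matched tuples
    return joined

purchases = [{"userId": 1, "item": "book"}]
users = [{"userId": 1, "name": "ada"}, {"userId": 2, "name": "bob"}]
print(window_join(purchases, users, "userId"))
# [{'userId': 1, 'item': 'book', 'name': 'ada'}]
```

Tuples that find no partner within the window boundary (here, user 2) simply produce no output for that window.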
 
@@ -272,7 +272,7 @@
 <li>Lastly, keep the window size to the minimum value necessary for solving the problem at hand.</li>
 </ul></li>
 </ol>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Kestrel-and-Storm.html b/content/releases/current/Kestrel-and-Storm.html
index c31597d..bd1fb02 100644
--- a/content/releases/current/Kestrel-and-Storm.html
+++ b/content/releases/current/Kestrel-and-Storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains how to use Storm to consume items from a Kestrel cluster.</p>
+<div class="documentation-content"><p>This page explains how to use Storm to consume items from a Kestrel cluster.</p>
 
 <h2 id="preliminaries">Preliminaries</h2>
 
@@ -334,7 +334,7 @@
 Now execute the program to add items to the queue and launch the Storm topology. The order in which you launch the programs is of no importance.
 
 If you run the topology with TOPOLOGY_DEBUG you should see tuples being emitted in the topology.
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/current/Lifecycle-of-a-topology.html b/content/releases/current/Lifecycle-of-a-topology.html
index 7239101..d91ed32 100644
--- a/content/releases/current/Lifecycle-of-a-topology.html
+++ b/content/releases/current/Lifecycle-of-a-topology.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>(<strong>NOTE</strong>: this page is based on the 0.7.1 code; many things have changed since then, including a split between tasks and executors, and a reorganization of the code under <code>storm-core/src</code> rather than <code>src/</code>.)</p>
+<div class="documentation-content"><p>(<strong>NOTE</strong>: this page is based on the 0.7.1 code; many things have changed since then, including a split between tasks and executors, and a reorganization of the code under <code>storm-core/src</code> rather than <code>src/</code>.)</p>
 
 <p>This page explains in detail the lifecycle of a topology from running the &quot;storm jar&quot; command to uploading the topology to Nimbus to the supervisors starting/stopping workers to workers and tasks setting themselves up. It also explains how Nimbus monitors topologies and how topologies are shutdown when they are killed.</p>
 
@@ -261,7 +261,7 @@
 <li>Removing a topology cleans out the assignment and static information from ZK <a href="https://github.com/apache/storm/blob/0.7.1/src/clj/org/apache/storm/daemon/nimbus.clj#L116">code</a></li>
 <li>A separate cleanup thread runs the <code>do-cleanup</code> function which will clean up the heartbeat dir and the jars/configs stored locally. <a href="https://github.com/apache/storm/blob/0.7.1/src/clj/org/apache/storm/daemon/nimbus.clj#L577">code</a></li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Local-mode.html b/content/releases/current/Local-mode.html
index 5149afd..9152f7e 100644
--- a/content/releases/current/Local-mode.html
+++ b/content/releases/current/Local-mode.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Local mode simulates a Storm cluster in process and is useful for developing and testing topologies. Running topologies in local mode is similar to running topologies <a href="Running-topologies-on-a-production-cluster.html">on a cluster</a>. </p>
+<div class="documentation-content"><p>Local mode simulates a Storm cluster in process and is useful for developing and testing topologies. Running topologies in local mode is similar to running topologies <a href="Running-topologies-on-a-production-cluster.html">on a cluster</a>. </p>
 
 <p>To create an in-process cluster, simply use the <code>LocalCluster</code> class. For example:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kn">import</span> <span class="nn">org.apache.storm.LocalCluster</span><span class="o">;</span>
@@ -164,7 +164,7 @@
 <li><strong>Config.TOPOLOGY_MAX_TASK_PARALLELISM</strong>: This config puts a ceiling on the number of threads spawned for a single component. Oftentimes production topologies have a lot of parallelism (hundreds of threads) which places unreasonable load when trying to test the topology in local mode. This config lets you easily control that parallelism.</li>
 <li><strong>Config.TOPOLOGY_DEBUG</strong>: When this is set to true, Storm will log a message every time a tuple is emitted from any spout or bolt. This is extremely useful for debugging.</li>
 </ol>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Logs.html b/content/releases/current/Logs.html
index 4d8c3af..314eff2 100644
--- a/content/releases/current/Logs.html
+++ b/content/releases/current/Logs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Logs in Storm are essential for tracking the status, operations, error messages and debug information for all the 
+<div class="documentation-content"><p>Logs in Storm are essential for tracking the status, operations, error messages and debug information for all the 
 daemons (e.g., nimbus, supervisor, logviewer, drpc, ui, pacemaker) and topologies&#39; workers.</p>
 
 <h3 id="location-of-the-logs">Location of the Logs</h3>
@@ -171,7 +171,7 @@
 <p>Search in a topology: a user can also search a string for a certain topology by clicking the magnifying-lens icon at the top right corner of the UI page. This means the UI will try to search on all the supervisor nodes in a distributed way to find the matched string in all logs for this topology. The search can happen for either normal text log files or rolled zip log files by checking/unchecking the &quot;Search archived logs:&quot; box. The matched results are then shown on the UI with url links, directing the user to the matching logs on each supervisor node. This powerful feature helps users find problematic supervisor nodes running a topology.</p>
 
 <p><img src="images/search-a-topology.png" alt="Search in a topology" title="Search in a topology"></p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Maven.html b/content/releases/current/Maven.html
index 2a9d037..f356085 100644
--- a/content/releases/current/Maven.html
+++ b/content/releases/current/Maven.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>To develop topologies, you&#39;ll need the Storm jars on your classpath. You should either include the unpacked jars in the classpath for your project or use Maven to include Storm as a development dependency. Storm is hosted on Maven Central. To include Storm in your project as a development dependency, add the following to your pom.xml:</p>
+<div class="documentation-content"><p>To develop topologies, you&#39;ll need the Storm jars on your classpath. You should either include the unpacked jars in the classpath for your project or use Maven to include Storm as a development dependency. Storm is hosted on Maven Central. To include Storm in your project as a development dependency, add the following to your pom.xml:</p>
 <div class="highlight"><pre><code class="language-xml" data-lang="xml"><span class="nt">&lt;dependency&gt;</span>
   <span class="nt">&lt;groupId&gt;</span>org.apache.storm<span class="nt">&lt;/groupId&gt;</span>
   <span class="nt">&lt;artifactId&gt;</span>storm-core<span class="nt">&lt;/artifactId&gt;</span>
@@ -157,7 +157,7 @@
 <h3 id="developing-storm">Developing Storm</h3>
 
 <p>Please refer to <a href="http://github.com/apache/storm/blob/v1.2.1/DEVELOPER.md">DEVELOPER.md</a> for more details.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Message-passing-implementation.html b/content/releases/current/Message-passing-implementation.html
index 0efb3f1..fc46bb0 100644
--- a/content/releases/current/Message-passing-implementation.html
+++ b/content/releases/current/Message-passing-implementation.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>(Note: this walkthrough is out of date as of 0.8.0. 0.8.0 revamped the message passing infrastructure to be based on the Disruptor)</p>
+<div class="documentation-content"><p>(Note: this walkthrough is out of date as of 0.8.0. 0.8.0 revamped the message passing infrastructure to be based on the Disruptor)</p>
 
 <p>This page walks through how emitting and transferring tuples works in Storm.</p>
 
@@ -186,7 +186,7 @@
 </ul></li>
 </ul></li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Metrics.html b/content/releases/current/Metrics.html
index 26d2047..94f1e8e 100644
--- a/content/releases/current/Metrics.html
+++ b/content/releases/current/Metrics.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm exposes a metrics interface to report summary statistics across the full topology.
+<div class="documentation-content"><p>Storm exposes a metrics interface to report summary statistics across the full topology.
 The numbers you see on the UI come from some of these built in metrics, but are reported through the worker heartbeats instead of through the IMetricsConsumer described below.</p>
 
 <h3 id="metric-types">Metric Types</h3>
@@ -466,7 +466,7 @@
 <li><code>newWorkerEvent</code> is 1 when a worker is first started and 0 all other times.  This can be used to tell when a worker has crashed and is restarted.</li>
 <li><code>startTimeSecs</code> is when the worker started in seconds since the epoch</li>
 </ul>
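The `newWorkerEvent` semantics above (1 on the first report after a worker starts, 0 afterwards) can be sketched as a tiny gauge. This is a hypothetical Python analogue; the real metric lives in the Java worker.

```python
# Gauge that reads 1 on the first poll after worker start, 0 afterwards,
# so a monitoring system can detect worker (re)starts.

class NewWorkerEventGauge:
    def __init__(self):
        self._fresh = True

    def get_value_and_reset(self):
        value = 1 if self._fresh else 0
        self._fresh = False
        return value

g = NewWorkerEventGauge()
print([g.get_value_and_reset() for _ in range(3)])  # [1, 0, 0]
```

A downstream consumer can then alert whenever it sees a 1 for a worker it believed was already running, which indicates a crash and restart.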
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Multilang-protocol.html b/content/releases/current/Multilang-protocol.html
index 3f3accd..5b65343 100644
--- a/content/releases/current/Multilang-protocol.html
+++ b/content/releases/current/Multilang-protocol.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains the multilang protocol as of Storm 0.7.1. Versions prior to 0.7.1 used a somewhat different protocol, documented [here](Storm-multi-language-protocol-(versions-0.7.0-and-below).html).</p>
+<div class="documentation-content"><p>This page explains the multilang protocol as of Storm 0.7.1. Versions prior to 0.7.1 used a somewhat different protocol, documented <a href="Storm-multi-language-protocol-(versions-0.7.0-and-below).html">here</a>.</p>
 
 <h1 id="storm-multi-language-protocol">Storm Multi-Language Protocol</h1>
 
@@ -436,7 +436,7 @@
 </code></pre></div>
 <p>When subprocess receives heartbeat tuple, it must send a <code>sync</code> command back to
 ShellBolt.</p>
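The framing and the heartbeat/sync exchange described above can be sketched with only the standard library: each message is a JSON payload followed by a line containing `end`. The heartbeat stream name `__heartbeat` follows the convention used by multilang adapters; treat the details here as illustrative rather than normative.

```python
# Sketch of multilang framing: JSON payload, then a line "end".
import io
import json

def write_msg(out, msg):
    out.write(json.dumps(msg) + "\n")
    out.write("end\n")

def read_msg(inp):
    lines = []
    for line in inp:
        line = line.rstrip("\n")
        if line == "end":
            break
        lines.append(line)
    return json.loads("\n".join(lines))

# Subprocess side: answer a heartbeat tuple with a sync command.
inp = io.StringIO()
write_msg(inp, {"id": "-6955786537413359385", "stream": "__heartbeat", "tuple": []})
inp.seek(0)

out = io.StringIO()
msg = read_msg(inp)
if msg.get("stream") == "__heartbeat":
    write_msg(out, {"command": "sync"})

print(out.getvalue())  # the framed sync reply
```

In a real shell component, `inp` and `out` would be the subprocess's stdin and stdout rather than in-memory buffers.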
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Pacemaker.html b/content/releases/current/Pacemaker.html
index 9257f35..7353e9a 100644
--- a/content/releases/current/Pacemaker.html
+++ b/content/releases/current/Pacemaker.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="introduction">Introduction</h3>
+<div class="documentation-content"><h3 id="introduction">Introduction</h3>
 
 <p>Pacemaker is a Storm daemon designed to process heartbeats from workers. As Storm is scaled up, ZooKeeper begins to become a bottleneck due to high volumes of writes from workers doing heartbeats. Lots of writes to disk and too much traffic across the network are generated as ZooKeeper tries to maintain consistency.</p>
 
@@ -258,7 +258,7 @@
 On a 270 supervisor cluster, fully scheduled with topologies, Pacemaker resource utilization was 70% of one core and nearly 1GiB of RAM on a machine with 4 <code>Intel(R) Xeon(R) CPU E5530 @ 2.40GHz</code> and 24GiB of RAM.</p>
 
 <p>Pacemaker now supports HA. Multiple Pacemaker instances can be used at once in a Storm cluster to allow massive scalability. Just include the names of the Pacemaker hosts in the pacemaker.servers config, and workers and Nimbus will start communicating with them. They&#39;re fault tolerant as well. The system keeps on working as long as there is at least one pacemaker left running - provided it can handle the load.</p>
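The fault-tolerance behavior described above amounts to client-side failover across the configured server list. A minimal sketch of that idea, assuming a hypothetical `try_server` callable standing in for the real heartbeat RPC (this is not Storm's actual client code):

```python
# Sketch of failover across pacemaker.servers: try each host in order
# until one accepts the heartbeat.

def send_heartbeat(servers, try_server):
    """try_server(host) returns True on success; returns the host used."""
    last_err = None
    for host in servers:
        try:
            if try_server(host):
                return host
        except OSError as e:
            last_err = e  # unreachable host: fall through to the next one
    raise RuntimeError("no pacemaker server reachable") from last_err

servers = ["pacemaker1.example.com", "pacemaker2.example.com"]
# Simulate the first server being down:
used = send_heartbeat(servers, lambda h: h.endswith("2.example.com"))
print(used)  # pacemaker2.example.com
```

As long as at least one host in the list responds, heartbeats keep flowing, which is the HA property the paragraph above describes.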
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Powered-By.html b/content/releases/current/Powered-By.html
index b939e4f..eeb9eb2 100644
--- a/content/releases/current/Powered-By.html
+++ b/content/releases/current/Powered-By.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Want to be added to this page? Send an email <a href="mailto:nathan.marz@gmail.com">here</a>.</p>
+<div class="documentation-content"><p>Want to be added to this page? Send an email <a href="mailto:nathan.marz@gmail.com">here</a>.</p>
 
 <table>
 
@@ -1169,7 +1169,7 @@
 
 
 </table>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Project-ideas.html b/content/releases/current/Project-ideas.html
index ee22774..625f451 100644
--- a/content/releases/current/Project-ideas.html
+++ b/content/releases/current/Project-ideas.html
@@ -144,12 +144,12 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><strong>DSLs for non-JVM languages:</strong> These DSLs should be all-inclusive and not require any Java for the creation of topologies, spouts, or bolts. Since topologies are <a href="http://thrift.apache.org/">Thrift</a> structs, Nimbus is a Thrift service, and bolts can be written in any language, this is possible.</li>
 <li><strong>Online machine learning algorithms:</strong> Something like <a href="http://mahout.apache.org/">Mahout</a> but for online algorithms</li>
 <li><strong>Suite of performance benchmarks:</strong> These benchmarks should test Storm&#39;s performance on CPU and IO intensive workloads. There should be benchmarks for different classes of applications, such as stream processing (where throughput is the priority) and distributed RPC (where latency is the priority). </li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Rationale.html b/content/releases/current/Rationale.html
index 2fd316d..6dc60f4 100644
--- a/content/releases/current/Rationale.html
+++ b/content/releases/current/Rationale.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The past decade has seen a revolution in data processing. MapReduce, Hadoop, and related technologies have made it possible to store and process data at scales previously unthinkable. Unfortunately, these data processing technologies are not realtime systems, nor are they meant to be. There&#39;s no hack that will turn Hadoop into a realtime system; realtime data processing has a fundamentally different set of requirements than batch processing.</p>
+<div class="documentation-content"><p>The past decade has seen a revolution in data processing. MapReduce, Hadoop, and related technologies have made it possible to store and process data at scales previously unthinkable. Unfortunately, these data processing technologies are not realtime systems, nor are they meant to be. There&#39;s no hack that will turn Hadoop into a realtime system; realtime data processing has a fundamentally different set of requirements than batch processing.</p>
 
 <p>However, realtime data processing at massive scale is becoming more and more of a requirement for businesses. The lack of a &quot;Hadoop of realtime&quot; has become the biggest hole in the data processing ecosystem.</p>
 
@@ -176,7 +176,7 @@
 <li><strong>Fault-tolerant</strong>: If there are faults during execution of your computation, Storm will reassign tasks as necessary. Storm makes sure that a computation can run forever (or until you kill the computation).</li>
 <li><strong>Programming language agnostic</strong>: Robust and scalable realtime processing shouldn&#39;t be limited to a single platform. Storm topologies and processing components can be defined in any language, making Storm accessible to nearly anyone.</li>
 </ol>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Resource_Aware_Scheduler_overview.html b/content/releases/current/Resource_Aware_Scheduler_overview.html
index 2055f21..8c3a5d1 100644
--- a/content/releases/current/Resource_Aware_Scheduler_overview.html
+++ b/content/releases/current/Resource_Aware_Scheduler_overview.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="introduction">Introduction</h1>
+<div class="documentation-content"><h1 id="introduction">Introduction</h1>
 
 <p>The purpose of this document is to describe the Resource Aware Scheduler for the Storm distributed real-time computation system. It provides a high-level description of the resource aware scheduler in Storm. Some of the benefits of using a resource aware scheduler on top of Storm are outlined in the following presentation at Hadoop Summit 2016:</p>
 
@@ -617,7 +617,7 @@
 <td><img src="images/ras_new_strategy_runtime_yahoo.png" alt=""></td>
 </tr>
 </tbody></table>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Running-topologies-on-a-production-cluster.html b/content/releases/current/Running-topologies-on-a-production-cluster.html
index c49b731..af54a31 100644
--- a/content/releases/current/Running-topologies-on-a-production-cluster.html
+++ b/content/releases/current/Running-topologies-on-a-production-cluster.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Running topologies on a production cluster is similar to running in <a href="Local-mode.html">Local mode</a>. Here are the steps:</p>
+<div class="documentation-content"><p>Running topologies on a production cluster is similar to running in <a href="Local-mode.html">Local mode</a>. Here are the steps:</p>
 
 <p>1) Define the topology (Use <a href="javadocs/org/apache/storm/topology/TopologyBuilder.html">TopologyBuilder</a> if defining using Java)</p>
 
@@ -212,7 +212,7 @@
 <p>The best place to monitor a topology is using the Storm UI. The Storm UI provides information about errors happening in tasks and fine-grained stats on the throughput and latency performance of each component of each running topology.</p>
 
 <p>You can also look at the worker logs on the cluster machines.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/SECURITY.html b/content/releases/current/SECURITY.html
index 8a6978f..9515823 100644
--- a/content/releases/current/SECURITY.html
+++ b/content/releases/current/SECURITY.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="running-apache-storm-securely">Running Apache Storm Securely</h1>
+<div class="documentation-content"><h1 id="running-apache-storm-securely">Running Apache Storm Securely</h1>
 
 <p>Apache Storm offers a range of configuration options when trying to secure
 your cluster.  By default all authentication and authorization is disabled but 
@@ -683,7 +683,7 @@
  | storm.zookeeper.topology.auth.payload | A string representing the payload for topology Zookeeper authentication. |</p>
 
 <p>Note: If storm.zookeeper.topology.auth.payload isn&#39;t set, Storm will generate a ZooKeeper secret payload for MD5-digest with the generateZookeeperDigestSecretPayload() method.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/STORM-UI-REST-API.html b/content/releases/current/STORM-UI-REST-API.html
index 92aca68..12e9159 100644
--- a/content/releases/current/STORM-UI-REST-API.html
+++ b/content/releases/current/STORM-UI-REST-API.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The Storm UI daemon provides a REST API that allows you to interact with a Storm cluster, which includes retrieving
+<div class="documentation-content"><p>The Storm UI daemon provides a REST API that allows you to interact with a Storm cluster, which includes retrieving
 metrics data and configuration information as well as management operations such as starting or stopping topologies.</p>
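Endpoint URLs follow the `/api/v1/` scheme this page documents; the cluster summary endpoint is one example. A small stdlib-only sketch of building such URLs (the host and port are placeholders for your UI daemon):

```python
# Build UI REST API URLs; optional query parameters (e.g. window)
# are URL-encoded onto the path.
from urllib.parse import urlencode

def api_url(host, port, path, **params):
    url = "http://%s:%d/api/v1/%s" % (host, port, path)
    query = "?" + urlencode(params) if params else ""
    return url + query

print(api_url("ui-host", 8080, "cluster/summary"))
# http://ui-host:8080/api/v1/cluster/summary
print(api_url("ui-host", 8080, "topology/WC-1", window="600"))
# http://ui-host:8080/api/v1/topology/WC-1?window=600
```

The responses are JSON, so fetching with `urllib.request.urlopen` and decoding with `json.load` is sufficient for scripting against the API.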
 
 <h1 id="data-format">Data format</h1>
@@ -2936,7 +2936,7 @@
   </span><span class="s2">"error"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Internal Server Error"</span><span class="p">,</span><span class="w">
   </span><span class="s2">"errorMessage"</span><span class="p">:</span><span class="w"> </span><span class="s2">"java.lang.NullPointerException</span><span class="se">\n\t</span><span class="s2">at clojure.core$name.invoke(core.clj:1505)</span><span class="se">\n\t</span><span class="s2">at org.apache.storm.ui.core$component_page.invoke(core.clj:752)</span><span class="se">\n\t</span><span class="s2">at org.apache.storm.ui.core$fn__7766.invoke(core.clj:782)</span><span class="se">\n\t</span><span class="s2">at compojure.core$make_route$fn__5755.invoke(core.clj:93)</span><span class="se">\n\t</span><span class="s2">at compojure.core$if_route$fn__5743.invoke(core.clj:39)</span><span class="se">\n\t</span><span class="s2">at compojure.core$if_method$fn__5736.invoke(core.clj:24)</span><span class="se">\n\t</span><span class="s2">at compojure.core$routing$fn__5761.invoke(core.clj:106)</span><span class="se">\n\t</span><span class="s2">at clojure.core$some.invoke(core.clj:2443)</span><span class="se">\n\t</span><span class="s2">at compojure.core$routing.doInvoke(core.clj:106)</span><span class="se">\n\t</span><span class="s2">at clojure.lang.RestFn.applyTo(RestFn.java:139)</span><span class="se">\n\t</span><span class="s2">at clojure.core$apply.invoke(core.clj:619)</span><span class="se">\n\t</span><span class="s2">at compojure.core$routes$fn__5765.invoke(core.clj:111)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.reload$wrap_reload$fn__6880.invoke(reload.clj:14)</span><span class="se">\n\t</span><span class="s2">at org.apache.storm.ui.core$catch_errors$fn__7800.invoke(core.clj:836)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.keyword_params$wrap_keyword_params$fn__6319.invoke(keyword_params.clj:27)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.nested_params$wrap_nested_params$fn__6358.invoke(nested_params.clj:65)</span><span class="se">\n\t</span><span class="s2">at 
ring.middleware.params$wrap_params$fn__6291.invoke(params.clj:55)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.multipart_params$wrap_multipart_params$fn__6386.invoke(multipart_params.clj:103)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.flash$wrap_flash$fn__6675.invoke(flash.clj:14)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.session$wrap_session$fn__6664.invoke(session.clj:43)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.cookies$wrap_cookies$fn__6595.invoke(cookies.clj:160)</span><span class="se">\n\t</span><span class="s2">at ring.adapter.jetty$proxy_handler$fn__6112.invoke(jetty.clj:16)</span><span class="se">\n\t</span><span class="s2">at ring.adapter.jetty.proxy$org.mortbay.jetty.handler.AbstractHandler$0.handle(Unknown Source)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.Server.handle(Server.java:326)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)</span><span class="se">\n</span><span 
class="s2">"</span><span class="w">
 </span><span class="p">}</span><span class="w">
-</span></code></pre></div>
+</span></code></pre></div></div>
 
 
 	          </div>
diff --git "a/content/releases/current/Serialization-\050prior-to-0.6.0\051.html" "b/content/releases/current/Serialization-\050prior-to-0.6.0\051.html"
index dab36c9..8b1b245 100644
--- "a/content/releases/current/Serialization-\050prior-to-0.6.0\051.html"
+++ "b/content/releases/current/Serialization-\050prior-to-0.6.0\051.html"
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Tuples can be comprised of objects of any types. Since Storm is a distributed system, it needs to know how to serialize and deserialize objects when they&#39;re passed between tasks. By default Storm can serialize ints, shorts, longs, floats, doubles, bools, bytes, strings, and byte arrays, but if you want to use another type in your tuples, you&#39;ll need to implement a custom serializer.</p>
+<div class="documentation-content"><p>Tuples can be composed of objects of any type. Since Storm is a distributed system, it needs to know how to serialize and deserialize objects when they&#39;re passed between tasks. By default Storm can serialize ints, shorts, longs, floats, doubles, bools, bytes, strings, and byte arrays, but if you want to use another type in your tuples, you&#39;ll need to implement a custom serializer.</p>
 
 <h3 id="dynamic-typing">Dynamic typing</h3>
 
@@ -188,7 +188,7 @@
 <p>Storm provides helpers for registering serializers in a topology config. The <a href="javadocs/backtype/storm/Config.html">Config</a> class has a method called <code>addSerialization</code> that takes in a serializer class to add to the config.</p>
 
 <p>There&#39;s an advanced config called Config.TOPOLOGY_SKIP_MISSING_SERIALIZATIONS. If you set this to true, Storm will ignore any serializations that are registered but do not have their code available on the classpath. Otherwise, Storm will throw errors when it can&#39;t find a serialization. This is useful if you run many topologies on a cluster that each have different serializations, but you want to declare all the serializations across all topologies in the <code>storm.yaml</code> files.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Serialization.html b/content/releases/current/Serialization.html
index b52937a..a79aeed 100644
--- a/content/releases/current/Serialization.html
+++ b/content/releases/current/Serialization.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page is about how the serialization system in Storm works for versions 0.6.0 and onwards. Storm used a different serialization system prior to 0.6.0 which is documented on <a href="Serialization-(prior-to-0.6.0).html">Serialization (prior to 0.6.0)</a>. </p>
+<div class="documentation-content"><p>This page is about how the serialization system in Storm works for versions 0.6.0 and onwards. Storm used a different serialization system prior to 0.6.0 which is documented on <a href="Serialization-(prior-to-0.6.0).html">Serialization (prior to 0.6.0)</a>. </p>
 
 <p>Tuples can be composed of objects of any type. Since Storm is a distributed system, it needs to know how to serialize and deserialize objects when they&#39;re passed between tasks.</p>
 
@@ -200,7 +200,7 @@
 <p>When a topology is submitted, a single set of serializations is chosen to be used by all components in the topology for sending messages. This is done by merging the component-specific serializer registrations with the regular set of serialization registrations. If two components define serializers for the same class, one of the serializers is chosen arbitrarily.</p>
 
 <p>To force a serializer for a particular class if there&#39;s a conflict between two component-specific registrations, just define the serializer you want to use in the topology-specific configuration. The topology-specific configuration has precedence over component-specific configurations for serialization registrations.</p>
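The precedence rule above can be sketched as a dictionary merge in which topology-level registrations override component-level ones. The names here are illustrative, not Storm's internal structures.

```python
# Merge serializer registrations: topology config wins on conflict.

def merge_serializations(component_regs, topology_regs):
    merged = dict(component_regs)   # start from component-specific entries
    merged.update(topology_regs)    # topology-level entries take precedence
    return merged

component_regs = {"com.example.Point": "PointSerializerA"}
topology_regs  = {"com.example.Point": "PointSerializerB"}
print(merge_serializations(component_regs, topology_regs))
# {'com.example.Point': 'PointSerializerB'}
```

So registering the desired serializer in the topology-specific configuration is always sufficient to resolve a conflict between components.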
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Serializers.html b/content/releases/current/Serializers.html
index 200c717..f2d3acb 100644
--- a/content/releases/current/Serializers.html
+++ b/content/releases/current/Serializers.html
@@ -144,10 +144,10 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="https://github.com/rapportive-oss/storm-json">storm-json</a>: Simple JSON serializer for Storm</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Setting-up-a-Storm-cluster.html b/content/releases/current/Setting-up-a-Storm-cluster.html
index 2fcab0c..0592dd3 100644
--- a/content/releases/current/Setting-up-a-Storm-cluster.html
+++ b/content/releases/current/Setting-up-a-Storm-cluster.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page outlines the steps for getting a Storm cluster up and running. If you&#39;re on AWS, you should check out the <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> project. <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> completely automates the provisioning, configuration, and installation of Storm clusters on EC2. It also sets up Ganglia for you so you can monitor CPU, disk, and network usage.</p>
+<div class="documentation-content"><p>This page outlines the steps for getting a Storm cluster up and running. If you&#39;re on AWS, you should check out the <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> project. <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> completely automates the provisioning, configuration, and installation of Storm clusters on EC2. It also sets up Ganglia for you so you can monitor CPU, disk, and network usage.</p>
 
 <p>If you run into difficulties with your Storm cluster, first check for a solution in the <a href="Troubleshooting.html">Troubleshooting</a> page. Otherwise, email the mailing list.</p>
 
@@ -246,7 +246,7 @@
 </ol>
 
 <p>As you can see, running the daemons is very straightforward. The daemons will log to the logs/ directory in wherever you extracted the Storm release.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Setting-up-development-environment.html b/content/releases/current/Setting-up-development-environment.html
index 73bbd95..5e8e70d 100644
--- a/content/releases/current/Setting-up-development-environment.html
+++ b/content/releases/current/Setting-up-development-environment.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page outlines what you need to do to get a Storm development environment set up. In summary, the steps are:</p>
+<div class="documentation-content"><p>This page outlines what you need to do to get a Storm development environment set up. In summary, the steps are:</p>
 
 <ol>
 <li>Download a <a href="..//downloads.html">Storm release</a> , unpack it, and put the unpacked <code>bin/</code> directory on your PATH</li>
@@ -171,7 +171,7 @@
 
 <p>The previous step installed the <code>storm</code> client on your machine, which is used to communicate with remote Storm clusters. Now all you have to do is tell the client which Storm cluster to talk to: put the host address of the master in the <code>~/.storm/storm.yaml</code> file. It should look something like this:</p>
 <div class="highlight"><pre><code class="language-" data-lang="">nimbus.seeds: ["123.45.678.890"]
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
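The `nimbus.seeds` entry shown in the hunk above is just a YAML list of host addresses. As an illustration, here is a small Python sketch that renders such an entry from a list of hosts; the helper name and the host values are made up for the example and are not part of the Storm distribution.

```python
# Hypothetical helper: render a storm.yaml 'nimbus.seeds' line from a list
# of nimbus host addresses, matching the format shown in the docs above.
def render_nimbus_seeds(hosts):
    quoted = ", ".join('"%s"' % h for h in hosts)
    return "nimbus.seeds: [%s]" % quoted

print(render_nimbus_seeds(["10.0.0.1", "10.0.0.2"]))
# nimbus.seeds: ["10.0.0.1", "10.0.0.2"]
```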
diff --git a/content/releases/current/Spout-implementations.html b/content/releases/current/Spout-implementations.html
index 64223b1..ad75ae1 100644
--- a/content/releases/current/Spout-implementations.html
+++ b/content/releases/current/Spout-implementations.html
@@ -144,14 +144,14 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="https://github.com/nathanmarz/storm-kestrel">storm-kestrel</a>: Adapter to use Kestrel as a spout</li>
 <li><a href="https://github.com/rapportive-oss/storm-amqp-spout">storm-amqp-spout</a>: Adapter to use AMQP source as a spout</li>
 <li><a href="https://github.com/ptgoetz/storm-jms">storm-jms</a>: Adapter to use a JMS source as a spout</li>
 <li><a href="https://github.com/sorenmacbeth/storm-redis-pubsub">storm-redis-pubsub</a>: A spout that subscribes to a Redis pubsub stream</li>
 <li><a href="https://github.com/haitaoyao/storm-beanstalkd-spout">storm-beanstalkd-spout</a>: A spout that subscribes to a beanstalkd queue</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/State-checkpointing.html b/content/releases/current/State-checkpointing.html
index 458070b..1425498 100644
--- a/content/releases/current/State-checkpointing.html
+++ b/content/releases/current/State-checkpointing.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="state-support-in-core-storm">State support in core storm</h1>
+<div class="documentation-content"><h1 id="state-support-in-core-storm">State support in core storm</h1>
 
 <p>Storm core has abstractions for bolts to save and retrieve the state of their operations. There is a default in-memory
 state implementation as well as a Redis-backed implementation that provides state persistence.</p>
@@ -419,7 +419,7 @@
 </ul>
 
 <p><code>org.apache.storm:storm-hbase:&lt;storm-version&gt;</code></p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Storm-Scheduler.html b/content/releases/current/Storm-Scheduler.html
index ca72cc0..805fac2 100644
--- a/content/releases/current/Storm-Scheduler.html
+++ b/content/releases/current/Storm-Scheduler.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm now has 4 kinds of built-in schedulers: <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/clj/org/apache/storm/scheduler/DefaultScheduler.clj">DefaultScheduler</a>, <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/clj/org/apache/storm/scheduler/IsolationScheduler.clj">IsolationScheduler</a>, <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/jvm/org/apache/storm/scheduler/multitenant/MultitenantScheduler.java">MultitenantScheduler</a>, <a href="Resource_Aware_Scheduler_overview.html">ResourceAwareScheduler</a>. </p>
+<div class="documentation-content"><p>Storm now has 4 kinds of built-in schedulers: <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/clj/org/apache/storm/scheduler/DefaultScheduler.clj">DefaultScheduler</a>, <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/clj/org/apache/storm/scheduler/IsolationScheduler.clj">IsolationScheduler</a>, <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/jvm/org/apache/storm/scheduler/multitenant/MultitenantScheduler.java">MultitenantScheduler</a>, <a href="Resource_Aware_Scheduler_overview.html">ResourceAwareScheduler</a>. </p>
 
 <h2 id="pluggable-scheduler">Pluggable scheduler</h2>
 
@@ -163,7 +163,7 @@
 <p>Any topologies submitted to the cluster that are not listed there will not be isolated. Note that there is no way for a user of Storm to affect their isolation settings – this is only allowed by the administrator of the cluster (this is very much intentional).</p>
 
 <p>The isolation scheduler solves the multi-tenancy problem – avoiding resource contention between topologies – by providing full isolation between topologies. The intention is that &quot;productionized&quot; topologies should be listed in the isolation config, and test or in-development topologies should not. The remaining machines on the cluster serve the dual role of failover for isolated topologies and for running the non-isolated topologies.</p>
-
+</div>
 
 
 	          </div>
diff --git "a/content/releases/current/Storm-multi-language-protocol-\050versions-0.7.0-and-below\051.html" "b/content/releases/current/Storm-multi-language-protocol-\050versions-0.7.0-and-below\051.html"
index 1c41348..d9df735 100644
--- "a/content/releases/current/Storm-multi-language-protocol-\050versions-0.7.0-and-below\051.html"
+++ "b/content/releases/current/Storm-multi-language-protocol-\050versions-0.7.0-and-below\051.html"
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains the multilang protocol for versions 0.7.0 and below. The protocol changed in version 0.7.1.</p>
+<div class="documentation-content"><p>This page explains the multilang protocol for versions 0.7.0 and below. The protocol changed in version 0.7.1.</p>
 
 <h1 id="storm-multi-language-protocol">Storm Multi-Language Protocol</h1>
 
@@ -253,7 +253,7 @@
 <p>Note: This command is not JSON encoded; it is sent as a simple string.</p>
 
 <p>This lets the parent bolt know that the script has finished processing and is ready for another tuple.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Structure-of-the-codebase.html b/content/releases/current/Structure-of-the-codebase.html
index f095080..ffe035b 100644
--- a/content/releases/current/Structure-of-the-codebase.html
+++ b/content/releases/current/Structure-of-the-codebase.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>There are three distinct layers to Storm&#39;s codebase.</p>
+<div class="documentation-content"><p>There are three distinct layers to Storm&#39;s codebase.</p>
 
 <p>First, Storm was designed from the very beginning to be compatible with multiple languages. Nimbus is a Thrift service and topologies are defined as Thrift structures. The usage of Thrift allows Storm to be used from any language.</p>
 
@@ -287,7 +287,7 @@
 <p><a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/clj/org/apache/storm/util.clj">org.apache.storm.util</a>: Contains generic utility functions used throughout the code base.</p>
 
 <p><a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/clj/org/apache/storm/zookeeper.clj">org.apache.storm.zookeeper</a>: Clojure wrapper around the Zookeeper API that implements some &quot;high-level&quot; stuff like &quot;mkdirs&quot; and &quot;delete-recursive&quot;.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Support-for-non-java-languages.html b/content/releases/current/Support-for-non-java-languages.html
index ab0c42b..e7bce3a 100644
--- a/content/releases/current/Support-for-non-java-languages.html
+++ b/content/releases/current/Support-for-non-java-languages.html
@@ -144,13 +144,13 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="https://github.com/velvia/ScalaStorm">Scala DSL</a></li>
 <li><a href="https://github.com/colinsurprenant/storm-jruby">JRuby DSL</a></li>
 <li><a href="Clojure-DSL.html">Clojure DSL</a></li>
 <li><a href="https://github.com/gphat/io-storm">io-storm</a>: Perl multilang adapter</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Transactional-topologies.html b/content/releases/current/Transactional-topologies.html
index 37b4863..36b65bf 100644
--- a/content/releases/current/Transactional-topologies.html
+++ b/content/releases/current/Transactional-topologies.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p><strong>NOTE</strong>: Transactional topologies have been deprecated -- use the <a href="Trident-tutorial.html">Trident</a> framework instead.</p>
+<div class="documentation-content"><p><strong>NOTE</strong>: Transactional topologies have been deprecated -- use the <a href="Trident-tutorial.html">Trident</a> framework instead.</p>
 
 <hr>
 
@@ -510,7 +510,7 @@
 <li>so it can&#39;t call finishBatch until it has received all tuples from all subscribed components AND it has received the commit stream tuple (for committers). This ensures that it can&#39;t prematurely call finishBatch</li>
 </ul></li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Trident-API-Overview.html b/content/releases/current/Trident-API-Overview.html
index 36dff27..eb5cdf5 100644
--- a/content/releases/current/Trident-API-Overview.html
+++ b/content/releases/current/Trident-API-Overview.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The core data model in Trident is the &quot;Stream&quot;, processed as a series of batches. A stream is partitioned among the nodes in the cluster, and operations applied to a stream are applied in parallel across each partition.</p>
+<div class="documentation-content"><p>The core data model in Trident is the &quot;Stream&quot;, processed as a series of batches. A stream is partitioned among the nodes in the cluster, and operations applied to a stream are applied in parallel across each partition.</p>
 
 <p>There are five kinds of operations in Trident:</p>
 
@@ -669,7 +669,7 @@
 <p>You might be wondering – how do you do something like a &quot;windowed join&quot;, where tuples from one side of the join are joined against the last hour of tuples from the other side of the join?</p>
 
 <p>To do this, you would make use of partitionPersist and stateQuery. The last hour of tuples from one side of the join would be stored and rotated in a source of state, keyed by the join field. Then the stateQuery would do lookups by the join field to perform the &quot;join&quot;.</p>
-
+</div>
 
 
 	          </div>
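The rotating, keyed store behind the "windowed join" idea described above can be sketched in a few lines. This is an illustration in Python, not Trident code: `RotatingJoinStore` is a hypothetical name, and the buckets stand in for the time slices of the hour that would be kept in a real source of state.

```python
from collections import deque

# Keep the last N buckets of tuples keyed by the join field; rotate out the
# oldest bucket as time advances, and answer stateQuery-style lookups by
# gathering matches across every live bucket.
class RotatingJoinStore:
    def __init__(self, num_buckets):
        self.buckets = deque([{} for _ in range(num_buckets)], maxlen=num_buckets)

    def rotate(self):
        # Start a fresh bucket; deque's maxlen drops the oldest one.
        self.buckets.append({})

    def put(self, key, value):
        self.buckets[-1].setdefault(key, []).append(value)

    def lookup(self, key):
        # The "join": collect matching tuples from all live buckets.
        return [v for bucket in self.buckets for v in bucket.get(key, [])]

store = RotatingJoinStore(num_buckets=3)
store.put("user-1", "click")
store.rotate()
store.put("user-1", "view")
print(store.lookup("user-1"))  # ['click', 'view']
```

After two more `rotate()` calls, the bucket holding `"click"` ages out and only `"view"` remains visible to lookups.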
diff --git a/content/releases/current/Trident-RAS-API.html b/content/releases/current/Trident-RAS-API.html
index 428dd6f..d18217c 100644
--- a/content/releases/current/Trident-RAS-API.html
+++ b/content/releases/current/Trident-RAS-API.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="trident-ras-api">Trident RAS API</h2>
+<div class="documentation-content"><h2 id="trident-ras-api">Trident RAS API</h2>
 
 <p>The Trident RAS (Resource Aware Scheduler) API provides a mechanism to allow users to specify the resource consumption of a Trident topology. The API looks exactly like the base RAS API, except that it is called on Trident Streams instead of Bolts and Spouts.</p>
 
@@ -192,7 +192,7 @@
 <p>Resource declarations may be called after any operation. The operations without explicit resources will get the defaults. If you choose to set resources for only some operations, defaults must be declared, or topology submission will fail.
 Resource declarations have the same <em>boundaries</em> as parallelism hints. They don&#39;t cross any groupings, shufflings, or any other kind of repartitioning.
 Resources are declared per operation, but get combined within boundaries.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Trident-spouts.html b/content/releases/current/Trident-spouts.html
index d08a745..e0b736d 100644
--- a/content/releases/current/Trident-spouts.html
+++ b/content/releases/current/Trident-spouts.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="trident-spouts">Trident spouts</h1>
+<div class="documentation-content"><h1 id="trident-spouts">Trident spouts</h1>
 
 <p>Like in the vanilla Storm API, spouts are the source of streams in a Trident topology. On top of the vanilla Storm spouts, Trident exposes additional APIs for more sophisticated spouts.</p>
 
@@ -182,7 +182,7 @@
 </ol>
 
 <p>And, as mentioned at the beginning of this tutorial, you can use regular IRichSpouts as well.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Trident-state.html b/content/releases/current/Trident-state.html
index a174820..2c9e059 100644
--- a/content/releases/current/Trident-state.html
+++ b/content/releases/current/Trident-state.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Trident has first-class abstractions for reading from and writing to stateful sources. The state can either be internal to the topology – e.g., kept in-memory and backed by HDFS – or externally stored in a database like Memcached or Cassandra. There&#39;s no difference in the Trident API for either case.</p>
+<div class="documentation-content"><p>Trident has first-class abstractions for reading from and writing to stateful sources. The state can either be internal to the topology – e.g., kept in-memory and backed by HDFS – or externally stored in a database like Memcached or Cassandra. There&#39;s no difference in the Trident API for either case.</p>
 
 <p>Trident manages state in a fault-tolerant way so that state updates are idempotent in the face of retries and failures. This lets you reason about Trident topologies as if each message were processed exactly-once.</p>
 
@@ -415,7 +415,7 @@
 <p>Finally, Trident provides the <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/jvm/org/apache/storm/trident/state/map/SnapshottableMap.java">SnapshottableMap</a> class that turns a MapState into a Snapshottable object, by storing global aggregations into a fixed key.</p>
 
 <p>Take a look at the implementation of <a href="https://github.com/nathanmarz/trident-memcached/blob/master/src/jvm/trident/memcached/MemcachedState.java">MemcachedState</a> to see how all these utilities can be put together to make a high performance MapState implementation. MemcachedState allows you to choose between opaque transactional, transactional, and non-transactional semantics.</p>
-
+</div>
 
 
 	          </div>
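The opaque-transactional semantics mentioned above rest on one update rule: store each value alongside the txid that wrote it plus the previous value, so a batch replayed with the same txid is recomputed from the previous value rather than double-counted. A minimal sketch of that rule for a counter, in Python for illustration only (the triple layout and function name are assumptions, not Storm's actual storage format):

```python
# stored is a (curr, prev, last_txid) triple or None; delta is the batch's
# contribution to the count. Returns the new triple.
def opaque_update(stored, txid, delta):
    if stored is None:
        return (delta, 0, txid)
    curr, prev, last_txid = stored
    if txid == last_txid:
        # Replay of the same batch: reapply on top of prev, not curr,
        # so retries are idempotent.
        return (prev + delta, prev, txid)
    # New batch: the current value becomes the new prev.
    return (curr + delta, curr, txid)

state = None
state = opaque_update(state, txid=1, delta=5)  # (5, 0, 1)
state = opaque_update(state, txid=1, delta=5)  # replayed batch -> still (5, 0, 1)
state = opaque_update(state, txid=2, delta=3)  # (8, 5, 2)
```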
diff --git a/content/releases/current/Trident-tutorial.html b/content/releases/current/Trident-tutorial.html
index 4403c50..4d2bbbb 100644
--- a/content/releases/current/Trident-tutorial.html
+++ b/content/releases/current/Trident-tutorial.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Trident is a high-level abstraction for doing realtime computing on top of Storm. It allows you to seamlessly intermix high throughput (millions of messages per second), stateful stream processing with low latency distributed querying. If you&#39;re familiar with high level batch processing tools like Pig or Cascading, the concepts of Trident will be very familiar – Trident has joins, aggregations, grouping, functions, and filters. In addition to these, Trident adds primitives for doing stateful, incremental processing on top of any database or persistence store. Trident has consistent, exactly-once semantics, so it is easy to reason about Trident topologies.</p>
+<div class="documentation-content"><p>Trident is a high-level abstraction for doing realtime computing on top of Storm. It allows you to seamlessly intermix high throughput (millions of messages per second), stateful stream processing with low latency distributed querying. If you&#39;re familiar with high level batch processing tools like Pig or Cascading, the concepts of Trident will be very familiar – Trident has joins, aggregations, grouping, functions, and filters. In addition to these, Trident adds primitives for doing stateful, incremental processing on top of any database or persistence store. Trident has consistent, exactly-once semantics, so it is easy to reason about Trident topologies.</p>
 
 <h2 id="illustrative-example">Illustrative example</h2>
 
@@ -356,7 +356,7 @@
 <h2 id="conclusion">Conclusion</h2>
 
 <p>Trident makes realtime computation elegant. You&#39;ve seen how high throughput stream processing, state manipulation, and low-latency querying can be seamlessly intermixed via Trident&#39;s API. Trident lets you express your realtime computations in a natural way while still getting maximal performance.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Troubleshooting.html b/content/releases/current/Troubleshooting.html
index 721c844..8ed7a9b 100644
--- a/content/releases/current/Troubleshooting.html
+++ b/content/releases/current/Troubleshooting.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page lists issues people have run into when using Storm along with their solutions.</p>
+<div class="documentation-content"><p>This page lists issues people have run into when using Storm along with their solutions.</p>
 
 <h3 id="worker-processes-are-crashing-on-startup-with-no-stack-trace">Worker processes are crashing on startup with no stack trace</h3>
 
@@ -279,7 +279,7 @@
 <ul>
 <li>This means that you&#39;re emitting a mutable object as an output tuple. Everything you emit into the output collector must be immutable. What&#39;s happening is that your bolt is modifying the object while it is being serialized to be sent over the network.</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Tutorial.html b/content/releases/current/Tutorial.html
index ecf28c1..45eb3cd 100644
--- a/content/releases/current/Tutorial.html
+++ b/content/releases/current/Tutorial.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>In this tutorial, you&#39;ll learn how to create Storm topologies and deploy them to a Storm cluster. Java will be the main language used, but a few examples will use Python to illustrate Storm&#39;s multi-language capabilities.</p>
+<div class="documentation-content"><p>In this tutorial, you&#39;ll learn how to create Storm topologies and deploy them to a Storm cluster. Java will be the main language used, but a few examples will use Python to illustrate Storm&#39;s multi-language capabilities.</p>
 
 <h2 id="preliminaries">Preliminaries</h2>
 
@@ -428,7 +428,7 @@
 <h2 id="conclusion">Conclusion</h2>
 
 <p>This tutorial gave a broad overview of developing, testing, and deploying Storm topologies. The rest of the documentation dives deeper into all the aspects of using Storm.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/Understanding-the-parallelism-of-a-Storm-topology.html b/content/releases/current/Understanding-the-parallelism-of-a-Storm-topology.html
index d337ef5..b965f89 100644
--- a/content/releases/current/Understanding-the-parallelism-of-a-Storm-topology.html
+++ b/content/releases/current/Understanding-the-parallelism-of-a-Storm-topology.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="what-makes-a-running-topology-worker-processes-executors-and-tasks">What makes a running topology: worker processes, executors and tasks</h2>
+<div class="documentation-content"><h2 id="what-makes-a-running-topology-worker-processes-executors-and-tasks">What makes a running topology: worker processes, executors and tasks</h2>
 
 <p>Storm distinguishes between the following three main entities that are used to actually run a topology in a Storm cluster:</p>
 
@@ -274,7 +274,7 @@
 <li><a href="Tutorial.html">Tutorial</a></li>
 <li><a href="javadocs/">Storm API documentation</a>, most notably the class <code>Config</code></li>
 </ul>
-
+</div>
 
 
 	          </div>
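The relationship between the three entities above can be sketched numerically: the parallelism hint sets the number of executors, an optional task count sets the number of tasks, and tasks are spread evenly over executors. The helper below is an illustration of that arithmetic, not Storm's scheduler code.

```python
# Given a parallelism hint (executors) and an optional task count, return
# how many tasks each executor runs. Defaults to one task per executor.
def tasks_per_executor(parallelism_hint, num_tasks=None):
    executors = parallelism_hint
    tasks = num_tasks if num_tasks is not None else executors
    base, extra = divmod(tasks, executors)
    # Distribute any remainder one task at a time to the first executors.
    return [base + (1 if i < extra else 0) for i in range(executors)]

print(tasks_per_executor(2, num_tasks=4))  # [2, 2]: 2 executors, 2 tasks each
print(tasks_per_executor(3))               # [1, 1, 1]: default, one task each
```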
diff --git a/content/releases/current/Using-non-JVM-languages-with-Storm.html b/content/releases/current/Using-non-JVM-languages-with-Storm.html
index 59f7a38..23253db 100644
--- a/content/releases/current/Using-non-JVM-languages-with-Storm.html
+++ b/content/releases/current/Using-non-JVM-languages-with-Storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li>two pieces: creating topologies and implementing spouts and bolts in other languages</li>
 <li>creating topologies in another language is easy since topologies are just thrift structures (link to storm.thrift)</li>
 <li>implementing spouts and bolts in another language is called a &quot;multilang components&quot; or &quot;shelling&quot;
@@ -198,7 +198,7 @@
 <p>Then you can connect to Nimbus using the Thrift API and submit the topology, passing {uploaded-jar-location} into the submitTopology method. For reference, here&#39;s the submitTopology definition:</p>
 <div class="highlight"><pre><code class="language-" data-lang="">void submitTopology(1: string name, 2: string uploadedJarLocation, 3: string jsonConf, 4: StormTopology topology)
     throws (1: AlreadyAliveException e, 2: InvalidTopologyException ite);
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/current/Windowing.html b/content/releases/current/Windowing.html
index 68428f2..939177f 100644
--- a/content/releases/current/Windowing.html
+++ b/content/releases/current/Windowing.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm core has support for processing a group of tuples that falls within a window. Windows are specified with the 
+<div class="documentation-content"><p>Storm core has support for processing a group of tuples that falls within a window. Windows are specified with the 
 following two parameters:</p>
 
 <ol>
@@ -380,7 +380,7 @@
 
 <p>An example topology <code>SlidingWindowTopology</code> shows how to use the APIs to compute a sliding window sum and a tumbling window 
 average.</p>
-
+</div>
 
 
 	          </div>
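The sliding window sum mentioned above can be sketched with tuple-count-based parameters: a window length of N tuples and a sliding interval of M tuples. This is an illustrative Python rendering of that computation, not the `SlidingWindowTopology` code itself.

```python
# Emit the sum of the last `window_length` tuples every `sliding_interval`
# tuples, mirroring count-based sliding windows.
def sliding_sums(stream, window_length, sliding_interval):
    sums, window = [], []
    for i, x in enumerate(stream, start=1):
        window.append(x)
        window = window[-window_length:]   # keep only the last N tuples
        if i % sliding_interval == 0:
            sums.append(sum(window))       # window slides: emit a sum
    return sums

print(sliding_sums([1, 2, 3, 4, 5, 6], window_length=3, sliding_interval=2))
# [3, 9, 15]
```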
diff --git a/content/releases/current/distcache-blobstore.html b/content/releases/current/distcache-blobstore.html
index 7a03da4..b359881 100644
--- a/content/releases/current/distcache-blobstore.html
+++ b/content/releases/current/distcache-blobstore.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="storm-distributed-cache-api">Storm Distributed Cache API</h1>
+<div class="documentation-content"><h1 id="storm-distributed-cache-api">Storm Distributed Cache API</h1>
 
 <p>The distributed cache feature in storm is used to efficiently distribute files
 (or blobs, which is the equivalent terminology for a file in the distributed
@@ -799,7 +799,7 @@
  2: required string session;
  3: optional i64 data_size;
 }
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/current/dynamic-log-level-settings.html b/content/releases/current/dynamic-log-level-settings.html
index c26d773..82f8a9b 100644
--- a/content/releases/current/dynamic-log-level-settings.html
+++ b/content/releases/current/dynamic-log-level-settings.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>We have added the ability to set log level settings for a running topology using the Storm UI and the Storm CLI. </p>
+<div class="documentation-content"><p>We have added the ability to set log level settings for a running topology using the Storm UI and the Storm CLI. </p>
 
 <p>The log level settings apply the same way as you&#39;d expect from log4j, as all we are doing is telling log4j to set the level of the logger you provide. If you set the log level of a parent logger, the child loggers start using that level (unless the children already have a more restrictive level). A timeout can optionally be provided (except for DEBUG mode, where it&#39;s required in the UI) so that workers reset log levels automatically.</p>
 
@@ -179,7 +179,7 @@
 <p><code>./bin/storm set_log_level my_topology -r ROOT</code></p>
 
 <p>Clears the ROOT logger dynamic log level, resetting it to its original value.</p>
-
+</div>
 
 
 	          </div>
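The parent/child level inheritance described above for log4j can be illustrated with Python's stdlib `logging`, which follows the same rule: a child logger with no explicit level inherits its parent's effective level, and an explicit level on the child wins. The logger names here are made up for the example.

```python
import logging

parent = logging.getLogger("com.myapp")
child = logging.getLogger("com.myapp.worker")

parent.setLevel(logging.WARNING)
# The child has no explicit level, so it inherits WARNING from its parent.
print(child.getEffectiveLevel() == logging.WARNING)  # True

child.setLevel(logging.DEBUG)
# An explicit level on the child overrides the inherited one.
print(child.getEffectiveLevel() == logging.DEBUG)    # True
```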
diff --git a/content/releases/current/dynamic-worker-profiling.html b/content/releases/current/dynamic-worker-profiling.html
index eb939d3..e915903 100644
--- a/content/releases/current/dynamic-worker-profiling.html
+++ b/content/releases/current/dynamic-worker-profiling.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>In multi-tenant mode, storm launches long-running JVMs across cluster without sudo access to user. Self-serving of Java heap-dumps, jstacks and java profiling of these JVMs would improve users&#39; ability to analyze and debug issues when monitoring it actively.</p>
+<div class="documentation-content"><p>In multi-tenant mode, storm launches long-running JVMs across the cluster without sudo access for users. Self-service Java heap dumps, jstacks, and Java profiling of these JVMs improve users&#39; ability to analyze and debug issues while actively monitoring them.</p>
 
 <p>The storm dynamic profiler lets you dynamically take heap dumps, jprofile recordings, or jstacks for a worker JVM running on a stock cluster. It lets users download these dumps from the browser and use their favorite tools to analyze them. The UI component page lists the workers for the component along with action buttons. The logviewer lets you download the dumps generated by these actions. Please see the screenshots for more information.</p>
 
@@ -171,7 +171,7 @@
 <h2 id="configuration">Configuration</h2>
 
 <p>The &quot;worker.profiler.command&quot; can be configured to point to a specific pluggable profiler or heap-dump command. The &quot;worker.profiler.enabled&quot; setting can be disabled if the plugin is not available or the JDK does not support Jprofile flight recording, so that the worker JVM options will not include &quot;worker.profiler.childopts&quot;. To use a different profiler plugin, change these configurations.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/flux.html b/content/releases/current/flux.html
index a3afd83..e43b36a 100644
--- a/content/releases/current/flux.html
+++ b/content/releases/current/flux.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>A framework for creating and deploying Apache Storm streaming computations with less friction.</p>
+<div class="documentation-content"><p>A framework for creating and deploying Apache Storm streaming computations with less friction.</p>
 
 <h2 id="definition">Definition</h2>
 
@@ -908,7 +908,7 @@
   <span class="na">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">org.apache.storm.flux.test.TridentTopologySource"</span>
   <span class="c1"># Flux will look for "getTopology", this will override that.</span>
   <span class="na">methodName</span><span class="pi">:</span> <span class="s2">"</span><span class="s">getTopologyWithDifferentMethodName"</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/current/index.html b/content/releases/current/index.html
index 860b688..93d1cea 100644
--- a/content/releases/current/index.html
+++ b/content/releases/current/index.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<blockquote>
+<div class="documentation-content"><blockquote>
 <h4 id="note">NOTE</h4>
 
 <p>In the latest version, the class packages have been changed from &quot;backtype.storm&quot; to &quot;org.apache.storm&quot;, so topology code compiled with an older version won&#39;t run on Storm 1.0.0 as-is. Backward compatibility is available through the following configuration:</p>
@@ -286,7 +286,7 @@
 <li><a href="Multilang-protocol.html">Multilang protocol</a> (how to provide support for another language)</li>
 <li><a href="Implementation-docs.html">Implementation docs</a></li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/metrics_v2.html b/content/releases/current/metrics_v2.html
index 7e1cba5..47f8f10 100644
--- a/content/releases/current/metrics_v2.html
+++ b/content/releases/current/metrics_v2.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Apache Storm version 1.2 introduces a new metrics system for reporting
+<div class="documentation-content"><p>Apache Storm version 1.2 introduces a new metrics system for reporting
 internal statistics (e.g. acked, failed, emitted, transferred, disruptor queue metrics, etc.) as well as a 
 new API for user defined metrics.</p>
 
@@ -274,7 +274,7 @@
     <span class="kt">boolean</span> <span class="nf">matches</span><span class="o">(</span><span class="n">String</span> <span class="n">name</span><span class="o">,</span> <span class="n">Metric</span> <span class="n">metric</span><span class="o">);</span>
 
 <span class="o">}</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
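The `matches(String name, Metric metric)` contract shown in the hunk above is a simple predicate over metric names. As a hedged illustration (a Python sketch, not the storm-metrics Java API), a prefix-based filter could look like this; the class name and prefixes are invented for the example.

```python
# Report only metrics whose names start with one of the allowed prefixes.
class PrefixMetricFilter:
    def __init__(self, prefixes):
        self.prefixes = tuple(prefixes)

    def matches(self, name, metric=None):
        # 'metric' is unused here; the Java interface also receives the
        # Metric object so a filter could inspect its value or type.
        return name.startswith(self.prefixes)

f = PrefixMetricFilter(["storm.worker.", "storm.topology."])
print(f.matches("storm.worker.executor.emitted"))  # True
print(f.matches("jvm.gc.time"))                    # False
```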
diff --git a/content/releases/current/nimbus-ha-design.html b/content/releases/current/nimbus-ha-design.html
index 7bd56b1..4ee5b46 100644
--- a/content/releases/current/nimbus-ha-design.html
+++ b/content/releases/current/nimbus-ha-design.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="problem-statement">Problem Statement:</h2>
+<div class="documentation-content"><h2 id="problem-statement">Problem Statement:</h2>
 
 <p>Currently the storm master, aka nimbus, is a process that runs on a single machine under supervision. In most cases the 
 nimbus failure is transient and it is restarted by the supervisor. However sometimes when disks fail and networks 
@@ -361,7 +361,7 @@
 <p>Note: Even though all nimbus hosts have watchers on zookeeper to be notified immediately as soon as a new topology is available for code
 download, the callback pretty much never results in code download. In practice we have observed that the desired replication is only achieved once the background-thread runs. 
 So you should expect your topology submission time to be somewhere between 0 and (2 * nimbus.code.sync.freq.secs) for any nimbus.min.replication.count &gt; 1.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/storm-cassandra.html b/content/releases/current/storm-cassandra.html
index d0f47e4..ec5bc9d 100644
--- a/content/releases/current/storm-cassandra.html
+++ b/content/releases/current/storm-cassandra.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="bolt-api-implementation-for-apache-cassandra">Bolt API implementation for Apache Cassandra</h3>
+<div class="documentation-content"><h3 id="bolt-api-implementation-for-apache-cassandra">Bolt API implementation for Apache Cassandra</h3>
 
 <p>This library provides a core storm bolt on top of Apache Cassandra.
 It provides a simple DSL to map a storm <em>Tuple</em> to a Cassandra Query Language <em>Statement</em>.</p>
@@ -373,7 +373,7 @@
         <span class="n">CassandraStateFactory</span> <span class="n">selectWeatherStationStateFactory</span> <span class="o">=</span> <span class="n">getSelectWeatherStationStateFactory</span><span class="o">();</span>
         <span class="n">TridentState</span> <span class="n">selectState</span> <span class="o">=</span> <span class="n">topology</span><span class="o">.</span><span class="na">newStaticState</span><span class="o">(</span><span class="n">selectWeatherStationStateFactory</span><span class="o">);</span>
         <span class="n">stream</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">stateQuery</span><span class="o">(</span><span class="n">selectState</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"weather_station_id"</span><span class="o">),</span> <span class="k">new</span> <span class="n">CassandraQuery</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"name"</span><span class="o">));</span>         
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/current/storm-elasticsearch.html b/content/releases/current/storm-elasticsearch.html
index 9477383..3696122 100644
--- a/content/releases/current/storm-elasticsearch.html
+++ b/content/releases/current/storm-elasticsearch.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="storm-elasticsearch-bolt-trident-state">Storm Elasticsearch Bolt &amp; Trident State</h1>
+<div class="documentation-content"><h1 id="storm-elasticsearch-bolt-trident-state">Storm Elasticsearch Bolt &amp; Trident State</h1>
 
 <p>EsIndexBolt, EsPercolateBolt and EsState allows users to stream data from storm into Elasticsearch directly.
   For detailed description, please refer to the following.</p>
@@ -245,7 +245,7 @@
 <li>Sriharsha Chintalapani (<a href="https://github.com/harshach">@harshach</a>)</li>
 <li>Jungtaek Lim (<a href="https://github.com/HeartSaVioR">@HeartSaVioR</a>)</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/storm-eventhubs.html b/content/releases/current/storm-eventhubs.html
index dd8e158..4f0ac92 100644
--- a/content/releases/current/storm-eventhubs.html
+++ b/content/releases/current/storm-eventhubs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm spout and bolt implementation for Microsoft Azure Eventhubs</p>
+<div class="documentation-content"><p>Storm spout and bolt implementation for Microsoft Azure Eventhubs</p>
 
 <h3 id="build">build</h3>
 <div class="highlight"><pre><code class="language-" data-lang="">mvn clean package
@@ -178,7 +178,7 @@
 
 <h3 id="windows-azure-eventhubs">Windows Azure Eventhubs</h3>
 <div class="highlight"><pre><code class="language-" data-lang="">http://azure.microsoft.com/en-us/services/event-hubs/
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/current/storm-hbase.html b/content/releases/current/storm-hbase.html
index 87e2e25..3cb5653 100644
--- a/content/releases/current/storm-hbase.html
+++ b/content/releases/current/storm-hbase.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm/Trident integration for <a href="https://hbase.apache.org">Apache HBase</a></p>
+<div class="documentation-content"><p>Storm/Trident integration for <a href="https://hbase.apache.org">Apache HBase</a></p>
 
 <h2 id="usage">Usage</h2>
 
@@ -368,7 +368,7 @@
         <span class="o">}</span>
     <span class="o">}</span>
 <span class="o">}</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/current/storm-hdfs.html b/content/releases/current/storm-hdfs.html
index d0c0266..86f3d5c 100644
--- a/content/releases/current/storm-hdfs.html
+++ b/content/releases/current/storm-hdfs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm components for interacting with HDFS file systems</p>
+<div class="documentation-content"><p>Storm components for interacting with HDFS file systems</p>
 
 <h2 id="usage">Usage</h2>
 
@@ -469,7 +469,7 @@
 <p>On worker hosts the bolt/trident-state code will use the keytab file with principal provided in the config to authenticate with 
 Namenode. This method is little dangerous as you need to ensure all workers have the keytab file at the same location and you need
 to remember this as you bring up new hosts in the cluster.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/storm-hive.html b/content/releases/current/storm-hive.html
index c86291b..e78f9e8 100644
--- a/content/releases/current/storm-hive.html
+++ b/content/releases/current/storm-hive.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Hive offers streaming API that allows data to be written continuously into Hive. The incoming data 
+<div class="documentation-content"><p>Hive offers a streaming API that allows data to be written continuously into Hive. The incoming data 
   can be continuously committed in small batches of records into existing Hive partition or table. Once the data
   is committed its immediately visible to all hive queries. More info on Hive Streaming API 
   <a href="https://cwiki.apache.org/confluence/display/Hive/Streaming+Data+Ingest">https://cwiki.apache.org/confluence/display/Hive/Streaming+Data+Ingest</a></p>
@@ -303,7 +303,7 @@
 
    <span class="n">StateFactory</span> <span class="n">factory</span> <span class="o">=</span> <span class="k">new</span> <span class="n">HiveStateFactory</span><span class="o">().</span><span class="na">withOptions</span><span class="o">(</span><span class="n">hiveOptions</span><span class="o">);</span>
    <span class="n">TridentState</span> <span class="n">state</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">partitionPersist</span><span class="o">(</span><span class="n">factory</span><span class="o">,</span> <span class="n">hiveFields</span><span class="o">,</span> <span class="k">new</span> <span class="n">HiveUpdater</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">());</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/current/storm-jdbc.html b/content/releases/current/storm-jdbc.html
index 99f7562..2e0f874 100644
--- a/content/releases/current/storm-jdbc.html
+++ b/content/releases/current/storm-jdbc.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm/Trident integration for JDBC. This package includes the core bolts and trident states that allows a storm topology
+<div class="documentation-content"><p>Storm/Trident integration for JDBC. This package includes the core bolts and trident states that allow a storm topology
 to either insert storm tuples in a database table or to execute select queries against a database and enrich tuples 
 in a storm topology.</p>
 
@@ -399,7 +399,7 @@
 <div class="highlight"><pre><code class="language-" data-lang="">select * from user;
 </code></pre></div>
 <p>For trident you can view <code>org.apache.storm.jdbc.topology.UserPersistanceTridentTopology</code>.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/storm-jms-example.html b/content/releases/current/storm-jms-example.html
index 6a31fda..3920121 100644
--- a/content/releases/current/storm-jms-example.html
+++ b/content/releases/current/storm-jms-example.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="example-storm-jms-topology">Example Storm JMS Topology</h2>
+<div class="documentation-content"><h2 id="example-storm-jms-topology">Example Storm JMS Topology</h2>
 
 <p>The storm-jms source code contains an example project (in the &quot;examples&quot; directory) 
 builds a multi-bolt/multi-spout topology (depicted below) that uses the JMS Spout and JMS Bolt components.</p>
@@ -248,7 +248,7 @@
 DEBUG (backtype.storm.contrib.jms.spout.JmsSpout:251) - JMS Message acked: ID:budreau.home-60117-1321735025796-0:0:1:1:1
 </code></pre></div>
 <p>The topology will run for 2 minutes, then gracefully shut down.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/storm-jms-spring.html b/content/releases/current/storm-jms-spring.html
index 16e54b9..c18c253 100644
--- a/content/releases/current/storm-jms-spring.html
+++ b/content/releases/current/storm-jms-spring.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="connecting-to-jms-using-springs-jms-support">Connecting to JMS Using Spring&#39;s JMS Support</h3>
+<div class="documentation-content"><h3 id="connecting-to-jms-using-springs-jms-support">Connecting to JMS Using Spring&#39;s JMS Support</h3>
 
 <p>Create a Spring applicationContext.xml file that defines one or more destination (topic/queue) beans, as well as a connecton factory.</p>
 <div class="highlight"><pre><code class="language-" data-lang=""><span class="cp">&lt;?xml version="1.0" encoding="UTF-8"?&gt;</span>
@@ -163,7 +163,7 @@
         <span class="na">brokerURL=</span><span class="s">"tcp://localhost:61616"</span> <span class="nt">/&gt;</span>
 
 <span class="nt">&lt;/beans&gt;</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/current/storm-jms.html b/content/releases/current/storm-jms.html
index 887e058..0cd88e6 100644
--- a/content/releases/current/storm-jms.html
+++ b/content/releases/current/storm-jms.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="about-storm-jms">About Storm JMS</h2>
+<div class="documentation-content"><h2 id="about-storm-jms">About Storm JMS</h2>
 
 <p>Storm JMS is a generic framework for integrating JMS messaging within the Storm framework.</p>
 
@@ -169,7 +169,7 @@
 <p><a href="storm-jms-example.html">Example Topology</a></p>
 
 <p><a href="storm-jms-spring.html">Using Spring JMS</a></p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/storm-kafka-client.html b/content/releases/current/storm-kafka-client.html
index 9644458..e71ffa2 100644
--- a/content/releases/current/storm-kafka-client.html
+++ b/content/releases/current/storm-kafka-client.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="storm-apache-kafka-integration-using-the-kafka-client-jar">Storm Apache Kafka integration using the kafka-client jar</h1>
+<div class="documentation-content"><h1 id="storm-apache-kafka-integration-using-the-kafka-client-jar">Storm Apache Kafka integration using the kafka-client jar</h1>
 
 <p>This includes the new Apache Kafka consumer API.</p>
 
@@ -476,7 +476,7 @@
   <span class="o">.</span><span class="na">setTupleTrackingEnforced</span><span class="o">(</span><span class="kc">true</span><span class="o">)</span>
 </code></pre></div>
 <p>Note: This setting has no effect with AT_LEAST_ONCE processing guarantee, where tuple tracking is required and therefore always enabled.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/storm-kafka.html b/content/releases/current/storm-kafka.html
index e08e547..4062063 100644
--- a/content/releases/current/storm-kafka.html
+++ b/content/releases/current/storm-kafka.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Provides core Storm and Trident spout implementations for consuming data from Apache Kafka 0.8.x.</p>
+<div class="documentation-content"><p>Provides core Storm and Trident spout implementations for consuming data from Apache Kafka 0.8.x.</p>
 
 <h2 id="spouts">Spouts</h2>
 
@@ -498,7 +498,7 @@
 <ul>
 <li>P. Taylor Goetz (<a href="mailto:ptgoetz@apache.org">ptgoetz@apache.org</a>)</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/storm-metrics-profiling-internal-actions.html b/content/releases/current/storm-metrics-profiling-internal-actions.html
index 6d977ca..ec4add3 100644
--- a/content/releases/current/storm-metrics-profiling-internal-actions.html
+++ b/content/releases/current/storm-metrics-profiling-internal-actions.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>With the addition of these metrics, Storm users can collect, view, and analyze the performance of various internal actions.  The actions that are profiled include thrift rpc calls and http quests within the storm daemons. For instance, in the Storm Nimbus daemon, the following thrift calls defined in the Nimbus$Iface are profiled:</p>
+<div class="documentation-content"><p>With the addition of these metrics, Storm users can collect, view, and analyze the performance of various internal actions.  The actions that are profiled include thrift rpc calls and http requests within the storm daemons. For instance, in the Storm Nimbus daemon, the following thrift calls defined in the Nimbus$Iface are profiled:</p>
 
 <ul>
 <li>submitTopology</li>
@@ -211,7 +211,7 @@
 <p>For more information about io.dropwizard.metrics and metrics-clojure packages please reference their original documentation:
 - <a href="https://dropwizard.github.io/metrics/3.1.0/">https://dropwizard.github.io/metrics/3.1.0/</a>
 - <a href="http://metrics-clojure.readthedocs.org/en/latest/">http://metrics-clojure.readthedocs.org/en/latest/</a></p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/storm-mongodb.html b/content/releases/current/storm-mongodb.html
index 6deafa6..1a3caee 100644
--- a/content/releases/current/storm-mongodb.html
+++ b/content/releases/current/storm-mongodb.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm/Trident integration for <a href="https://www.mongodb.org/">MongoDB</a>. This package includes the core bolts and trident states that allows a storm topology to either insert storm tuples in a database collection or to execute update queries against a database collection in a storm topology.</p>
+<div class="documentation-content"><p>Storm/Trident integration for <a href="https://www.mongodb.org/">MongoDB</a>. This package includes the core bolts and trident states that allow a storm topology to either insert storm tuples in a database collection or to execute update queries against a database collection in a storm topology.</p>
 
 <h2 id="insert-into-database">Insert into Database</h2>
 
@@ -298,7 +298,7 @@
 
         <span class="c1">//if a new document should be inserted if there are no matches to the query filter</span>
         <span class="c1">//updateBolt.withUpsert(true);</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/current/storm-mqtt.html b/content/releases/current/storm-mqtt.html
index 6de7bf0..2f71f28 100644
--- a/content/releases/current/storm-mqtt.html
+++ b/content/releases/current/storm-mqtt.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="about">About</h2>
+<div class="documentation-content"><h2 id="about">About</h2>
 
 <p>MQTT is a lightweight publish/subscribe protocol frequently used in IoT applications.</p>
 
@@ -483,7 +483,7 @@
 <ul>
 <li>P. Taylor Goetz (<a href="mailto:ptgoetz@apache.org">ptgoetz@apache.org</a>)</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/storm-redis.html b/content/releases/current/storm-redis.html
index 038df9a..cbad490 100644
--- a/content/releases/current/storm-redis.html
+++ b/content/releases/current/storm-redis.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm/Trident integration for <a href="http://redis.io/">Redis</a></p>
+<div class="documentation-content"><p>Storm/Trident integration for <a href="http://redis.io/">Redis</a></p>
 
 <p>Storm-redis uses Jedis for Redis client.</p>
 
@@ -382,7 +382,7 @@
         <span class="n">stream</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">stateQuery</span><span class="o">(</span><span class="n">state</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"word"</span><span class="o">),</span>
                                 <span class="k">new</span> <span class="nf">RedisClusterStateQuerier</span><span class="o">(</span><span class="n">lookupMapper</span><span class="o">),</span>
                                 <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">"columnName"</span><span class="o">,</span><span class="s">"columnValue"</span><span class="o">));</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
diff --git a/content/releases/current/storm-solr.html b/content/releases/current/storm-solr.html
index 65b2527..3f5e133 100644
--- a/content/releases/current/storm-solr.html
+++ b/content/releases/current/storm-solr.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm and Trident integration for Apache Solr. This package includes a bolt and a trident state that enable a Storm topology
+<div class="documentation-content"><p>Storm and Trident integration for Apache Solr. This package includes a bolt and a trident state that enable a Storm topology to
 stream the contents of storm tuples to index Solr collections.</p>
 
 <h1 id="index-storm-tuples-into-a-solr-collection">Index Storm tuples into a Solr collection</h1>
@@ -308,7 +308,7 @@
 <p>You can also see the results by opening the Apache Solr UI and pasting the <code>id</code> pattern in the <code>q</code> textbox in the queries page</p>
 
 <p><a href="http://localhost:8983/solr/#/gettingstarted_shard1_replica2/query">http://localhost:8983/solr/#/gettingstarted_shard1_replica2/query</a></p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/storm-sql-example.html b/content/releases/current/storm-sql-example.html
index 29f249e..280626f 100644
--- a/content/releases/current/storm-sql-example.html
+++ b/content/releases/current/storm-sql-example.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page shows how to use Storm SQL by showing the example of processing Apache logs. 
+<div class="documentation-content"><p>This page shows how to use Storm SQL through an example of processing Apache logs. 
 This page is written by &quot;how-to&quot; style so you can follow the step and learn how to utilize Storm SQL step by step. </p>
 
 <h2 id="preparation">Preparation</h2>
@@ -379,7 +379,7 @@
 (You may noticed that the types of some of output fields are different than output table schema.)</p>
 
 <p>Its behavior is subject to change when Storm SQL changes its backend API to core (tuple by tuple, low-level or high-level) one.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/storm-sql-internal.html b/content/releases/current/storm-sql-internal.html
index 97f809b..959eb6a 100644
--- a/content/releases/current/storm-sql-internal.html
+++ b/content/releases/current/storm-sql-internal.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page describes the design and the implementation of the Storm SQL integration.</p>
+<div class="documentation-content"><p>This page describes the design and the implementation of the Storm SQL integration.</p>
 
 <h2 id="overview">Overview</h2>
 
@@ -195,7 +195,7 @@
 (Use <code>--artifacts</code> if your data source JARs are available in Maven repository since it handles transitive dependencies.)</p>
 
 <p>Please refer <a href="storm-sql.html">Storm SQL integration</a> page to how to do it.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/storm-sql-reference.html b/content/releases/current/storm-sql-reference.html
index 5221649..e26b0e1 100644
--- a/content/releases/current/storm-sql-reference.html
+++ b/content/releases/current/storm-sql-reference.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm SQL uses Apache Calcite to parse and evaluate the SQL statements. 
+<div class="documentation-content"><p>Storm SQL uses Apache Calcite to parse and evaluate SQL statements. 
 Storm SQL also adopts Rex compiler from Calcite, so Storm SQL is expected to handle SQL dialect recognized by Calcite&#39;s default SQL parser. </p>
 
 <p>The page is based on Calcite SQL reference on website, and removes the area Storm SQL doesn&#39;t support, and also adds the area Storm SQL supports.</p>
@@ -2101,7 +2101,7 @@
 
 <p>Also, hdfs configuration files should be provided.
 You can put the <code>core-site.xml</code> and <code>hdfs-site.xml</code> into the <code>conf</code> directory which is in Storm installation directory.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/storm-sql.html b/content/releases/current/storm-sql.html
index 42effbd..a161fc9 100644
--- a/content/releases/current/storm-sql.html
+++ b/content/releases/current/storm-sql.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The Storm SQL integration allows users to run SQL queries over streaming data in Storm. Not only the SQL interface allows faster development cycles on streaming analytics, but also opens up the opportunities to unify batch data processing like <a href="///hive.apache.org">Apache Hive</a> and real-time streaming data analytics.</p>
+<div class="documentation-content"><p>The Storm SQL integration allows users to run SQL queries over streaming data in Storm. Not only does the SQL interface allow faster development cycles on streaming analytics, but it also opens up opportunities to unify batch data processing like <a href="///hive.apache.org">Apache Hive</a> and real-time streaming data analytics.</p>
 
 <p>At a very high level StormSQL compiles the SQL queries to <a href="Trident-API-Overview.html">Trident</a> topologies and executes them in Storm clusters. This document provides information of how to use StormSQL as end users. For people that are interested in more details in the design and the implementation of StormSQL please refer to the <a href="storm-sql-internal.html">this</a> page.</p>
 
@@ -284,7 +284,7 @@
 <li>Windowing is yet to be implemented.</li>
 <li>Aggregation and join are not supported (waiting for <code>Streaming SQL</code> to be matured)</li>
 </ul>
-
+</div>
 
 
 	          </div>
diff --git a/content/releases/current/windows-users-guide.html b/content/releases/current/windows-users-guide.html
index bd83020..752551f 100644
--- a/content/releases/current/windows-users-guide.html
+++ b/content/releases/current/windows-users-guide.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page guides how to set up environment on Windows for Apache Storm.</p>
+<div class="documentation-content"><p>This page describes how to set up an environment on Windows for Apache Storm.</p>
 
 <h2 id="symbolic-link">Symbolic Link</h2>
 
@@ -172,7 +172,7 @@
 on Nimbus and all of the Supervisor nodes.  This will also disable features that require symlinks.  Currently this is only downloading
 dependent blobs, but may change in the future.  Some topologies may rely on symbolic links to resources in the current working directory of the worker that are
 created as a convienence, so it is not a 100% backwards compatible change.</p>
-
+</div>
 
 
 	          </div>
diff --git a/content/talksAndVideos.html b/content/talksAndVideos.html
index fa7736d..88bf847 100644
--- a/content/talksAndVideos.html
+++ b/content/talksAndVideos.html
@@ -142,7 +142,7 @@
 
 <p class="post-meta"></p>
 
-<div class="row">
+<div class="documentation-content"><div class="row">
     <div class="col-md-12"> 
         <div class="resources">
             <ul class="nav nav-tabs" role="tablist">
@@ -566,7 +566,7 @@
         </div>
     </div>
 </div>
-
+</div>
 
 
 	          </div>