Merge branch 'asf-site' of https://github.com/DigitalPebble/storm into pull1130
diff --git a/_data/committers.yml b/_data/committers.yml
index ec9b572..1e5eaf5 100644
--- a/_data/committers.yml
+++ b/_data/committers.yml
@@ -98,4 +98,23 @@
   asfid: seanzhong
   github: clockfly
 
+- name: Matthias J. Sax
+  roles: Committer, PMC
+  asfid: mjsax
+  github: mjsax
+
+- name: Boyang Jerry Peng
+  roles: Committer, PMC
+  asfid: jerrypeng
+  github: jerrypeng
+
+- name: Zhuo Liu
+  roles: Committer, PMC
+  asfid: zhuoliu
+  github: zhuoliu
+
+- name: Haohui Mai
+  roles: Committer, PMC
+  asfid: wheat9
+  github: haohui
 
diff --git a/_posts/2015-11-05-storm0100-released.md b/_posts/2015-11-05-storm0100-released.md
new file mode 100644
index 0000000..2c42820
--- /dev/null
+++ b/_posts/2015-11-05-storm0100-released.md
@@ -0,0 +1,54 @@
+---
+layout: post
+title: Storm 0.10.0 released
+author: P. Taylor Goetz
+---
+
+The Apache Storm community is pleased to announce that version 0.10.0 Stable has been released and is available from [the downloads page](/downloads.html).
+
+This release includes a number of improvements and bug fixes identified in the previous beta release. For a description of the new features included in the 0.10.0 release, please [see the previous announcement of 0.10.0-beta1](/2015/06/15/storm0100-beta-released.html).
+
+
+Thanks
+------
+Special thanks are due to all those who have contributed to Apache Storm -- whether through direct code contributions, documentation, bug reports, or helping other users on the mailing lists. Your efforts are much appreciated.
+
+
+Full Changelog
+---------
+
+ * STORM-1108: Fix NPE in simulated time
+ * STORM-1106: Netty should not limit attempts to reconnect
+ * STORM-1099: Fix worker childopts as arraylist of strings
+ * STORM-1096: Fix some issues with impersonation on the UI
+ * STORM-912: Support SSL on Logviewer
+ * STORM-1094: advance kafka offset when deserializer yields no object
+ * STORM-1066: Specify current directory when supervisor launches a worker
+ * STORM-1012: Shaded everything that was not already shaded
+ * STORM-967: Shaded everything that was not already shaded
+ * STORM-922: Shaded everything that was not already shaded
+ * STORM-1042: Shaded everything that was not already shaded
+ * STORM-1026: Adding external classpath elements does not work
+ * STORM-1055: storm-jdbc README needs fixes and context
+ * STORM-1044: Setting dop to zero does not raise an error
+ * STORM-1050: Topologies with same name run on one cluster
+ * STORM-1005: Supervisor does not get running workers after restart.
+ * STORM-803: Cleanup travis-ci build and logs
+ * STORM-1027: Use overflow buffer for emitting metrics
+ * STORM-1024: log4j changes leaving ${sys:storm.log.dir} under STORM_HOME dir
+ * STORM-944: storm-hive pom.xml has a dependency conflict with calcite
+ * STORM-994: Connection leak between nimbus and supervisors
+ * STORM-1001: Undefined STORM_EXT_CLASSPATH adds '::' to classpath of workers
+ * STORM-977: Incorrect signal (-9) when as-user is true
+ * STORM-843: [storm-redis] Add Javadoc to storm-redis
+ * STORM-866: Use storm.log.dir instead of storm.home in log4j2 config
+ * STORM-810: PartitionManager in storm-kafka should commit latest offset before close
+ * STORM-928: Add sources->streams->fields map to Multi-Lang Handshake
+ * STORM-945: <DefaultRolloverStrategy> element is not a policy, and should not be put in the <Policies> element.
+ * STORM-857: create logs metadata dir when running securely
+ * STORM-793: Made change to logviewer.clj in order to remove the invalid http 500 response
+ * STORM-139: hashCode does not work for byte[]
+ * STORM-860: UI: while topology is transitioned to killed, "Activate" button is enabled but not functioning
+ * STORM-966: ConfigValidation.DoubleValidator doesn't really validate whether the type of the object is a double
+ * STORM-742: Let ShellBolt treat all messages to update heartbeat
+ * STORM-992: A bug in the timer.clj might cause unexpected delay to schedule new event
diff --git a/_posts/2015-11-05-storm096-released.md b/_posts/2015-11-05-storm096-released.md
new file mode 100644
index 0000000..62354e4
--- /dev/null
+++ b/_posts/2015-11-05-storm096-released.md
@@ -0,0 +1,29 @@
+---
+layout: post
+title: Storm 0.9.6 released
+author: P. Taylor Goetz
+---
+
+The Apache Storm community is pleased to announce that version 0.9.6 has been released and is available from [the downloads page](/downloads.html).
+
+This is a maintenance release that includes a number of important bug fixes that improve Storm's stability and fault tolerance. We encourage users of previous versions to upgrade to this latest release.
+
+
+Thanks
+------
+Special thanks are due to all those who have contributed to Apache Storm -- whether through direct code contributions, documentation, bug reports, or helping other users on the mailing lists. Your efforts are much appreciated.
+
+
+Full Changelog
+---------
+
+ * STORM-1027: Use overflow buffer for emitting metrics
+ * STORM-996: netty-unit-tests/test-batch demonstrates out-of-order delivery
+ * STORM-1056: allow supervisor log filename to be configurable via ENV variable
+ * STORM-1051: Netty Client.java's flushMessages produces a NullPointerException
+ * STORM-763: nimbus reassigned worker A to another machine, but other worker's netty client can't connect to the new worker A
+ * STORM-935: Update Disruptor queue version to 2.10.4
+ * STORM-503: Short disruptor queue wait time leads to high CPU usage when idle
+ * STORM-728: Put emitted and transferred stats under correct columns
+ * STORM-643: KafkaUtils repeatedly fetches messages whose offset is out of range
+ * STORM-933: NullPointerException during KafkaSpout deactivation
diff --git a/_site/2012/08/02/storm080-released.html b/_site/2012/08/02/storm080-released.html
index 5025c92..70d0102 100644
--- a/_site/2012/08/02/storm080-released.html
+++ b/_site/2012/08/02/storm080-released.html
@@ -77,7 +77,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -92,6 +92,10 @@
                     <div class="col-md-3">
                         <ul class="news" id="news-list">
                             
+                      		<li><a href="/2015/11/05/storm096-released.html">Storm 0.9.6 released</a></li>
+                    		
+                      		<li><a href="/2015/11/05/storm0100-released.html">Storm 0.10.0 released</a></li>
+                    		
                       		<li><a href="/2015/06/15/storm0100-beta-released.html">Storm 0.10.0 Beta Released</a></li>
                     		
                       		<li><a href="/2015/06/04/storm095-released.html">Storm 0.9.5 released</a></li>
@@ -198,7 +202,7 @@
 
 <p>These may require some tweaking to optimize your topologies, but most likely the default values will work fine for you out of the box. </p>
 
-<h2 id="decreased-zookeeper-load-/-increased-storm-ui-performance">Decreased Zookeeper load / increased Storm UI performance</h2>
+<h2 id="decreased-zookeeper-load-increased-storm-ui-performance">Decreased Zookeeper load / increased Storm UI performance</h2>
 
 <p>Storm sends significantly less traffic to Zookeeper now (on the order of 10x less). Since it also uses so many fewer znodes to store state, the UI is significantly faster as well. </p>
 
diff --git a/_site/2012/09/06/storm081-released.html b/_site/2012/09/06/storm081-released.html
index 063b30e..c1c8fa8 100644
--- a/_site/2012/09/06/storm081-released.html
+++ b/_site/2012/09/06/storm081-released.html
@@ -77,7 +77,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -92,6 +92,10 @@
                     <div class="col-md-3">
                         <ul class="news" id="news-list">
                             
+                      		<li><a href="/2015/11/05/storm096-released.html">Storm 0.9.6 released</a></li>
+                    		
+                      		<li><a href="/2015/11/05/storm0100-released.html">Storm 0.10.0 released</a></li>
+                    		
                       		<li><a href="/2015/06/15/storm0100-beta-released.html">Storm 0.10.0 Beta Released</a></li>
                     		
                       		<li><a href="/2015/06/04/storm095-released.html">Storm 0.9.5 released</a></li>
@@ -150,7 +154,7 @@
                         <div>
                 	        <p>Storm 0.8.1 is now available on the downloads page and in Maven. This release contains many bug fixes as well as a few important new features. These include: </p>
 
-<h2 id="storm&#39;s-unit-testing-facilities-have-been-exposed-via-java">Storm&#39;s unit testing facilities have been exposed via Java</h2>
+<h2 id="storm-39-s-unit-testing-facilities-have-been-exposed-via-java">Storm&#39;s unit testing facilities have been exposed via Java</h2>
 
 <p>This is an extremely powerful API that lets you do things like: 
    a) Easily bring up and tear down local clusters 
diff --git a/_site/2013/01/11/storm082-released.html b/_site/2013/01/11/storm082-released.html
index 1a60b01..7406ba4 100644
--- a/_site/2013/01/11/storm082-released.html
+++ b/_site/2013/01/11/storm082-released.html
@@ -77,7 +77,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -92,6 +92,10 @@
                     <div class="col-md-3">
                         <ul class="news" id="news-list">
                             
+                      		<li><a href="/2015/11/05/storm096-released.html">Storm 0.9.6 released</a></li>
+                    		
+                      		<li><a href="/2015/11/05/storm0100-released.html">Storm 0.10.0 released</a></li>
+                    		
                       		<li><a href="/2015/06/15/storm0100-beta-released.html">Storm 0.10.0 Beta Released</a></li>
                     		
                       		<li><a href="/2015/06/04/storm095-released.html">Storm 0.9.5 released</a></li>
diff --git a/_site/2013/12/08/storm090-released.html b/_site/2013/12/08/storm090-released.html
index 0e9709f..c4f3a30 100644
--- a/_site/2013/12/08/storm090-released.html
+++ b/_site/2013/12/08/storm090-released.html
@@ -77,7 +77,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -92,6 +92,10 @@
                     <div class="col-md-3">
                         <ul class="news" id="news-list">
                             
+                      		<li><a href="/2015/11/05/storm096-released.html">Storm 0.9.6 released</a></li>
+                    		
+                      		<li><a href="/2015/11/05/storm0100-released.html">Storm 0.10.0 released</a></li>
+                    		
                       		<li><a href="/2015/06/15/storm0100-beta-released.html">Storm 0.10.0 Beta Released</a></li>
                     		
                       		<li><a href="/2015/06/04/storm095-released.html">Storm 0.9.5 released</a></li>
@@ -161,7 +165,7 @@
 <p>The Netty transport offers a pure Java alternative that eliminates Storm&#39;s dependency on native libraries. The Netty transport&#39;s performance is up to twice as fast as 0MQ, and it will open the door for authorization and authentication between worker processes. For an in-depth performance comparison of the 0MQ and Netty transports, see <a href="http://yahooeng.tumblr.com/post/64758709722/making-storm-fly-with-netty">this blog post</a> by Storm contributor <a href="https://github.com/revans2">Bobby Evans</a>.</p>
 
 <p>To configure Storm to use the Netty transport simply add the following to your <code>storm.yaml</code> configuration and adjust the values to best suit your use-case:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">storm.messaging.transport: &quot;backtype.storm.messaging.netty.Context&quot;
+<div class="highlight"><pre><code class="language-" data-lang="">storm.messaging.transport: "backtype.storm.messaging.netty.Context"
 storm.messaging.netty.server_worker_threads: 1
 storm.messaging.netty.client_worker_threads: 1
 storm.messaging.netty.buffer_size: 5242880
@@ -178,7 +182,7 @@
 <p>In earlier versions of Storm, viewing worker logs involved determining a worker&#39;s location (host/port), typically through Storm UI, then <code>ssh</code>ing to that host and <code>tail</code>ing the corresponding worker log file. With the new log viewer, you can now easily access a specific worker&#39;s log in a web browser by clicking on a worker&#39;s port number right from Storm UI.</p>
 
 <p>The <code>logviewer</code> daemon runs as a separate process on Storm supervisor nodes. To enable the <code>logviewer</code> run the following command (under supervision) on your cluster&#39;s supervisor nodes:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">$ storm logviewer
+<div class="highlight"><pre><code class="language-" data-lang="">$ storm logviewer
 </code></pre></div>
 <h2 id="improved-windows-support">Improved Windows Support</h2>
 
diff --git a/_site/2014/04/10/storm-logo-contest.html b/_site/2014/04/10/storm-logo-contest.html
index 43bada2..5af8a2f 100644
--- a/_site/2014/04/10/storm-logo-contest.html
+++ b/_site/2014/04/10/storm-logo-contest.html
@@ -77,7 +77,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -92,6 +92,10 @@
                     <div class="col-md-3">
                         <ul class="news" id="news-list">
                             
+                      		<li><a href="/2015/11/05/storm096-released.html">Storm 0.9.6 released</a></li>
+                    		
+                      		<li><a href="/2015/11/05/storm0100-released.html">Storm 0.10.0 released</a></li>
+                    		
                       		<li><a href="/2015/06/15/storm0100-beta-released.html">Storm 0.10.0 Beta Released</a></li>
                     		
                       		<li><a href="/2015/06/04/storm095-released.html">Storm 0.9.5 released</a></li>
diff --git a/_site/2014/06/17/contest-results.html b/_site/2014/06/17/contest-results.html
index 8b694ea..9363c7a 100644
--- a/_site/2014/06/17/contest-results.html
+++ b/_site/2014/06/17/contest-results.html
@@ -77,7 +77,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -92,6 +92,10 @@
                     <div class="col-md-3">
                         <ul class="news" id="news-list">
                             
+                      		<li><a href="/2015/11/05/storm096-released.html">Storm 0.9.6 released</a></li>
+                    		
+                      		<li><a href="/2015/11/05/storm0100-released.html">Storm 0.10.0 released</a></li>
+                    		
                       		<li><a href="/2015/06/15/storm0100-beta-released.html">Storm 0.10.0 Beta Released</a></li>
                     		
                       		<li><a href="/2015/06/04/storm095-released.html">Storm 0.9.5 released</a></li>
diff --git a/_site/2014/06/25/storm092-released.html b/_site/2014/06/25/storm092-released.html
index d53a6c7..4ea77b6 100644
--- a/_site/2014/06/25/storm092-released.html
+++ b/_site/2014/06/25/storm092-released.html
@@ -77,7 +77,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -92,6 +92,10 @@
                     <div class="col-md-3">
                         <ul class="news" id="news-list">
                             
+                      		<li><a href="/2015/11/05/storm096-released.html">Storm 0.9.6 released</a></li>
+                    		
+                      		<li><a href="/2015/11/05/storm0100-released.html">Storm 0.10.0 released</a></li>
+                    		
                       		<li><a href="/2015/06/15/storm0100-beta-released.html">Storm 0.10.0 Beta Released</a></li>
                     		
                       		<li><a href="/2015/06/04/storm095-released.html">Storm 0.9.5 released</a></li>
@@ -183,7 +187,7 @@
 <p>The <code>storm-kafka</code> module can be found in the <code>/external/</code> directory of the source tree and binary distributions. The <code>external</code> area has been set up to contain projects that, while not required by Storm, are often used in conjunction with Storm to integrate with some other technology. Such projects also come with a maintenance commitment from at least one Storm committer to ensure compatibility with Storm&#39;s main codebase as it evolves.</p>
 
 <p>The <code>storm-kafka</code> dependency is available now from Maven Central at the following coordinates:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">groupId: org.apache.storm
+<div class="highlight"><pre><code class="language-" data-lang="">groupId: org.apache.storm
 artifactId: storm-kafka
 version: 0.9.2-incubating
 </code></pre></div>
diff --git a/_site/2014/10/20/storm093-release-candidate.html b/_site/2014/10/20/storm093-release-candidate.html
index 4456ae2..c61db0b 100644
--- a/_site/2014/10/20/storm093-release-candidate.html
+++ b/_site/2014/10/20/storm093-release-candidate.html
@@ -77,7 +77,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -92,6 +92,10 @@
                     <div class="col-md-3">
                         <ul class="news" id="news-list">
                             
+                      		<li><a href="/2015/11/05/storm096-released.html">Storm 0.9.6 released</a></li>
+                    		
+                      		<li><a href="/2015/11/05/storm0100-released.html">Storm 0.10.0 released</a></li>
+                    		
                       		<li><a href="/2015/06/15/storm0100-beta-released.html">Storm 0.10.0 Beta Released</a></li>
                     		
                       		<li><a href="/2015/06/04/storm095-released.html">Storm 0.9.5 released</a></li>
diff --git a/_site/2014/11/25/storm093-released.html b/_site/2014/11/25/storm093-released.html
index ab03b10..1b54e0d 100644
--- a/_site/2014/11/25/storm093-released.html
+++ b/_site/2014/11/25/storm093-released.html
@@ -77,7 +77,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -92,6 +92,10 @@
                     <div class="col-md-3">
                         <ul class="news" id="news-list">
                             
+                      		<li><a href="/2015/11/05/storm096-released.html">Storm 0.9.6 released</a></li>
+                    		
+                      		<li><a href="/2015/11/05/storm0100-released.html">Storm 0.10.0 released</a></li>
+                    		
                       		<li><a href="/2015/06/15/storm0100-beta-released.html">Storm 0.10.0 Beta Released</a></li>
                     		
                       		<li><a href="/2015/06/04/storm095-released.html">Storm 0.9.5 released</a></li>
diff --git a/_site/2015/03/25/storm094-released.html b/_site/2015/03/25/storm094-released.html
index 8e7feea..c36f75f 100644
--- a/_site/2015/03/25/storm094-released.html
+++ b/_site/2015/03/25/storm094-released.html
@@ -77,7 +77,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -92,6 +92,10 @@
                     <div class="col-md-3">
                         <ul class="news" id="news-list">
                             
+                      		<li><a href="/2015/11/05/storm096-released.html">Storm 0.9.6 released</a></li>
+                    		
+                      		<li><a href="/2015/11/05/storm0100-released.html">Storm 0.10.0 released</a></li>
+                    		
                       		<li><a href="/2015/06/15/storm0100-beta-released.html">Storm 0.10.0 Beta Released</a></li>
                     		
                       		<li><a href="/2015/06/04/storm095-released.html">Storm 0.9.5 released</a></li>
diff --git a/_site/2015/06/04/storm095-released.html b/_site/2015/06/04/storm095-released.html
index 91b37dd..3e93401 100644
--- a/_site/2015/06/04/storm095-released.html
+++ b/_site/2015/06/04/storm095-released.html
@@ -77,7 +77,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -92,6 +92,10 @@
                     <div class="col-md-3">
                         <ul class="news" id="news-list">
                             
+                      		<li><a href="/2015/11/05/storm096-released.html">Storm 0.9.6 released</a></li>
+                    		
+                      		<li><a href="/2015/11/05/storm0100-released.html">Storm 0.10.0 released</a></li>
+                    		
                       		<li><a href="/2015/06/15/storm0100-beta-released.html">Storm 0.10.0 Beta Released</a></li>
                     		
                       		<li><a href="/2015/06/04/storm095-released.html">Storm 0.9.5 released</a></li>
diff --git a/_site/2015/06/15/storm0100-beta-released.html b/_site/2015/06/15/storm0100-beta-released.html
index 0a1c8a6..03d7dcf 100644
--- a/_site/2015/06/15/storm0100-beta-released.html
+++ b/_site/2015/06/15/storm0100-beta-released.html
@@ -77,7 +77,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -92,6 +92,10 @@
                     <div class="col-md-3">
                         <ul class="news" id="news-list">
                             
+                      		<li><a href="/2015/11/05/storm096-released.html">Storm 0.9.6 released</a></li>
+                    		
+                      		<li><a href="/2015/11/05/storm0100-released.html">Storm 0.10.0 released</a></li>
+                    		
                       		<li><a href="/2015/06/15/storm0100-beta-released.html">Storm 0.10.0 Beta Released</a></li>
                     		
                       		<li><a href="/2015/06/04/storm095-released.html">Storm 0.9.5 released</a></li>
@@ -152,7 +156,7 @@
 
 <p>Aside from many stability and performance improvements, this release includes a number of important new features, some of which are highlighted below.</p>
 
-<h2 id="secure,-multi-tenant-deployment">Secure, Multi-Tenant Deployment</h2>
+<h2 id="secure-multi-tenant-deployment">Secure, Multi-Tenant Deployment</h2>
 
 <p>Much like the early days of Hadoop, Apache Storm originally evolved in an environment where security was not a high-priority concern. Rather, it was assumed that Storm would be deployed to environments suitably cordoned off from security threats. While a large number of users were comfortable setting up their own security measures for Storm (usually at the Firewall/OS level), this proved a hindrance to broader adoption among larger enterprises where security policies prohibited deployment without specific safeguards.</p>
 
@@ -240,7 +244,7 @@
 
 <p>Further information can be found in the <a href="https://github.com/apache/storm/blob/v0.10.0-beta/external/storm-redis/README.md">storm-redis documentation</a>.</p>
 
-<h2 id="jdbc/rdbms-integration">JDBC/RDBMS Integration</h2>
+<h2 id="jdbc-rdbms-integration">JDBC/RDBMS Integration</h2>
 
 <p>Many stream processing data flows require accessing data from or writing data to a relational data store. Storm 0.10.0 introduces highly flexible and customizable support for integrating with virtually any JDBC-compliant database.</p>
 
diff --git a/_site/2015/11/05/storm0100-released.html b/_site/2015/11/05/storm0100-released.html
new file mode 100644
index 0000000..ecf7878
--- /dev/null
+++ b/_site/2015/11/05/storm0100-released.html
@@ -0,0 +1,279 @@
+<!DOCTYPE html>
+<html>
+
+    <head>
+    <meta charset="utf-8">
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1">
+
+    <link rel="shortcut icon" href="/favicon.ico" type="image/x-icon">
+    <link rel="icon" href="/favicon.ico" type="image/x-icon">
+
+    <title>Storm 0.10.0 released</title>
+
+    <!-- Bootstrap core CSS -->
+    <link href="/assets/css/bootstrap.min.css" rel="stylesheet">
+    <!-- Bootstrap theme -->
+    <link href="/assets/css/bootstrap-theme.min.css" rel="stylesheet">
+
+    <!-- Custom styles for this template -->
+    <link rel="stylesheet" href="http://fortawesome.github.io/Font-Awesome/assets/font-awesome/css/font-awesome.css">
+    <link href="/css/style.css" rel="stylesheet">
+    <link href="/assets/css/owl.theme.css" rel="stylesheet">
+    <link href="/assets/css/owl.carousel.css" rel="stylesheet">
+    <script type="text/javascript" src="/assets/js/jquery.min.js"></script>
+    <script type="text/javascript" src="/assets/js/bootstrap.min.js"></script>
+    <script type="text/javascript" src="/assets/js/owl.carousel.min.js"></script>
+    <script type="text/javascript" src="/assets/js/storm.js"></script>
+    <!-- Just for debugging purposes. Don't actually copy these 2 lines! -->
+    <!--[if lt IE 9]><script src="../../assets/js/ie8-responsive-file-warning.js"></script><![endif]-->
+    
+    <!-- HTML5 shim and Respond.js for IE8 support of HTML5 elements and media queries -->
+    <!--[if lt IE 9]>
+      <script src="https://oss.maxcdn.com/html5shiv/3.7.2/html5shiv.min.js"></script>
+      <script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script>
+    <![endif]-->
+  </head>
+
+
+
+    <body>
+
+    <header>
+  <div class="container-fluid">
+      <div class="row">
+          <div class="col-md-10">
+              <a href="/index.html"><img src="/images/logo.png" class="logo" /></a>
+            </div>
+            <div class="col-md-2">
+              <a href="/downloads.html" class="btn-std btn-block btn-download">Download</a>
+            </div>
+        </div>
+    </div>
+</header>
+<!--Header End-->
+<!--Navigation Begin-->
+<div class="navbar" role="banner">
+  <div class="container-fluid">
+      <div class="navbar-header">
+          <button class="navbar-toggle" type="button" data-toggle="collapse" data-target=".bs-navbar-collapse">
+                <span class="icon-bar"></span>
+                <span class="icon-bar"></span>
+                <span class="icon-bar"></span>
+            </button>
+        </div>
+        <nav class="collapse navbar-collapse bs-navbar-collapse" role="navigation">
+          <ul class="nav navbar-nav">
+              <li><a href="/index.html" id="home">Home</a></li>
+                <li><a href="/getting-help.html" id="getting-help">Getting Help</a></li>
+                <li><a href="/about/integrates.html" id="project-info">Project Information</a></li>
+                <li><a href="/documentation.html" id="documentation">Documentation</a></li>
+                <li><a href="/talksAndVideos.html">Talks and Slideshows</a></li>
+                <li class="dropdown">
+                    <a href="#" class="dropdown-toggle" data-toggle="dropdown" id="contribute">Community <b class="caret"></b></a>
+                    <ul class="dropdown-menu">
+                        <li><a href="/contribute/Contributing-to-Storm.html">Contributing</a></li>
+                        <li><a href="/contribute/People.html">People</a></li>
+                        <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
+                    </ul>
+                </li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
+            </ul>
+        </nav>
+    </div>
+</div>
+
+
+
+    <div class="container-fluid">
+        <div class="row">
+            <div class="col-md-12">
+                <div class="row">
+                    <div class="col-md-3">
+                        <ul class="news" id="news-list">
+                            
+                      		<li><a href="/2015/11/05/storm096-released.html">Storm 0.9.6 released</a></li>
+                    		
+                      		<li><a href="/2015/11/05/storm0100-released.html">Storm 0.10.0 released</a></li>
+                    		
+                      		<li><a href="/2015/06/15/storm0100-beta-released.html">Storm 0.10.0 Beta Released</a></li>
+                    		
+                      		<li><a href="/2015/06/04/storm095-released.html">Storm 0.9.5 released</a></li>
+                    		
+                      		<li><a href="/2015/03/25/storm094-released.html">Storm 0.9.4 released</a></li>
+                    		
+                      		<li><a href="/2014/11/25/storm093-released.html">Storm 0.9.3 released</a></li>
+                    		
+                      		<li><a href="/2014/10/20/storm093-release-candidate.html">Storm 0.9.3 release candidate 1 available</a></li>
+                    		
+                      		<li><a href="/2014/06/25/storm092-released.html">Storm 0.9.2 released</a></li>
+                    		
+                      		<li><a href="/2014/06/17/contest-results.html">Storm Logo Contest Results</a></li>
+                    		
+                      		<li><a href="/2014/04/10/storm-logo-contest.html">Apache Storm Logo Contest</a></li>
+                    		
+                      		<li><a href="/2013/12/08/storm090-released.html">Storm 0.9.0 Released</a></li>
+                    		
+                      		<li><a href="/2013/01/11/storm082-released.html">Storm 0.8.2 released</a></li>
+                    		
+                      		<li><a href="/2012/09/06/storm081-released.html">Storm 0.8.1 released</a></li>
+                    		
+                      		<li><a href="/2012/08/02/storm080-released.html">Storm 0.8.0 and Trident released</a></li>
+                    		
+                        </ul>
+                    </div>
+                    <div class="col-md-9" id="news-content">
+                            <h1 class="page-title">
+                               Storm 0.10.0 released
+                            </h1>
+                                
+                            <div class="row" style="margin: -15px;">
+                                <div class="col-md-12">
+                                    <p class="text-muted credit pull-left">Posted on Nov 5, 2015 by P. Taylor Goetz</p>
+                                    <div class="pull-right">
+                                        <a 
+                                                href="https://twitter.com/share" 
+                                                class="twitter-share-button"
+                                                data-count="none"
+                                        >Tweet</a>
+                                        <script> !function(d,s,id){
+                                                var js,
+                                                fjs=d.getElementsByTagName(s)[0],
+                                                p=/^http:/.test(d.location)?'http':'https';
+                                                if(!d.getElementById(id)){
+                                                    js=d.createElement(s);
+                                                    js.id=id;
+                                                    js.src=p+'://platform.twitter.com/widgets.js';
+                                                    fjs.parentNode.insertBefore(js,fjs);
+                                                }
+                                            }(document, 'script', 'twitter-wjs');
+                                        </script>
+                                    </div>
+                                </div>
+                            </div>
+                        <div>
+                	        <p>The Apache Storm community is pleased to announce that version 0.10.0 Stable has been released and is available from <a href="/downloads.html">the downloads page</a>.</p>
+
+<p>This release includes a number of improvements and bug fixes identified in the previous beta release. For a description of the new features included in the 0.10.0 release, please <a href="/2015/06/15/storm0100-beta-released.html">see the previous announcement of 0.10.0-beta1</a>.</p>
+
+<h2 id="thanks">Thanks</h2>
+
+<p>Special thanks are due to all those who have contributed to Apache Storm -- whether through direct code contributions, documentation, bug reports, or helping other users on the mailing lists. Your efforts are much appreciated.</p>
+
+<h2 id="full-changelog">Full Changelog</h2>
+
+<ul>
+<li>STORM-1108: Fix NPE in simulated time</li>
+<li>STORM-1106: Netty should not limit attempts to reconnect</li>
+<li>STORM-1099: Fix worker childopts as arraylist of strings</li>
+<li>STORM-1096: Fix some issues with impersonation on the UI</li>
+<li>STORM-912: Support SSL on Logviewer</li>
+<li>STORM-1094: advance kafka offset when deserializer yields no object</li>
+<li>STORM-1066: Specify current directory when supervisor launches a worker</li>
+<li>STORM-1012: Shaded everything that was not already shaded</li>
+<li>STORM-967: Shaded everything that was not already shaded</li>
+<li>STORM-922: Shaded everything that was not already shaded</li>
+<li>STORM-1042: Shaded everything that was not already shaded</li>
+<li>STORM-1026: Adding external classpath elements does not work</li>
+<li>STORM-1055: storm-jdbc README needs fixes and context</li>
+<li>STORM-1044: Setting dop to zero does not raise an error</li>
+<li>STORM-1050: Topologies with same name run on one cluster</li>
+<li>STORM-1005: Supervisor does not get running workers after restart.</li>
+<li>STORM-803: Cleanup travis-ci build and logs</li>
+<li>STORM-1027: Use overflow buffer for emitting metrics</li>
+<li>STORM-1024: log4j changes leaving ${sys:storm.log.dir} under STORM_HOME dir</li>
+<li>STORM-944: storm-hive pom.xml has a dependency conflict with calcite</li>
+<li>STORM-994: Connection leak between nimbus and supervisors</li>
+<li>STORM-1001: Undefined STORM_EXT_CLASSPATH adds &#39;::&#39; to classpath of workers</li>
+<li>STORM-977: Incorrect signal (-9) when as-user is true</li>
+<li>STORM-843: [storm-redis] Add Javadoc to storm-redis</li>
+<li>STORM-866: Use storm.log.dir instead of storm.home in log4j2 config</li>
+<li>STORM-810: PartitionManager in storm-kafka should commit latest offset before close</li>
+<li>STORM-928: Add sources-&gt;streams-&gt;fields map to Multi-Lang Handshake</li>
+<li>STORM-945: &lt;DefaultRolloverStrategy&gt; element is not a policy, and should not be put in the &lt;Policies&gt; element.</li>
+<li>STORM-857: create logs metadata dir when running securely</li>
+<li>STORM-793: Made change to logviewer.clj in order to remove the invalid http 500 response</li>
+<li>STORM-139: hashCode does not work for byte[]</li>
+<li>STORM-860: UI: while topology is transitioned to killed, &quot;Activate&quot; button is enabled but not functioning</li>
+<li>STORM-966: ConfigValidation.DoubleValidator doesn&#39;t really validate whether the type of the object is a double</li>
+<li>STORM-742: Let ShellBolt treat all messages to update heartbeat</li>
+<li>STORM-992: A bug in the timer.clj might cause unexpected delay to schedule new event</li>
+</ul>
+
+                	    </div>
+                    </div>
+                </div>
+            </div>
+        </div>
+    </div>
+    <footer>
+    <div class="container-fluid">
+        <div class="row">
+            <div class="col-md-3">
+                <div class="footer-widget">
+                    <h5>Meetups</h5>
+                    <ul class="latest-news">
+                        
+                        <li><a href="http://www.meetup.com/Apache-Storm-Apache-Kafka/">Apache Storm & Apache Kafka</a> <span class="small">(Sunnyvale, CA)</span></li>
+                        
+                        <li><a href="http://www.meetup.com/Apache-Storm-Kafka-Users/">Apache Storm & Kafka Users</a> <span class="small">(Seattle, WA)</span></li>
+                        
+                        <li><a href="http://www.meetup.com/New-York-City-Storm-User-Group/">NYC Storm User Group</a> <span class="small">(New York, NY)</span></li>
+                        
+                        <li><a href="http://www.meetup.com/Bay-Area-Stream-Processing">Bay Area Stream Processing</a> <span class="small">(Emeryville, CA)</span></li>
+                        
+                        <li><a href="http://www.meetup.com/Boston-Storm-Users/">Boston Realtime Data</a> <span class="small">(Boston, MA)</span></li>
+                        
+                        <li><a href="http://www.meetup.com/storm-london">London Storm User Group</a> <span class="small">(London, UK)</span></li>
+                        
+                        <!-- <li><a href="http://www.meetup.com/Apache-Storm-Kafka-Users/">Seattle, WA</a> <span class="small">(27 Jun 2015)</span></li> -->
+                    </ul>
+                </div>
+            </div>
+            <div class="col-md-3">
+                <div class="footer-widget">
+                    <h5>About Storm</h5>
+                    <p>Storm integrates with any queueing system and any database system. Storm's spout abstraction makes it easy to integrate a new queuing system. Likewise, integrating Storm with database systems is easy.</p>
+               </div>
+            </div>
+            <div class="col-md-3">
+                <div class="footer-widget">
+                    <h5>First Look</h5>
+                    <ul class="footer-list">
+                        <li><a href="/documentation/Rationale.html">Rationale</a></li>
+                        <li><a href="/tutorial.html">Tutorial</a></li>
+                        <li><a href="/documentation/Setting-up-development-environment.html">Setting up development environment</a></li>
+                        <li><a href="/documentation/Creating-a-new-Storm-project.html">Creating a new Storm project</a></li>
+                    </ul>
+                </div>
+            </div>
+            <div class="col-md-3">
+                <div class="footer-widget">
+                    <h5>Documentation</h5>
+                    <ul class="footer-list">
+                        <li><a href="/doc-index.html">Index</a></li>
+                        <li><a href="/documentation.html">Manual</a></li>
+                        <li><a href="https://storm.apache.org/javadoc/apidocs/index.html">Javadoc</a></li>
+                        <li><a href="/documentation/FAQ.html">FAQ</a></li>
+                    </ul>
+                </div>
+            </div>
+        </div>
+        <hr/>
+        <div class="row">   
+            <div class="col-md-12">
+                <p align="center">Copyright © 2015 <a href="http://www.apache.org">Apache Software Foundation</a>. All Rights Reserved. 
+                    <br>Apache Storm, Apache, the Apache feather logo, and the Apache Storm project logos are trademarks of The Apache Software Foundation. 
+                    <br>All other marks mentioned may be trademarks or registered trademarks of their respective owners.</p>
+            </div>
+        </div>
+    </div>
+</footer>
+<!--Footer End-->
+<!-- Scroll to top -->
+<span class="totop"><a href="#"><i class="fa fa-angle-up"></i></a></span> 
+
+    </body>
+
+</html>
+
diff --git a/_site/2015/11/05/storm096-released.html b/_site/2015/11/05/storm096-released.html
new file mode 100644
index 0000000..df04bcb
--- /dev/null
+++ b/_site/2015/11/05/storm096-released.html
@@ -0,0 +1,254 @@
+<!DOCTYPE html>
+<html>
+
+    <head>
+    <meta charset="utf-8">
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1">
+
+    <link rel="shortcut icon" href="/favicon.ico" type="image/x-icon">
+    <link rel="icon" href="/favicon.ico" type="image/x-icon">
+
+    <title>Storm 0.9.6 released</title>
+
+    <!-- Bootstrap core CSS -->
+    <link href="/assets/css/bootstrap.min.css" rel="stylesheet">
+    <!-- Bootstrap theme -->
+    <link href="/assets/css/bootstrap-theme.min.css" rel="stylesheet">
+
+    <!-- Custom styles for this template -->
+    <link rel="stylesheet" href="http://fortawesome.github.io/Font-Awesome/assets/font-awesome/css/font-awesome.css">
+    <link href="/css/style.css" rel="stylesheet">
+    <link href="/assets/css/owl.theme.css" rel="stylesheet">
+    <link href="/assets/css/owl.carousel.css" rel="stylesheet">
+    <script type="text/javascript" src="/assets/js/jquery.min.js"></script>
+    <script type="text/javascript" src="/assets/js/bootstrap.min.js"></script>
+    <script type="text/javascript" src="/assets/js/owl.carousel.min.js"></script>
+    <script type="text/javascript" src="/assets/js/storm.js"></script>
+    <!-- Just for debugging purposes. Don't actually copy these 2 lines! -->
+    <!--[if lt IE 9]><script src="../../assets/js/ie8-responsive-file-warning.js"></script><![endif]-->
+    
+    <!-- HTML5 shim and Respond.js for IE8 support of HTML5 elements and media queries -->
+    <!--[if lt IE 9]>
+      <script src="https://oss.maxcdn.com/html5shiv/3.7.2/html5shiv.min.js"></script>
+      <script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script>
+    <![endif]-->
+  </head>
+
+
+
+    <body>
+
+    <header>
+  <div class="container-fluid">
+      <div class="row">
+          <div class="col-md-10">
+              <a href="/index.html"><img src="/images/logo.png" class="logo" /></a>
+            </div>
+            <div class="col-md-2">
+              <a href="/downloads.html" class="btn-std btn-block btn-download">Download</a>
+            </div>
+        </div>
+    </div>
+</header>
+<!--Header End-->
+<!--Navigation Begin-->
+<div class="navbar" role="banner">
+  <div class="container-fluid">
+      <div class="navbar-header">
+          <button class="navbar-toggle" type="button" data-toggle="collapse" data-target=".bs-navbar-collapse">
+                <span class="icon-bar"></span>
+                <span class="icon-bar"></span>
+                <span class="icon-bar"></span>
+            </button>
+        </div>
+        <nav class="collapse navbar-collapse bs-navbar-collapse" role="navigation">
+          <ul class="nav navbar-nav">
+              <li><a href="/index.html" id="home">Home</a></li>
+                <li><a href="/getting-help.html" id="getting-help">Getting Help</a></li>
+                <li><a href="/about/integrates.html" id="project-info">Project Information</a></li>
+                <li><a href="/documentation.html" id="documentation">Documentation</a></li>
+                <li><a href="/talksAndVideos.html">Talks and Slideshows</a></li>
+                <li class="dropdown">
+                    <a href="#" class="dropdown-toggle" data-toggle="dropdown" id="contribute">Community <b class="caret"></b></a>
+                    <ul class="dropdown-menu">
+                        <li><a href="/contribute/Contributing-to-Storm.html">Contributing</a></li>
+                        <li><a href="/contribute/People.html">People</a></li>
+                        <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
+                    </ul>
+                </li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
+            </ul>
+        </nav>
+    </div>
+</div>
+
+
+
+    <div class="container-fluid">
+        <div class="row">
+            <div class="col-md-12">
+                <div class="row">
+                    <div class="col-md-3">
+                        <ul class="news" id="news-list">
+                            
+                      		<li><a href="/2015/11/05/storm096-released.html">Storm 0.9.6 released</a></li>
+                    		
+                      		<li><a href="/2015/11/05/storm0100-released.html">Storm 0.10.0 released</a></li>
+                    		
+                      		<li><a href="/2015/06/15/storm0100-beta-released.html">Storm 0.10.0 Beta Released</a></li>
+                    		
+                      		<li><a href="/2015/06/04/storm095-released.html">Storm 0.9.5 released</a></li>
+                    		
+                      		<li><a href="/2015/03/25/storm094-released.html">Storm 0.9.4 released</a></li>
+                    		
+                      		<li><a href="/2014/11/25/storm093-released.html">Storm 0.9.3 released</a></li>
+                    		
+                      		<li><a href="/2014/10/20/storm093-release-candidate.html">Storm 0.9.3 release candidate 1 available</a></li>
+                    		
+                      		<li><a href="/2014/06/25/storm092-released.html">Storm 0.9.2 released</a></li>
+                    		
+                      		<li><a href="/2014/06/17/contest-results.html">Storm Logo Contest Results</a></li>
+                    		
+                      		<li><a href="/2014/04/10/storm-logo-contest.html">Apache Storm Logo Contest</a></li>
+                    		
+                      		<li><a href="/2013/12/08/storm090-released.html">Storm 0.9.0 Released</a></li>
+                    		
+                      		<li><a href="/2013/01/11/storm082-released.html">Storm 0.8.2 released</a></li>
+                    		
+                      		<li><a href="/2012/09/06/storm081-released.html">Storm 0.8.1 released</a></li>
+                    		
+                      		<li><a href="/2012/08/02/storm080-released.html">Storm 0.8.0 and Trident released</a></li>
+                    		
+                        </ul>
+                    </div>
+                    <div class="col-md-9" id="news-content">
+                            <h1 class="page-title">
+                               Storm 0.9.6 released
+                            </h1>
+                                
+                            <div class="row" style="margin: -15px;">
+                                <div class="col-md-12">
+                                    <p class="text-muted credit pull-left">Posted on Nov 5, 2015 by P. Taylor Goetz</p>
+                                    <div class="pull-right">
+                                        <a 
+                                                href="https://twitter.com/share" 
+                                                class="twitter-share-button"
+                                                data-count="none"
+                                        >Tweet</a>
+                                        <script> !function(d,s,id){
+                                                var js,
+                                                fjs=d.getElementsByTagName(s)[0],
+                                                p=/^http:/.test(d.location)?'http':'https';
+                                                if(!d.getElementById(id)){
+                                                    js=d.createElement(s);
+                                                    js.id=id;
+                                                    js.src=p+'://platform.twitter.com/widgets.js';
+                                                    fjs.parentNode.insertBefore(js,fjs);
+                                                }
+                                            }(document, 'script', 'twitter-wjs');
+                                        </script>
+                                    </div>
+                                </div>
+                            </div>
+                        <div>
+                	        <p>The Apache Storm community is pleased to announce that version 0.9.6 has been released and is available from <a href="/downloads.html">the downloads page</a>.</p>
+
+<p>This is a maintenance release that includes a number of important bug fixes that improve Storm&#39;s stability and fault tolerance. We encourage users of previous versions to upgrade to this latest release.</p>
+
+<h2 id="thanks">Thanks</h2>
+
+<p>Special thanks are due to all those who have contributed to Apache Storm -- whether through direct code contributions, documentation, bug reports, or helping other users on the mailing lists. Your efforts are much appreciated.</p>
+
+<h2 id="full-changelog">Full Changelog</h2>
+
+<ul>
+<li>STORM-1027: Use overflow buffer for emitting metrics</li>
+<li>STORM-996: netty-unit-tests/test-batch demonstrates out-of-order delivery</li>
+<li>STORM-1056: allow supervisor log filename to be configurable via ENV variable</li>
+<li>STORM-1051: Netty Client.java&#39;s flushMessages produces a NullPointerException</li>
+<li>STORM-763: nimbus reassigned worker A to another machine, but other worker&#39;s netty client can&#39;t connect to the new worker A</li>
+<li>STORM-935: Update Disruptor queue version to 2.10.4</li>
+<li>STORM-503: Short disruptor queue wait time leads to high CPU usage when idle</li>
+<li>STORM-728: Put emitted and transferred stats under correct columns</li>
+<li>STORM-643: KafkaUtils repeatedly fetches messages whose offset is out of range</li>
+<li>STORM-933: NullPointerException during KafkaSpout deactivation</li>
+</ul>
+
+                	    </div>
+                    </div>
+                </div>
+            </div>
+        </div>
+    </div>
+    <footer>
+    <div class="container-fluid">
+        <div class="row">
+            <div class="col-md-3">
+                <div class="footer-widget">
+                    <h5>Meetups</h5>
+                    <ul class="latest-news">
+                        
+                        <li><a href="http://www.meetup.com/Apache-Storm-Apache-Kafka/">Apache Storm & Apache Kafka</a> <span class="small">(Sunnyvale, CA)</span></li>
+                        
+                        <li><a href="http://www.meetup.com/Apache-Storm-Kafka-Users/">Apache Storm & Kafka Users</a> <span class="small">(Seattle, WA)</span></li>
+                        
+                        <li><a href="http://www.meetup.com/New-York-City-Storm-User-Group/">NYC Storm User Group</a> <span class="small">(New York, NY)</span></li>
+                        
+                        <li><a href="http://www.meetup.com/Bay-Area-Stream-Processing">Bay Area Stream Processing</a> <span class="small">(Emeryville, CA)</span></li>
+                        
+                        <li><a href="http://www.meetup.com/Boston-Storm-Users/">Boston Realtime Data</a> <span class="small">(Boston, MA)</span></li>
+                        
+                        <li><a href="http://www.meetup.com/storm-london">London Storm User Group</a> <span class="small">(London, UK)</span></li>
+                        
+                        <!-- <li><a href="http://www.meetup.com/Apache-Storm-Kafka-Users/">Seattle, WA</a> <span class="small">(27 Jun 2015)</span></li> -->
+                    </ul>
+                </div>
+            </div>
+            <div class="col-md-3">
+                <div class="footer-widget">
+                    <h5>About Storm</h5>
+                    <p>Storm integrates with any queueing system and any database system. Storm's spout abstraction makes it easy to integrate a new queuing system. Likewise, integrating Storm with database systems is easy.</p>
+               </div>
+            </div>
+            <div class="col-md-3">
+                <div class="footer-widget">
+                    <h5>First Look</h5>
+                    <ul class="footer-list">
+                        <li><a href="/documentation/Rationale.html">Rationale</a></li>
+                        <li><a href="/tutorial.html">Tutorial</a></li>
+                        <li><a href="/documentation/Setting-up-development-environment.html">Setting up development environment</a></li>
+                        <li><a href="/documentation/Creating-a-new-Storm-project.html">Creating a new Storm project</a></li>
+                    </ul>
+                </div>
+            </div>
+            <div class="col-md-3">
+                <div class="footer-widget">
+                    <h5>Documentation</h5>
+                    <ul class="footer-list">
+                        <li><a href="/doc-index.html">Index</a></li>
+                        <li><a href="/documentation.html">Manual</a></li>
+                        <li><a href="https://storm.apache.org/javadoc/apidocs/index.html">Javadoc</a></li>
+                        <li><a href="/documentation/FAQ.html">FAQ</a></li>
+                    </ul>
+                </div>
+            </div>
+        </div>
+        <hr/>
+        <div class="row">   
+            <div class="col-md-12">
+                <p align="center">Copyright © 2015 <a href="http://www.apache.org">Apache Software Foundation</a>. All Rights Reserved. 
+                    <br>Apache Storm, Apache, the Apache feather logo, and the Apache Storm project logos are trademarks of The Apache Software Foundation. 
+                    <br>All other marks mentioned may be trademarks or registered trademarks of their respective owners.</p>
+            </div>
+        </div>
+    </div>
+</footer>
+<!--Footer End-->
+<!-- Scroll to top -->
+<span class="totop"><a href="#"><i class="fa fa-angle-up"></i></a></span> 
+
+    </body>
+
+</html>
+
diff --git a/_site/about.html b/_site/about.html
index 82b33eb..6d13b84 100644
--- a/_site/about.html
+++ b/_site/about.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
diff --git a/_site/about/deployment.html b/_site/about/deployment.html
index 145aa67..b97be14 100644
--- a/_site/about/deployment.html
+++ b/_site/about/deployment.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
diff --git a/_site/about/fault-tolerant.html b/_site/about/fault-tolerant.html
index 9063e3e..d862fa2 100644
--- a/_site/about/fault-tolerant.html
+++ b/_site/about/fault-tolerant.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
diff --git a/_site/about/free-and-open-source.html b/_site/about/free-and-open-source.html
index f516e75..b89bcc4 100644
--- a/_site/about/free-and-open-source.html
+++ b/_site/about/free-and-open-source.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
diff --git a/_site/about/guarantees-data-processing.html b/_site/about/guarantees-data-processing.html
index 263fe69..5f97e63 100644
--- a/_site/about/guarantees-data-processing.html
+++ b/_site/about/guarantees-data-processing.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
diff --git a/_site/about/integrates.html b/_site/about/integrates.html
index 8e3f05b..b6aa303 100644
--- a/_site/about/integrates.html
+++ b/_site/about/integrates.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
diff --git a/_site/about/multi-language.html b/_site/about/multi-language.html
index edd086c..6b6c865 100644
--- a/_site/about/multi-language.html
+++ b/_site/about/multi-language.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
diff --git a/_site/about/scalable.html b/_site/about/scalable.html
index c6299ee..d4bd442 100644
--- a/_site/about/scalable.html
+++ b/_site/about/scalable.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
diff --git a/_site/about/simple-api.html b/_site/about/simple-api.html
index d05ce47..c8521b0 100644
--- a/_site/about/simple-api.html
+++ b/_site/about/simple-api.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
diff --git a/_site/contribute/BYLAWS.html b/_site/contribute/BYLAWS.html
index 6694a13..ffe2f75 100644
--- a/_site/contribute/BYLAWS.html
+++ b/_site/contribute/BYLAWS.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -94,17 +94,17 @@
 
 <p>Apache projects define a set of roles with associated rights and responsibilities. These roles govern what tasks an individual may perform within the project. The roles are defined in the following sections:</p>
 
-<h3 id="users:">Users:</h3>
+<h3 id="users">Users:</h3>
 
 <p>The most important participants in the project are people who use our software. The majority of our developers start out as users and guide their development efforts from the user&#39;s perspective.</p>
 
 <p>Users contribute to the Apache projects by providing feedback to developers in the form of bug reports and feature suggestions. As well, users participate in the Apache community by helping other users on mailing lists and user support forums.</p>
 
-<h3 id="contributors:">Contributors:</h3>
+<h3 id="contributors">Contributors:</h3>
 
 <p>Contributors are all of the volunteers who are contributing time, code, documentation, or resources to the Storm Project. A contributor that makes sustained, welcome contributions to the project may be invited to become a Committer, though the exact timing of such invitations depends on many factors.</p>
 
-<h3 id="committers:">Committers:</h3>
+<h3 id="committers">Committers:</h3>
 
 <p>The project&#39;s Committers are responsible for the project&#39;s technical management. Committers have access to all project source repositories. Committers may cast binding votes on any technical discussion regarding storm.</p>
 
@@ -114,7 +114,7 @@
 
 <p>A Committer who makes a sustained contribution to the project may be invited to become a member of the PMC. The form of contribution is not limited to code. It can also include code review, helping out users on the mailing lists, documentation, testing, etc.</p>
 
-<h3 id="project-management-committee(pmc):">Project Management Committee(PMC):</h3>
+<h3 id="project-management-committee-pmc">Project Management Committee(PMC):</h3>
 
 <p>The PMC is responsible to the board and the ASF for the management and oversight of the Apache Storm codebase. The responsibilities of the PMC include:</p>
 
diff --git a/_site/contribute/Contributing-to-Storm.html b/_site/contribute/Contributing-to-Storm.html
index a191ed2..217e646 100644
--- a/_site/contribute/Contributing-to-Storm.html
+++ b/_site/contribute/Contributing-to-Storm.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -108,6 +108,7 @@
 <li>Open an issue on the <a href="https://issues.apache.org/jira/browse/STORM">JIRA issue tracker</a> if one doesn&#39;t exist already</li>
 <li>Comment on the issue with your plan for implementing the issue. Explain what pieces of the codebase you&#39;re going to touch and how everything is going to fit together.</li>
 <li>Storm committers will iterate with you on the design to make sure you&#39;re on the right track</li>
+<li>Read through the developer documentation in <a href="https://github.com/apache/storm/blob/master/DEVELOPER.md">DEVELOPER.md</a>, which covers how to build Storm, code style, testing, etc.</li>
 <li>Implement your issue, submit a pull request prefixed with the JIRA ID (e.g. &quot;STORM-123: add new feature foo&quot;), and iterate from there.</li>
 </ol>
 
diff --git a/_site/contribute/People.html b/_site/contribute/People.html
index 197e631..10e0b79 100644
--- a/_site/contribute/People.html
+++ b/_site/contribute/People.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -240,6 +240,34 @@
     <td class=""><a href="https://github.com/clockfly">@clockfly</td>
   </tr>
 
+  <tr>
+    <td class="">Matthias J. Sax</td>
+    <td class="">Committer, PMC</td>
+    <td class="">mjsax</td>
+    <td class=""><a href="https://github.com/mjsax">@mjsax</a></td>
+  </tr>
+
+  <tr>
+    <td class="">Boyang Jerry Peng</td>
+    <td class="">Committer, PMC</td>
+    <td class="">jerrypeng</td>
+    <td class=""><a href="https://github.com/jerrypeng">@jerrypeng</a></td>
+  </tr>
+
+  <tr>
+    <td class="">Zhuo Liu</td>
+    <td class="">Committer, PMC</td>
+    <td class="">zhuoliu</td>
+    <td class=""><a href="https://github.com/zhuoliu">@zhuoliu</a></td>
+  </tr>
+
+  <tr>
+    <td class="">Haohui Mai</td>
+    <td class="">Committer, PMC</td>
+    <td class="">wheat9</td>
+    <td class=""><a href="https://github.com/haohui">@haohui</a></td>
+  </tr>
+
 </table>
 
 <h2 id="contributors">Contributors</h2>
@@ -945,6 +973,11 @@
     <td class=""><a href="https://github.com/abhishekagarwal87">@abhishekagarwal87</td>
   </tr>
 
+  <tr>
+    <td class="">Priyank Shah</td>
+    <td class=""><a href="https://github.com/priyank5485">@priyank5485</a></td>
+  </tr>
+
 </table>
 
 
diff --git a/_site/doc-index.html b/_site/doc-index.html
index b28f94b..0525227 100644
--- a/_site/doc-index.html
+++ b/_site/doc-index.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -341,6 +341,14 @@
    <!-- resource-p -->
 
 
+<li><a href="/documentation/storm-sql-internal.html">The Internals of Storm SQL</a></li>
+   <!-- resource-p -->
+
+
+<li><a href="/documentation/storm-sql.html">Storm SQL integration</a></li>
+   <!-- resource-p -->
+
+
 <li><a href="/talksAndVideos.html">Resources</a></li>
    <!-- resource-p -->
 
diff --git a/_site/documentation.html b/_site/documentation.html
index 787cbc4..ddb54bb 100644
--- a/_site/documentation.html
+++ b/_site/documentation.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -133,8 +133,8 @@
                             </ul>
                         </div>
                         <div role="tabpanel" class="tab-pane" id="integration">
-                            <p>The following modules are included in the Apache Storm distribution and are not required for storm to operate, 
-                            but are useful for extending Storm in order to provide additional functionality such as integration with other 
+                            <p>The following modules are included in the Apache Storm distribution and are not required for storm to operate,
+                            but are useful for extending Storm in order to provide additional functionality such as integration with other
                             technologies frequently used in combination with Storm.</p>
                             <ul>
                                 <li><a href="documentation/storm-kafka.html">Kafka</a></li>
@@ -146,6 +146,7 @@
                                 <li><a href="documentation/storm-solr.html">Solr</a></li>
                                 <li><a href="documentation/storm-eventhubs.html">Azure EventHubs</a></li>
                                 <li><a href="documentation/flux.html">Flux</a> (declarative wiring/configuration of Topologies)</li>
+                                <li><a href="documentation/storm-sql.html">SQL</a> (writing topologies in SQL)</li>
                             </ul>
                         </div>
                         <div role="tabpanel" class="tab-pane" id="intermediate">
diff --git a/_site/documentation/Acking-framework-implementation.html b/_site/documentation/Acking-framework-implementation.html
index 2a7336a..6de92ff 100644
--- a/_site/documentation/Acking-framework-implementation.html
+++ b/_site/documentation/Acking-framework-implementation.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -94,7 +94,7 @@
 
 <p>You can read a bit more about the <a href="Guaranteeing-message-processing.html#what-is-storms-reliability-api">reliability mechanism</a> elsewhere on the wiki -- this explains the internal details.</p>
 
-<h3 id="acker-execute()">acker <code>execute()</code></h3>
+<h3 id="acker-execute">acker <code>execute()</code></h3>
 
 <p>The acker is actually a regular bolt, with its <a href="https://github.com/apache/storm/blob/46c3ba7/storm-core/src/clj/backtype/storm/daemon/acker.clj#L36">execute method</a> defined within <code>mk-acker-bolt</code>. When a new tuple tree is born, the spout sends the XORed edge-ids of each tuple recipient, which the acker records in its <code>pending</code> ledger. Every time an executor acks a tuple, the acker receives a partial checksum that is the XOR of the tuple&#39;s own edge-id (clearing it from the ledger) and the edge-id of each downstream tuple the executor emitted (thus entering them into the ledger).</p>
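The XOR bookkeeping described above can be sketched outside of Storm. Below is a hedged illustration in Python (not the actual Clojure acker; `AckerLedger` and its method names are invented for this sketch), showing how a tuple tree's pending value collapses to zero once every edge-id has been XORed in twice: once when the edge is created and once when it is acked.

```python
class AckerLedger:
    """Toy model of the acker's pending ledger (illustration only)."""

    def __init__(self):
        # spout tuple id -> running XOR of all live (unacked) edge ids
        self.pending = {}

    def init_tree(self, root_id, edge_ids):
        # The spout reports the XOR of the edge ids sent to the
        # initial recipients; this seeds the ledger entry.
        acc = 0
        for e in edge_ids:
            acc ^= e
        self.pending[root_id] = acc

    def partial_ack(self, root_id, acked_edge, emitted_edges):
        # An executor's partial checksum: its own edge id (XOR clears
        # it from the ledger) combined with the ids of any downstream
        # edges it emitted (XOR enters them into the ledger).
        acc = acked_edge
        for e in emitted_edges:
            acc ^= e
        self.pending[root_id] ^= acc
        if self.pending[root_id] == 0:
            # Every edge was XORed in exactly twice -> tree fully acked.
            del self.pending[root_id]
            return True
        return False
```

Because XOR is its own inverse, each edge-id cancels itself out after being seen on emit and on ack, so a single 64-bit value per spout tuple suffices regardless of how large the tree grows.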
 
diff --git a/_site/documentation/Clojure-DSL.html b/_site/documentation/Clojure-DSL.html
index 29b4fdc..d7ba55e 100644
--- a/_site/documentation/Clojure-DSL.html
+++ b/_site/documentation/Clojure-DSL.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -107,19 +107,19 @@
 <p>To define a topology, use the <code>topology</code> function. <code>topology</code> takes in two arguments: a map of &quot;spout specs&quot; and a map of &quot;bolt specs&quot;. Each spout and bolt spec wires the code for the component into the topology by specifying things like inputs and parallelism.</p>
 
 <p>Let&#39;s take a look at an example topology definition <a href="https://github.com/apache/storm/blob/master/examples/storm-starter/src/clj/storm/starter/clj/word_count.clj">from the storm-starter project</a>:</p>
-<div class="highlight"><pre><code class="language-clojure" data-lang="clojure"><span class="p">(</span><span class="nf">topology</span>
- <span class="p">{</span><span class="s">&quot;1&quot;</span> <span class="p">(</span><span class="nf">spout-spec</span> <span class="nv">sentence-spout</span><span class="p">)</span>
-  <span class="s">&quot;2&quot;</span> <span class="p">(</span><span class="nf">spout-spec</span> <span class="p">(</span><span class="nf">sentence-spout-parameterized</span>
-                   <span class="p">[</span><span class="s">&quot;the cat jumped over the door&quot;</span>
-                    <span class="s">&quot;greetings from a faraway land&quot;</span><span class="p">])</span>
-                   <span class="ss">:p</span> <span class="mi">2</span><span class="p">)}</span>
- <span class="p">{</span><span class="s">&quot;3&quot;</span> <span class="p">(</span><span class="nf">bolt-spec</span> <span class="p">{</span><span class="s">&quot;1&quot;</span> <span class="ss">:shuffle</span> <span class="s">&quot;2&quot;</span> <span class="ss">:shuffle</span><span class="p">}</span>
-                 <span class="nv">split-sentence</span>
-                 <span class="ss">:p</span> <span class="mi">5</span><span class="p">)</span>
-  <span class="s">&quot;4&quot;</span> <span class="p">(</span><span class="nf">bolt-spec</span> <span class="p">{</span><span class="s">&quot;3&quot;</span> <span class="p">[</span><span class="s">&quot;word&quot;</span><span class="p">]}</span>
-                 <span class="nv">word-count</span>
-                 <span class="ss">:p</span> <span class="mi">6</span><span class="p">)})</span>
-</code></pre></div>
+<div class="highlight"><pre><code class="language-clojure" data-lang="clojure"><span class="p">(</span><span class="nf">topology</span><span class="w">
+ </span><span class="p">{</span><span class="s">"1"</span><span class="w"> </span><span class="p">(</span><span class="nf">spout-spec</span><span class="w"> </span><span class="n">sentence-spout</span><span class="p">)</span><span class="w">
+  </span><span class="s">"2"</span><span class="w"> </span><span class="p">(</span><span class="nf">spout-spec</span><span class="w"> </span><span class="p">(</span><span class="nf">sentence-spout-parameterized</span><span class="w">
+                   </span><span class="p">[</span><span class="s">"the cat jumped over the door"</span><span class="w">
+                    </span><span class="s">"greetings from a faraway land"</span><span class="p">])</span><span class="w">
+                   </span><span class="no">:p</span><span class="w"> </span><span class="mi">2</span><span class="p">)}</span><span class="w">
+ </span><span class="p">{</span><span class="s">"3"</span><span class="w"> </span><span class="p">(</span><span class="nf">bolt-spec</span><span class="w"> </span><span class="p">{</span><span class="s">"1"</span><span class="w"> </span><span class="no">:shuffle</span><span class="w"> </span><span class="s">"2"</span><span class="w"> </span><span class="no">:shuffle</span><span class="p">}</span><span class="w">
+                 </span><span class="n">split-sentence</span><span class="w">
+                 </span><span class="no">:p</span><span class="w"> </span><span class="mi">5</span><span class="p">)</span><span class="w">
+  </span><span class="s">"4"</span><span class="w"> </span><span class="p">(</span><span class="nf">bolt-spec</span><span class="w"> </span><span class="p">{</span><span class="s">"3"</span><span class="w"> </span><span class="p">[</span><span class="s">"word"</span><span class="p">]}</span><span class="w">
+                 </span><span class="n">word-count</span><span class="w">
+                 </span><span class="no">:p</span><span class="w"> </span><span class="mi">6</span><span class="p">)})</span><span class="w">
+</span></code></pre></div>
 <p>The maps of spout and bolt specs are maps from the component id to the corresponding spec. The component ids must be unique across the maps. Just like defining topologies in Java, component ids are used when declaring inputs for bolts in the topology.</p>
 
 <h4 id="spout-spec">spout-spec</h4>
@@ -148,10 +148,10 @@
 </ol>
 
 <p>See <a href="Concepts.html">Concepts</a> for more info on stream groupings. Here&#39;s an example input declaration showcasing the various ways to declare inputs:</p>
-<div class="highlight"><pre><code class="language-clojure" data-lang="clojure"><span class="p">{[</span><span class="s">&quot;2&quot;</span> <span class="s">&quot;1&quot;</span><span class="p">]</span> <span class="ss">:shuffle</span>
- <span class="s">&quot;3&quot;</span> <span class="p">[</span><span class="s">&quot;field1&quot;</span> <span class="s">&quot;field2&quot;</span><span class="p">]</span>
- <span class="p">[</span><span class="s">&quot;4&quot;</span> <span class="s">&quot;2&quot;</span><span class="p">]</span> <span class="ss">:global</span><span class="p">}</span>
-</code></pre></div>
+<div class="highlight"><pre><code class="language-clojure" data-lang="clojure"><span class="p">{[</span><span class="s">"2"</span><span class="w"> </span><span class="s">"1"</span><span class="p">]</span><span class="w"> </span><span class="no">:shuffle</span><span class="w">
+ </span><span class="s">"3"</span><span class="w"> </span><span class="p">[</span><span class="s">"field1"</span><span class="w"> </span><span class="s">"field2"</span><span class="p">]</span><span class="w">
+ </span><span class="p">[</span><span class="s">"4"</span><span class="w"> </span><span class="s">"2"</span><span class="p">]</span><span class="w"> </span><span class="no">:global</span><span class="p">}</span><span class="w">
+</span></code></pre></div>
 <p>This input declaration subscribes to three streams total. It subscribes to stream &quot;1&quot; on component &quot;2&quot; with a shuffle grouping, subscribes to the default stream on component &quot;3&quot; with a fields grouping on the fields &quot;field1&quot; and &quot;field2&quot;, and subscribes to stream &quot;2&quot; on component &quot;4&quot; with a global grouping.</p>
 
 <p>Like <code>spout-spec</code>, the only current supported keyword argument for <code>bolt-spec</code> is <code>:p</code> which specifies the parallelism for the bolt.</p>
@@ -161,12 +161,12 @@
 <p><code>shell-bolt-spec</code> is used for defining bolts that are implemented in a non-JVM language. It takes as arguments the input declaration, the command line program to run, the name of the file implementing the bolt, an output specification, and then the same keyword arguments that <code>bolt-spec</code> accepts.</p>
 
 <p>Here&#39;s an example <code>shell-bolt-spec</code>:</p>
-<div class="highlight"><pre><code class="language-clojure" data-lang="clojure"><span class="p">(</span><span class="nf">shell-bolt-spec</span> <span class="p">{</span><span class="s">&quot;1&quot;</span> <span class="ss">:shuffle</span> <span class="s">&quot;2&quot;</span> <span class="p">[</span><span class="s">&quot;id&quot;</span><span class="p">]}</span>
-                 <span class="s">&quot;python&quot;</span>
-                 <span class="s">&quot;mybolt.py&quot;</span>
-                 <span class="p">[</span><span class="s">&quot;outfield1&quot;</span> <span class="s">&quot;outfield2&quot;</span><span class="p">]</span>
-                 <span class="ss">:p</span> <span class="mi">25</span><span class="p">)</span>
-</code></pre></div>
+<div class="highlight"><pre><code class="language-clojure" data-lang="clojure"><span class="p">(</span><span class="nf">shell-bolt-spec</span><span class="w"> </span><span class="p">{</span><span class="s">"1"</span><span class="w"> </span><span class="no">:shuffle</span><span class="w"> </span><span class="s">"2"</span><span class="w"> </span><span class="p">[</span><span class="s">"id"</span><span class="p">]}</span><span class="w">
+                 </span><span class="s">"python"</span><span class="w">
+                 </span><span class="s">"mybolt.py"</span><span class="w">
+                 </span><span class="p">[</span><span class="s">"outfield1"</span><span class="w"> </span><span class="s">"outfield2"</span><span class="p">]</span><span class="w">
+                 </span><span class="no">:p</span><span class="w"> </span><span class="mi">25</span><span class="p">)</span><span class="w">
+</span></code></pre></div>
 <p>The syntax of output declarations is described in more detail in the <code>defbolt</code> section below. See <a href="Using-non-JVM-languages-with-Storm.html">Using non JVM languages with Storm</a> for more details on how multilang works within Storm.</p>
 
 <h3 id="defbolt">defbolt</h3>
@@ -182,47 +182,47 @@
 <h4 id="simple-bolts">Simple bolts</h4>
 
 <p>Let&#39;s start with the simplest form of <code>defbolt</code>. Here&#39;s an example bolt that splits a tuple containing a sentence into a tuple for each word:</p>
-<div class="highlight"><pre><code class="language-clojure" data-lang="clojure"><span class="p">(</span><span class="nf">defbolt</span> <span class="nv">split-sentence</span> <span class="p">[</span><span class="s">&quot;word&quot;</span><span class="p">]</span> <span class="p">[</span><span class="nv">tuple</span> <span class="nv">collector</span><span class="p">]</span>
-  <span class="p">(</span><span class="k">let </span><span class="p">[</span><span class="nv">words</span> <span class="p">(</span><span class="nf">.split</span> <span class="p">(</span><span class="nf">.getString</span> <span class="nv">tuple</span> <span class="mi">0</span><span class="p">)</span> <span class="s">&quot; &quot;</span><span class="p">)]</span>
-    <span class="p">(</span><span class="nb">doseq </span><span class="p">[</span><span class="nv">w</span> <span class="nv">words</span><span class="p">]</span>
-      <span class="p">(</span><span class="nf">emit-bolt!</span> <span class="nv">collector</span> <span class="p">[</span><span class="nv">w</span><span class="p">]</span> <span class="ss">:anchor</span> <span class="nv">tuple</span><span class="p">))</span>
-    <span class="p">(</span><span class="nf">ack!</span> <span class="nv">collector</span> <span class="nv">tuple</span><span class="p">)</span>
-    <span class="p">))</span>
-</code></pre></div>
+<div class="highlight"><pre><code class="language-clojure" data-lang="clojure"><span class="p">(</span><span class="nf">defbolt</span><span class="w"> </span><span class="n">split-sentence</span><span class="w"> </span><span class="p">[</span><span class="s">"word"</span><span class="p">]</span><span class="w"> </span><span class="p">[</span><span class="n">tuple</span><span class="w"> </span><span class="n">collector</span><span class="p">]</span><span class="w">
+  </span><span class="p">(</span><span class="k">let</span><span class="w"> </span><span class="p">[</span><span class="n">words</span><span class="w"> </span><span class="p">(</span><span class="nf">.split</span><span class="w"> </span><span class="p">(</span><span class="nf">.getString</span><span class="w"> </span><span class="n">tuple</span><span class="w"> </span><span class="mi">0</span><span class="p">)</span><span class="w"> </span><span class="s">" "</span><span class="p">)]</span><span class="w">
+    </span><span class="p">(</span><span class="nb">doseq</span><span class="w"> </span><span class="p">[</span><span class="n">w</span><span class="w"> </span><span class="n">words</span><span class="p">]</span><span class="w">
+      </span><span class="p">(</span><span class="nf">emit-bolt!</span><span class="w"> </span><span class="n">collector</span><span class="w"> </span><span class="p">[</span><span class="n">w</span><span class="p">]</span><span class="w"> </span><span class="no">:anchor</span><span class="w"> </span><span class="n">tuple</span><span class="p">))</span><span class="w">
+    </span><span class="p">(</span><span class="nf">ack!</span><span class="w"> </span><span class="n">collector</span><span class="w"> </span><span class="n">tuple</span><span class="p">)</span><span class="w">
+    </span><span class="p">))</span><span class="w">
+</span></code></pre></div>
 <p>Since the option map is omitted, this is a non-prepared bolt. The DSL simply expects an implementation for the <code>execute</code> method of <code>IRichBolt</code>. The implementation takes two parameters, the tuple and the <code>OutputCollector</code>, and is followed by the body of the <code>execute</code> function. The DSL automatically type-hints the parameters for you so you don&#39;t need to worry about reflection if you use Java interop.</p>
 
 <p>This implementation binds <code>split-sentence</code> to an actual <code>IRichBolt</code> object that you can use in topologies, like so:</p>
-<div class="highlight"><pre><code class="language-clojure" data-lang="clojure"><span class="p">(</span><span class="nf">bolt-spec</span> <span class="p">{</span><span class="s">&quot;1&quot;</span> <span class="ss">:shuffle</span><span class="p">}</span>
-           <span class="nv">split-sentence</span>
-           <span class="ss">:p</span> <span class="mi">5</span><span class="p">)</span>
-</code></pre></div>
+<div class="highlight"><pre><code class="language-clojure" data-lang="clojure"><span class="p">(</span><span class="nf">bolt-spec</span><span class="w"> </span><span class="p">{</span><span class="s">"1"</span><span class="w"> </span><span class="no">:shuffle</span><span class="p">}</span><span class="w">
+           </span><span class="n">split-sentence</span><span class="w">
+           </span><span class="no">:p</span><span class="w"> </span><span class="mi">5</span><span class="p">)</span><span class="w">
+</span></code></pre></div>
 <h4 id="parameterized-bolts">Parameterized bolts</h4>
 
 <p>Many times you want to parameterize your bolts with other arguments. For example, let&#39;s say you wanted to have a bolt that appends a suffix to every input string it receives, and you want that suffix to be set at runtime. You do this with <code>defbolt</code> by including a <code>:params</code> option in the option map, like so:</p>
-<div class="highlight"><pre><code class="language-clojure" data-lang="clojure"><span class="p">(</span><span class="nf">defbolt</span> <span class="nv">suffix-appender</span> <span class="p">[</span><span class="s">&quot;word&quot;</span><span class="p">]</span> <span class="p">{</span><span class="ss">:params</span> <span class="p">[</span><span class="nv">suffix</span><span class="p">]}</span>
-  <span class="p">[</span><span class="nv">tuple</span> <span class="nv">collector</span><span class="p">]</span>
-  <span class="p">(</span><span class="nf">emit-bolt!</span> <span class="nv">collector</span> <span class="p">[(</span><span class="nb">str </span><span class="p">(</span><span class="nf">.getString</span> <span class="nv">tuple</span> <span class="mi">0</span><span class="p">)</span> <span class="nv">suffix</span><span class="p">)]</span> <span class="ss">:anchor</span> <span class="nv">tuple</span><span class="p">)</span>
-  <span class="p">)</span>
-</code></pre></div>
+<div class="highlight"><pre><code class="language-clojure" data-lang="clojure"><span class="p">(</span><span class="nf">defbolt</span><span class="w"> </span><span class="n">suffix-appender</span><span class="w"> </span><span class="p">[</span><span class="s">"word"</span><span class="p">]</span><span class="w"> </span><span class="p">{</span><span class="no">:params</span><span class="w"> </span><span class="p">[</span><span class="n">suffix</span><span class="p">]}</span><span class="w">
+  </span><span class="p">[</span><span class="n">tuple</span><span class="w"> </span><span class="n">collector</span><span class="p">]</span><span class="w">
+  </span><span class="p">(</span><span class="nf">emit-bolt!</span><span class="w"> </span><span class="n">collector</span><span class="w"> </span><span class="p">[(</span><span class="nb">str</span><span class="w"> </span><span class="p">(</span><span class="nf">.getString</span><span class="w"> </span><span class="n">tuple</span><span class="w"> </span><span class="mi">0</span><span class="p">)</span><span class="w"> </span><span class="n">suffix</span><span class="p">)]</span><span class="w"> </span><span class="no">:anchor</span><span class="w"> </span><span class="n">tuple</span><span class="p">)</span><span class="w">
+  </span><span class="p">)</span><span class="w">
+</span></code></pre></div>
 <p>Unlike the previous example, <code>suffix-appender</code> will be bound to a function that returns an <code>IRichBolt</code> rather than be an <code>IRichBolt</code> object directly. This is caused by specifying <code>:params</code> in its option map. So to use <code>suffix-appender</code> in a topology, you would do something like:</p>
-<div class="highlight"><pre><code class="language-clojure" data-lang="clojure"><span class="p">(</span><span class="nf">bolt-spec</span> <span class="p">{</span><span class="s">&quot;1&quot;</span> <span class="ss">:shuffle</span><span class="p">}</span>
-           <span class="p">(</span><span class="nf">suffix-appender</span> <span class="s">&quot;-suffix&quot;</span><span class="p">)</span>
-           <span class="ss">:p</span> <span class="mi">10</span><span class="p">)</span>
-</code></pre></div>
+<div class="highlight"><pre><code class="language-clojure" data-lang="clojure"><span class="p">(</span><span class="nf">bolt-spec</span><span class="w"> </span><span class="p">{</span><span class="s">"1"</span><span class="w"> </span><span class="no">:shuffle</span><span class="p">}</span><span class="w">
+           </span><span class="p">(</span><span class="nf">suffix-appender</span><span class="w"> </span><span class="s">"-suffix"</span><span class="p">)</span><span class="w">
+           </span><span class="no">:p</span><span class="w"> </span><span class="mi">10</span><span class="p">)</span><span class="w">
+</span></code></pre></div>
 <h4 id="prepared-bolts">Prepared bolts</h4>
 
 <p>To do more complex bolts, such as ones that do joins and streaming aggregations, the bolt needs to store state. You can do this by creating a prepared bolt which is specified by including <code>{:prepare true}</code> in the option map. Consider, for example, this bolt that implements word counting:</p>
-<div class="highlight"><pre><code class="language-clojure" data-lang="clojure"><span class="p">(</span><span class="nf">defbolt</span> <span class="nv">word-count</span> <span class="p">[</span><span class="s">&quot;word&quot;</span> <span class="s">&quot;count&quot;</span><span class="p">]</span> <span class="p">{</span><span class="ss">:prepare</span> <span class="nv">true</span><span class="p">}</span>
-  <span class="p">[</span><span class="nv">conf</span> <span class="nv">context</span> <span class="nv">collector</span><span class="p">]</span>
-  <span class="p">(</span><span class="k">let </span><span class="p">[</span><span class="nv">counts</span> <span class="p">(</span><span class="nf">atom</span> <span class="p">{})]</span>
-    <span class="p">(</span><span class="nf">bolt</span>
-     <span class="p">(</span><span class="nf">execute</span> <span class="p">[</span><span class="nv">tuple</span><span class="p">]</span>
-       <span class="p">(</span><span class="k">let </span><span class="p">[</span><span class="nv">word</span> <span class="p">(</span><span class="nf">.getString</span> <span class="nv">tuple</span> <span class="mi">0</span><span class="p">)]</span>
-         <span class="p">(</span><span class="nf">swap!</span> <span class="nv">counts</span> <span class="p">(</span><span class="nb">partial merge-with </span><span class="nv">+</span><span class="p">)</span> <span class="p">{</span><span class="nv">word</span> <span class="mi">1</span><span class="p">})</span>
-         <span class="p">(</span><span class="nf">emit-bolt!</span> <span class="nv">collector</span> <span class="p">[</span><span class="nv">word</span> <span class="p">(</span><span class="o">@</span><span class="nv">counts</span> <span class="nv">word</span><span class="p">)]</span> <span class="ss">:anchor</span> <span class="nv">tuple</span><span class="p">)</span>
-         <span class="p">(</span><span class="nf">ack!</span> <span class="nv">collector</span> <span class="nv">tuple</span><span class="p">)</span>
-         <span class="p">)))))</span>
-</code></pre></div>
+<div class="highlight"><pre><code class="language-clojure" data-lang="clojure"><span class="p">(</span><span class="nf">defbolt</span><span class="w"> </span><span class="n">word-count</span><span class="w"> </span><span class="p">[</span><span class="s">"word"</span><span class="w"> </span><span class="s">"count"</span><span class="p">]</span><span class="w"> </span><span class="p">{</span><span class="no">:prepare</span><span class="w"> </span><span class="n">true</span><span class="p">}</span><span class="w">
+  </span><span class="p">[</span><span class="n">conf</span><span class="w"> </span><span class="n">context</span><span class="w"> </span><span class="n">collector</span><span class="p">]</span><span class="w">
+  </span><span class="p">(</span><span class="k">let</span><span class="w"> </span><span class="p">[</span><span class="n">counts</span><span class="w"> </span><span class="p">(</span><span class="nf">atom</span><span class="w"> </span><span class="p">{})]</span><span class="w">
+    </span><span class="p">(</span><span class="nf">bolt</span><span class="w">
+     </span><span class="p">(</span><span class="nf">execute</span><span class="w"> </span><span class="p">[</span><span class="n">tuple</span><span class="p">]</span><span class="w">
+       </span><span class="p">(</span><span class="k">let</span><span class="w"> </span><span class="p">[</span><span class="n">word</span><span class="w"> </span><span class="p">(</span><span class="nf">.getString</span><span class="w"> </span><span class="n">tuple</span><span class="w"> </span><span class="mi">0</span><span class="p">)]</span><span class="w">
+         </span><span class="p">(</span><span class="nf">swap!</span><span class="w"> </span><span class="n">counts</span><span class="w"> </span><span class="p">(</span><span class="nb">partial</span><span class="w"> </span><span class="nb">merge-with</span><span class="w"> </span><span class="nb">+</span><span class="p">)</span><span class="w"> </span><span class="p">{</span><span class="n">word</span><span class="w"> </span><span class="mi">1</span><span class="p">})</span><span class="w">
+         </span><span class="p">(</span><span class="nf">emit-bolt!</span><span class="w"> </span><span class="n">collector</span><span class="w"> </span><span class="p">[</span><span class="n">word</span><span class="w"> </span><span class="p">(</span><span class="err">@</span><span class="n">counts</span><span class="w"> </span><span class="n">word</span><span class="p">)]</span><span class="w"> </span><span class="no">:anchor</span><span class="w"> </span><span class="n">tuple</span><span class="p">)</span><span class="w">
+         </span><span class="p">(</span><span class="nf">ack!</span><span class="w"> </span><span class="n">collector</span><span class="w"> </span><span class="n">tuple</span><span class="p">)</span><span class="w">
+         </span><span class="p">)))))</span><span class="w">
+</span></code></pre></div>
 <p>The implementation for a prepared bolt is a function that takes as input the topology config, <code>TopologyContext</code>, and <code>OutputCollector</code>, and returns an implementation of the <code>IBolt</code> interface. This design allows you to have a closure around the implementation of <code>execute</code> and <code>cleanup</code>. </p>
 
 <p>In this example, the word counts are stored in the closure in a map called <code>counts</code>. The <code>bolt</code> macro is used to create the <code>IBolt</code> implementation. The <code>bolt</code> macro is a more concise way to implement the interface than reifying, and it automatically type-hints all of the method parameters. This bolt implements the execute method which updates the count in the map and emits the new word count.</p>
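The closure-over-mutable-state pattern that the `bolt` macro enables can be sketched outside Clojure as well. Here is a minimal Python analogue (purely illustrative, not a Storm API; `emit` is a hypothetical stand-in for the `OutputCollector`):

```python
# Sketch of the word-count bolt pattern: state lives in a closure, and
# each input tuple updates the count and emits the new total.
# `emit` is a hypothetical stand-in for Storm's OutputCollector.
def make_word_count_bolt(emit):
    counts = {}  # plays the role of the (atom {}) in the Clojure example

    def execute(word):
        counts[word] = counts.get(word, 0) + 1
        emit((word, counts[word]))  # emit the updated [word count] tuple

    return execute
```

Because `counts` is captured by the closure, each bolt instance keeps its own running totals, just as each task gets its own atom in the Clojure version.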
@@ -234,18 +234,18 @@
 <h4 id="output-declarations">Output declarations</h4>
 
 <p>The Clojure DSL has a concise syntax for declaring the outputs of a bolt. The most general way to declare the outputs is as a map from stream id to a stream spec. For example:</p>
-<div class="highlight"><pre><code class="language-clojure" data-lang="clojure"><span class="p">{</span><span class="s">&quot;1&quot;</span> <span class="p">[</span><span class="s">&quot;field1&quot;</span> <span class="s">&quot;field2&quot;</span><span class="p">]</span>
- <span class="s">&quot;2&quot;</span> <span class="p">(</span><span class="nf">direct-stream</span> <span class="p">[</span><span class="s">&quot;f1&quot;</span> <span class="s">&quot;f2&quot;</span> <span class="s">&quot;f3&quot;</span><span class="p">])</span>
- <span class="s">&quot;3&quot;</span> <span class="p">[</span><span class="s">&quot;f1&quot;</span><span class="p">]}</span>
-</code></pre></div>
+<div class="highlight"><pre><code class="language-clojure" data-lang="clojure"><span class="p">{</span><span class="s">"1"</span><span class="w"> </span><span class="p">[</span><span class="s">"field1"</span><span class="w"> </span><span class="s">"field2"</span><span class="p">]</span><span class="w">
+ </span><span class="s">"2"</span><span class="w"> </span><span class="p">(</span><span class="nf">direct-stream</span><span class="w"> </span><span class="p">[</span><span class="s">"f1"</span><span class="w"> </span><span class="s">"f2"</span><span class="w"> </span><span class="s">"f3"</span><span class="p">])</span><span class="w">
+ </span><span class="s">"3"</span><span class="w"> </span><span class="p">[</span><span class="s">"f1"</span><span class="p">]}</span><span class="w">
+</span></code></pre></div>
 <p>The stream id is a string, while the stream spec is either a vector of fields or a vector of fields wrapped by <code>direct-stream</code>. <code>direct-stream</code> marks the stream as a direct stream (See <a href="Concepts.html">Concepts</a> and <a href="">Direct groupings</a> for more details on direct streams).</p>
 
 <p>If the bolt only has one output stream, you can define the default stream of the bolt by using a vector instead of a map for the output declaration. For example:</p>
-<div class="highlight"><pre><code class="language-clojure" data-lang="clojure"><span class="p">[</span><span class="s">&quot;word&quot;</span> <span class="s">&quot;count&quot;</span><span class="p">]</span>
-</code></pre></div>
+<div class="highlight"><pre><code class="language-clojure" data-lang="clojure"><span class="p">[</span><span class="s">"word"</span><span class="w"> </span><span class="s">"count"</span><span class="p">]</span><span class="w">
+</span></code></pre></div>
 <p>This declares the output of the bolt as the fields [&quot;word&quot; &quot;count&quot;] on the default stream id.</p>
 
-<h4 id="emitting,-acking,-and-failing">Emitting, acking, and failing</h4>
+<h4 id="emitting-acking-and-failing">Emitting, acking, and failing</h4>
 
 <p>Rather than use the Java methods on <code>OutputCollector</code> directly, the DSL provides a nicer set of functions for using <code>OutputCollector</code>: <code>emit-bolt!</code>, <code>emit-direct-bolt!</code>, <code>ack!</code>, and <code>fail!</code>.</p>
 
@@ -269,23 +269,23 @@
 <p>If you leave out the option map, it defaults to {:prepare true}. The output declaration for <code>defspout</code> has the same syntax as <code>defbolt</code>.</p>
 
 <p>Here&#39;s an example <code>defspout</code> implementation from <a href="https://github.com/apache/storm/blob/master/examples/storm-starter/src/clj/storm/starter/clj/word_count.clj">storm-starter</a>:</p>
-<div class="highlight"><pre><code class="language-clojure" data-lang="clojure"><span class="p">(</span><span class="nf">defspout</span> <span class="nv">sentence-spout</span> <span class="p">[</span><span class="s">&quot;sentence&quot;</span><span class="p">]</span>
-  <span class="p">[</span><span class="nv">conf</span> <span class="nv">context</span> <span class="nv">collector</span><span class="p">]</span>
-  <span class="p">(</span><span class="k">let </span><span class="p">[</span><span class="nv">sentences</span> <span class="p">[</span><span class="s">&quot;a little brown dog&quot;</span>
-                   <span class="s">&quot;the man petted the dog&quot;</span>
-                   <span class="s">&quot;four score and seven years ago&quot;</span>
-                   <span class="s">&quot;an apple a day keeps the doctor away&quot;</span><span class="p">]]</span>
-    <span class="p">(</span><span class="nf">spout</span>
-     <span class="p">(</span><span class="nf">nextTuple</span> <span class="p">[]</span>
-       <span class="p">(</span><span class="nf">Thread/sleep</span> <span class="mi">100</span><span class="p">)</span>
-       <span class="p">(</span><span class="nf">emit-spout!</span> <span class="nv">collector</span> <span class="p">[(</span><span class="nf">rand-nth</span> <span class="nv">sentences</span><span class="p">)])</span>         
-       <span class="p">)</span>
-     <span class="p">(</span><span class="nf">ack</span> <span class="p">[</span><span class="nv">id</span><span class="p">]</span>
-        <span class="c1">;; You only need to define this method for reliable spouts</span>
-        <span class="c1">;; (such as one that reads off of a queue like Kestrel)</span>
-        <span class="c1">;; This is an unreliable spout, so it does nothing here</span>
-        <span class="p">))))</span>
-</code></pre></div>
+<div class="highlight"><pre><code class="language-clojure" data-lang="clojure"><span class="p">(</span><span class="nf">defspout</span><span class="w"> </span><span class="n">sentence-spout</span><span class="w"> </span><span class="p">[</span><span class="s">"sentence"</span><span class="p">]</span><span class="w">
+  </span><span class="p">[</span><span class="n">conf</span><span class="w"> </span><span class="n">context</span><span class="w"> </span><span class="n">collector</span><span class="p">]</span><span class="w">
+  </span><span class="p">(</span><span class="k">let</span><span class="w"> </span><span class="p">[</span><span class="n">sentences</span><span class="w"> </span><span class="p">[</span><span class="s">"a little brown dog"</span><span class="w">
+                   </span><span class="s">"the man petted the dog"</span><span class="w">
+                   </span><span class="s">"four score and seven years ago"</span><span class="w">
+                   </span><span class="s">"an apple a day keeps the doctor away"</span><span class="p">]]</span><span class="w">
+    </span><span class="p">(</span><span class="nf">spout</span><span class="w">
+     </span><span class="p">(</span><span class="nf">nextTuple</span><span class="w"> </span><span class="p">[]</span><span class="w">
+       </span><span class="p">(</span><span class="nf">Thread/sleep</span><span class="w"> </span><span class="mi">100</span><span class="p">)</span><span class="w">
+       </span><span class="p">(</span><span class="nf">emit-spout!</span><span class="w"> </span><span class="n">collector</span><span class="w"> </span><span class="p">[(</span><span class="nf">rand-nth</span><span class="w"> </span><span class="n">sentences</span><span class="p">)])</span><span class="w">         
+       </span><span class="p">)</span><span class="w">
+     </span><span class="p">(</span><span class="nf">ack</span><span class="w"> </span><span class="p">[</span><span class="n">id</span><span class="p">]</span><span class="w">
+        </span><span class="c1">;; You only need to define this method for reliable spouts
+</span><span class="w">        </span><span class="c1">;; (such as one that reads off of a queue like Kestrel)
+</span><span class="w">        </span><span class="c1">;; This is an unreliable spout, so it does nothing here
+</span><span class="w">        </span><span class="p">))))</span><span class="w">
+</span></code></pre></div>
 <p>The implementation takes as input the topology config, <code>TopologyContext</code>, and <code>SpoutOutputCollector</code>. The implementation returns an <code>ISpout</code> object. Here, the <code>nextTuple</code> function emits a random sentence from <code>sentences</code>. </p>
 
 <p>This spout isn&#39;t reliable, so the <code>ack</code> and <code>fail</code> methods will never be called. A reliable spout will add a message id when emitting tuples, and then <code>ack</code> or <code>fail</code> will be called when the tuple is completed or failed respectively. See <a href="Guaranteeing-message-processing.html">Guaranteeing message processing</a> for more info on how reliability works within Storm.</p>
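The bookkeeping a reliable spout performs can be sketched as follows (an illustrative Python sketch of the ack/fail protocol described above, not Storm's implementation; all names are hypothetical):

```python
import itertools

# Illustrative sketch of reliable-spout bookkeeping: remember each
# emitted tuple by message id so it can be forgotten on ack or
# re-queued for replay on fail.
class ReliableSpoutState:
    def __init__(self):
        self._ids = itertools.count()
        self.pending = {}   # message id -> tuple awaiting ack/fail
        self.queue = []     # tuples waiting to be (re)emitted

    def emit(self, tup):
        msg_id = next(self._ids)
        self.pending[msg_id] = tup
        return msg_id       # the id later passed back to ack/fail

    def ack(self, msg_id):
        self.pending.pop(msg_id, None)   # fully processed, forget it

    def fail(self, msg_id):
        tup = self.pending.pop(msg_id, None)
        if tup is not None:
            self.queue.append(tup)       # replay on a later nextTuple
```

An unreliable spout like the one above simply skips the message id, so `ack` and `fail` have nothing to track.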
@@ -295,25 +295,25 @@
 <p>There is also an <code>emit-direct-spout!</code> function that emits a tuple to a direct stream; it takes the id of the task to send the tuple to as an additional second argument.</p>
 
 <p>Spouts can be parameterized just like bolts, in which case the symbol is bound to a function returning <code>IRichSpout</code> instead of the <code>IRichSpout</code> itself. You can also declare an unprepared spout which only defines the <code>nextTuple</code> method. Here is an example of an unprepared spout that emits random sentences parameterized at runtime:</p>
-<div class="highlight"><pre><code class="language-clojure" data-lang="clojure"><span class="p">(</span><span class="nf">defspout</span> <span class="nv">sentence-spout-parameterized</span> <span class="p">[</span><span class="s">&quot;word&quot;</span><span class="p">]</span> <span class="p">{</span><span class="ss">:params</span> <span class="p">[</span><span class="nv">sentences</span><span class="p">]</span> <span class="ss">:prepare</span> <span class="nv">false</span><span class="p">}</span>
-  <span class="p">[</span><span class="nv">collector</span><span class="p">]</span>
-  <span class="p">(</span><span class="nf">Thread/sleep</span> <span class="mi">500</span><span class="p">)</span>
-  <span class="p">(</span><span class="nf">emit-spout!</span> <span class="nv">collector</span> <span class="p">[(</span><span class="nf">rand-nth</span> <span class="nv">sentences</span><span class="p">)]))</span>
-</code></pre></div>
+<div class="highlight"><pre><code class="language-clojure" data-lang="clojure"><span class="p">(</span><span class="nf">defspout</span><span class="w"> </span><span class="n">sentence-spout-parameterized</span><span class="w"> </span><span class="p">[</span><span class="s">"word"</span><span class="p">]</span><span class="w"> </span><span class="p">{</span><span class="no">:params</span><span class="w"> </span><span class="p">[</span><span class="n">sentences</span><span class="p">]</span><span class="w"> </span><span class="no">:prepare</span><span class="w"> </span><span class="n">false</span><span class="p">}</span><span class="w">
+  </span><span class="p">[</span><span class="n">collector</span><span class="p">]</span><span class="w">
+  </span><span class="p">(</span><span class="nf">Thread/sleep</span><span class="w"> </span><span class="mi">500</span><span class="p">)</span><span class="w">
+  </span><span class="p">(</span><span class="nf">emit-spout!</span><span class="w"> </span><span class="n">collector</span><span class="w"> </span><span class="p">[(</span><span class="nf">rand-nth</span><span class="w"> </span><span class="n">sentences</span><span class="p">)]))</span><span class="w">
+</span></code></pre></div>
 <p>The following example illustrates how to use this spout in a <code>spout-spec</code>:</p>
-<div class="highlight"><pre><code class="language-clojure" data-lang="clojure"><span class="p">(</span><span class="nf">spout-spec</span> <span class="p">(</span><span class="nf">sentence-spout-parameterized</span>
-                   <span class="p">[</span><span class="s">&quot;the cat jumped over the door&quot;</span>
-                    <span class="s">&quot;greetings from a faraway land&quot;</span><span class="p">])</span>
-            <span class="ss">:p</span> <span class="mi">2</span><span class="p">)</span>
-</code></pre></div>
+<div class="highlight"><pre><code class="language-clojure" data-lang="clojure"><span class="p">(</span><span class="nf">spout-spec</span><span class="w"> </span><span class="p">(</span><span class="nf">sentence-spout-parameterized</span><span class="w">
+                   </span><span class="p">[</span><span class="s">"the cat jumped over the door"</span><span class="w">
+                    </span><span class="s">"greetings from a faraway land"</span><span class="p">])</span><span class="w">
+            </span><span class="no">:p</span><span class="w"> </span><span class="mi">2</span><span class="p">)</span><span class="w">
+</span></code></pre></div>
 <h3 id="running-topologies-in-local-mode-or-on-a-cluster">Running topologies in local mode or on a cluster</h3>
 
 <p>That&#39;s all there is to the Clojure DSL. To submit topologies in local or remote mode, just use the <code>StormSubmitter</code> or <code>LocalCluster</code> classes as you would from Java.</p>
 
 <p>To create topology configs, it&#39;s easiest to use the <code>backtype.storm.config</code> namespace which defines constants for all of the possible configs. The constants are the same as the static constants in the <code>Config</code> class, except with dashes instead of underscores. For example, here&#39;s a topology config that sets the number of workers to 15 and configures the topology in debug mode:</p>
-<div class="highlight"><pre><code class="language-clojure" data-lang="clojure"><span class="p">{</span><span class="nv">TOPOLOGY-DEBUG</span> <span class="nv">true</span>
- <span class="nv">TOPOLOGY-WORKERS</span> <span class="mi">15</span><span class="p">}</span>
-</code></pre></div>
+<div class="highlight"><pre><code class="language-clojure" data-lang="clojure"><span class="p">{</span><span class="n">TOPOLOGY-DEBUG</span><span class="w"> </span><span class="n">true</span><span class="w">
+ </span><span class="n">TOPOLOGY-WORKERS</span><span class="w"> </span><span class="mi">15</span><span class="p">}</span><span class="w">
+</span></code></pre></div>
 <h3 id="testing-topologies">Testing topologies</h3>
 
 <p><a href="http://www.pixelmachine.org/2011/12/17/Testing-Storm-Topologies.html">This blog post</a> and its <a href="http://www.pixelmachine.org/2011/12/21/Testing-Storm-Topologies-Part-2.html">follow-up</a> give a good overview of Storm&#39;s powerful built-in facilities for testing topologies in Clojure.</p>
diff --git a/_site/documentation/Command-line-client.html b/_site/documentation/Command-line-client.html
index c117678..e973ca6 100644
--- a/_site/documentation/Command-line-client.html
+++ b/_site/documentation/Command-line-client.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
diff --git a/_site/documentation/Common-patterns.html b/_site/documentation/Common-patterns.html
index 298c360..3a4a7ff 100644
--- a/_site/documentation/Common-patterns.html
+++ b/_site/documentation/Common-patterns.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -107,10 +107,10 @@
 <p>A streaming join combines two or more data streams together based on some common field. Whereas a normal database join has finite input and clear semantics for a join, a streaming join has infinite input and unclear semantics for what a join should be.</p>
 
 <p>The join type you need will vary per application. Some applications join all tuples for two streams over a finite window of time, whereas other applications expect exactly one tuple for each side of the join for each join field. Other applications may do the join completely differently. The common pattern among all these join types is partitioning multiple input streams in the same way. This is easily accomplished in Storm by using a fields grouping on the same fields for many input streams to the joiner bolt. For example:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">&quot;join&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">MyJoiner</span><span class="o">(),</span> <span class="n">parallelism</span><span class="o">)</span>
-  <span class="o">.</span><span class="na">fieldsGrouping</span><span class="o">(</span><span class="s">&quot;1&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;joinfield1&quot;</span><span class="o">,</span> <span class="s">&quot;joinfield2&quot;</span><span class="o">))</span>
-  <span class="o">.</span><span class="na">fieldsGrouping</span><span class="o">(</span><span class="s">&quot;2&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;joinfield1&quot;</span><span class="o">,</span> <span class="s">&quot;joinfield2&quot;</span><span class="o">))</span>
-  <span class="o">.</span><span class="na">fieldsGrouping</span><span class="o">(</span><span class="s">&quot;3&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;joinfield1&quot;</span><span class="o">,</span> <span class="s">&quot;joinfield2&quot;</span><span class="o">));</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">"join"</span><span class="o">,</span> <span class="k">new</span> <span class="n">MyJoiner</span><span class="o">(),</span> <span class="n">parallelism</span><span class="o">)</span>
+  <span class="o">.</span><span class="na">fieldsGrouping</span><span class="o">(</span><span class="s">"1"</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"joinfield1"</span><span class="o">,</span> <span class="s">"joinfield2"</span><span class="o">))</span>
+  <span class="o">.</span><span class="na">fieldsGrouping</span><span class="o">(</span><span class="s">"2"</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"joinfield1"</span><span class="o">,</span> <span class="s">"joinfield2"</span><span class="o">))</span>
+  <span class="o">.</span><span class="na">fieldsGrouping</span><span class="o">(</span><span class="s">"3"</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"joinfield1"</span><span class="o">,</span> <span class="s">"joinfield2"</span><span class="o">));</span>
 </code></pre></div>
 <p>The different streams don&#39;t have to have the same field names, of course.</p>
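One of the join types described above — exactly one tuple expected from each side per join key — can be sketched as follows (an illustrative Python sketch, not Storm code; names are hypothetical). It works only because the fields grouping sends all tuples with the same key to the same joiner task, so the per-key buffer can live in task-local memory:

```python
from collections import defaultdict

# Sketch of a two-stream, one-tuple-per-side join: buffer each side's
# tuple by join key, and emit the joined result once both sides arrive.
class TwoStreamJoiner:
    def __init__(self):
        self.buffered = defaultdict(dict)   # key -> {stream_id: tuple}

    def execute(self, stream_id, key, tup):
        side = self.buffered[key]
        side[stream_id] = tup
        if len(side) == 2:                  # both streams arrived: join
            joined = (key, side["1"], side["2"])
            del self.buffered[key]          # free the buffer for this key
            return joined                   # would be emitted downstream
        return None                         # still waiting on one side
```

A windowed join would instead expire buffered tuples after a time bound, but the partitioning requirement is the same.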
 
@@ -126,13 +126,13 @@
 
 <p>Many bolts follow a similar pattern of reading an input tuple, emitting zero or more tuples based on that input tuple, and then acking that input tuple immediately at the end of the execute method. Bolts that match this pattern are things like functions and filters. This is such a common pattern that Storm exposes an interface called <a href="/javadoc/apidocs/backtype/storm/topology/IBasicBolt.html">IBasicBolt</a> that automates this pattern for you. See <a href="Guaranteeing-message-processing.html">Guaranteeing message processing</a> for more information.</p>
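The pattern that <code>IBasicBolt</code> automates can be sketched as follows (an illustrative Python sketch, not Storm's implementation; `process`, `ack`, and `fail` are hypothetical callbacks):

```python
# Sketch of the execute-then-ack pattern IBasicBolt automates: run the
# bolt's logic on the input tuple, then ack it automatically, failing
# it instead if the logic raises.
def basic_execute(process, tup, ack, fail):
    try:
        process(tup)   # may emit zero or more tuples via its collector
    except Exception:
        fail(tup)      # report failure so the tuple can be replayed
        raise
    else:
        ack(tup)       # auto-ack on success
```

This is why functions and filters need no explicit acking when written against <code>IBasicBolt</code>: the framework wraps every <code>execute</code> call in logic of this shape.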
 
-<h3 id="in-memory-caching-+-fields-grouping-combo">In-memory caching + fields grouping combo</h3>
+<h3 id="in-memory-caching-fields-grouping-combo">In-memory caching + fields grouping combo</h3>
 
 <p>It&#39;s common to keep caches in-memory in Storm bolts. Caching becomes particularly powerful when you combine it with a fields grouping. For example, suppose you have a bolt that expands short URLs (like bit.ly, t.co, etc.) into long URLs. You can increase performance by keeping an LRU cache of short URL to long URL expansions to avoid doing the same HTTP requests over and over. Suppose component &quot;urls&quot; emits short URLs, and component &quot;expand&quot; expands short URLs into long URLs and keeps a cache internally. Consider the difference between the following two snippets of code:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">&quot;expand&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">ExpandUrl</span><span class="o">(),</span> <span class="n">parallelism</span><span class="o">)</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">"expand"</span><span class="o">,</span> <span class="k">new</span> <span class="n">ExpandUrl</span><span class="o">(),</span> <span class="n">parallelism</span><span class="o">)</span>
   <span class="o">.</span><span class="na">shuffleGrouping</span><span class="o">(</span><span class="mi">1</span><span class="o">);</span>
-</code></pre></div><div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">&quot;expand&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">ExpandUrl</span><span class="o">(),</span> <span class="n">parallelism</span><span class="o">)</span>
-  <span class="o">.</span><span class="na">fieldsGrouping</span><span class="o">(</span><span class="s">&quot;urls&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;url&quot;</span><span class="o">));</span>
+</code></pre></div><div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">"expand"</span><span class="o">,</span> <span class="k">new</span> <span class="n">ExpandUrl</span><span class="o">(),</span> <span class="n">parallelism</span><span class="o">)</span>
+  <span class="o">.</span><span class="na">fieldsGrouping</span><span class="o">(</span><span class="s">"urls"</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"url"</span><span class="o">));</span>
 </code></pre></div>
 <p>The second approach will have vastly more effective caches, since the same URL will always go to the same task. This avoids having duplication across any of the caches in the tasks and makes it much more likely that a short URL will hit the cache.</p>
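The effect can be seen in a toy simulation (illustrative Python, not Storm code; `count_misses` and the routing lambdas are hypothetical): hash-based routing sends every occurrence of a URL to the same task's cache, while shuffle-style routing scatters the same URL across several caches.

```python
# Count simulated cache misses for a stream of short URLs under a given
# task-routing function. Each task keeps its own cache, as a Storm task
# would; a miss models doing the HTTP expansion.
def count_misses(urls, n_tasks, route):
    caches = [set() for _ in range(n_tasks)]   # one cache per task
    misses = 0
    for i, url in enumerate(urls):
        task = route(i, url, n_tasks)          # which task gets this tuple
        if url not in caches[task]:
            misses += 1                        # simulated HTTP request
            caches[task].add(url)
    return misses

# Fields grouping ~ hash of the url; shuffle ~ round-robin by position.
fields_route = lambda i, url, n: hash(url) % n
round_robin_route = lambda i, url, n: i % n
```

With repeated URLs, the hash route misses once per distinct URL, while round-robin can miss once per (URL, task) pair.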
 
@@ -141,20 +141,20 @@
 <p>A common continuous computation done on Storm is a &quot;streaming top N&quot; of some sort. Suppose you have a bolt that emits tuples of the form [&quot;value&quot;, &quot;count&quot;] and you want a bolt that emits the top N tuples based on count. The simplest way to do this is to have a bolt that does a global grouping on the stream and maintains a list in memory of the top N items.</p>
 
 <p>This approach obviously doesn&#39;t scale to large streams since the entire stream has to go through one task. A better way to do the computation is to do many top N&#39;s in parallel across partitions of the stream, and then merge those top N&#39;s together to get the global top N. The pattern looks like this:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">&quot;rank&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">RankObjects</span><span class="o">(),</span> <span class="n">parallelism</span><span class="o">)</span>
-  <span class="o">.</span><span class="na">fieldsGrouping</span><span class="o">(</span><span class="s">&quot;objects&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;value&quot;</span><span class="o">));</span>
-<span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">&quot;merge&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">MergeObjects</span><span class="o">())</span>
-  <span class="o">.</span><span class="na">globalGrouping</span><span class="o">(</span><span class="s">&quot;rank&quot;</span><span class="o">);</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">"rank"</span><span class="o">,</span> <span class="k">new</span> <span class="n">RankObjects</span><span class="o">(),</span> <span class="n">parallelism</span><span class="o">)</span>
+  <span class="o">.</span><span class="na">fieldsGrouping</span><span class="o">(</span><span class="s">"objects"</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"value"</span><span class="o">));</span>
+<span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">"merge"</span><span class="o">,</span> <span class="k">new</span> <span class="n">MergeObjects</span><span class="o">())</span>
+  <span class="o">.</span><span class="na">globalGrouping</span><span class="o">(</span><span class="s">"rank"</span><span class="o">);</span>
 </code></pre></div>
 <p>This pattern works because of the fields grouping done by the first bolt which gives the partitioning you need for this to be semantically correct. You can see an example of this pattern in storm-starter <a href="https://github.com/apache/storm/blob/master/examples/storm-starter/src/jvm/storm/starter/RollingTopWords.java">here</a>.</p>
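The rank/merge computation itself can be sketched as follows (an illustrative Python sketch of the pattern, not the storm-starter code; function names are hypothetical). It is correct because the fields grouping guarantees all counts for a given value land in the same partition:

```python
import heapq

# Each "rank" task keeps a top-N over its partition of the stream.
def partial_top_n(counts, n):
    # counts: iterable of (value, count) pairs for one partition
    return heapq.nlargest(n, counts, key=lambda vc: vc[1])

# A single "merge" task combines the partial top-Ns into the global top-N.
def merge_top_n(partials, n):
    merged = [vc for partial in partials for vc in partial]
    return heapq.nlargest(n, merged, key=lambda vc: vc[1])
```

The merge step sees only N items per upstream task rather than the whole stream, which is what makes the pattern scale.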
 
 <p>If, however, you have a known skew in the data being processed, it can be advantageous to use partialKeyGrouping instead of fieldsGrouping. This will distribute the load for each key between two downstream bolts instead of a single one.</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">&quot;count&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">CountObjects</span><span class="o">(),</span> <span class="n">parallelism</span><span class="o">)</span>
-  <span class="o">.</span><span class="na">partialKeyGrouping</span><span class="o">(</span><span class="s">&quot;objects&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;value&quot;</span><span class="o">));</span>
-<span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">&quot;rank&quot;</span> <span class="k">new</span> <span class="nf">AggregateCountsAndRank</span><span class="o">(),</span> <span class="n">parallelism</span><span class="o">)</span>
-  <span class="o">.</span><span class="na">fieldsGrouping</span><span class="o">(</span><span class="s">&quot;count&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;key&quot;</span><span class="o">))</span>
-<span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">&quot;merge&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">MergeRanksObjects</span><span class="o">())</span>
-  <span class="o">.</span><span class="na">globalGrouping</span><span class="o">(</span><span class="s">&quot;rank&quot;</span><span class="o">);</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">"count"</span><span class="o">,</span> <span class="k">new</span> <span class="n">CountObjects</span><span class="o">(),</span> <span class="n">parallelism</span><span class="o">)</span>
+  <span class="o">.</span><span class="na">partialKeyGrouping</span><span class="o">(</span><span class="s">"objects"</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"value"</span><span class="o">));</span>
+<span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">"rank"</span><span class="o">,</span> <span class="k">new</span> <span class="n">AggregateCountsAndRank</span><span class="o">(),</span> <span class="n">parallelism</span><span class="o">)</span>
+  <span class="o">.</span><span class="na">fieldsGrouping</span><span class="o">(</span><span class="s">"count"</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"key"</span><span class="o">));</span>
+<span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">"merge"</span><span class="o">,</span> <span class="k">new</span> <span class="n">MergeRanksObjects</span><span class="o">())</span>
+  <span class="o">.</span><span class="na">globalGrouping</span><span class="o">(</span><span class="s">"rank"</span><span class="o">);</span>
 </code></pre></div>
 <p>The topology needs an extra layer of processing to aggregate the partial counts from the upstream bolts, but because that layer only processes aggregated values, it is not subject to the load caused by the skewed data. You can see an example of this pattern in storm-starter <a href="https://github.com/apache/storm/blob/master/examples/storm-starter/src/jvm/storm/starter/SkewedRollingTopWords.java">here</a>.</p>
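The idea behind partialKeyGrouping can be sketched as follows (an illustrative Python sketch of the two-choices technique, not Storm's implementation; the class and its fields are hypothetical):

```python
# Sketch of partial key grouping: each key hashes to two candidate
# downstream tasks, and each tuple goes to whichever candidate has
# received less load so far, splitting a hot key across two bolts.
class PartialKeyGrouper:
    def __init__(self, n_tasks):
        self.n = n_tasks
        self.load = [0] * n_tasks   # tuples sent to each downstream task

    def choose(self, key):
        a = hash((key, 0)) % self.n   # first candidate task
        b = hash((key, 1)) % self.n   # second candidate task
        task = a if self.load[a] <= self.load[b] else b
        self.load[task] += 1
        return task
```

For a single hot key this halves the per-task load, at the cost of the downstream aggregation layer described above.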
 
diff --git a/_site/documentation/Concepts.html b/_site/documentation/Concepts.html
index 54e3c3f..53b1e28 100644
--- a/_site/documentation/Concepts.html
+++ b/_site/documentation/Concepts.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
diff --git a/_site/documentation/Configuration.html b/_site/documentation/Configuration.html
index 23f32db..74f0a2c 100644
--- a/_site/documentation/Configuration.html
+++ b/_site/documentation/Configuration.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
diff --git a/_site/documentation/Creating-a-new-Storm-project.html b/_site/documentation/Creating-a-new-Storm-project.html
index e837785..7007612 100644
--- a/_site/documentation/Creating-a-new-Storm-project.html
+++ b/_site/documentation/Creating-a-new-Storm-project.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -105,7 +105,7 @@
 
 <p>To set up the classpath in Eclipse, create a new Java project, include <code>src/jvm/</code> as a source path, and make sure all the jars in <code>lib/</code> and <code>lib/dev/</code> are in the <code>Referenced Libraries</code> section of the project.</p>
 
-<h3 id="if-using-multilang,-add-multilang-dir-to-classpath">If using multilang, add multilang dir to classpath</h3>
+<h3 id="if-using-multilang-add-multilang-dir-to-classpath">If using multilang, add multilang dir to classpath</h3>
 
 <p>If you implement spouts or bolts in languages other than Java, then those implementations should be under the <code>multilang/resources/</code> directory of the project. For Storm to find these files in local mode, the <code>resources/</code> dir needs to be on the classpath. You can do this in Eclipse by adding <code>multilang/</code> as a source folder. You may also need to add <code>multilang/resources</code> as a source directory.</p>
 
diff --git a/_site/documentation/DSLs-and-multilang-adapters.html b/_site/documentation/DSLs-and-multilang-adapters.html
index f748a70..7359871 100644
--- a/_site/documentation/DSLs-and-multilang-adapters.html
+++ b/_site/documentation/DSLs-and-multilang-adapters.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
diff --git a/_site/documentation/Defining-a-non-jvm-language-dsl-for-storm.html b/_site/documentation/Defining-a-non-jvm-language-dsl-for-storm.html
index bcb21d1..1621534 100644
--- a/_site/documentation/Defining-a-non-jvm-language-dsl-for-storm.html
+++ b/_site/documentation/Defining-a-non-jvm-language-dsl-for-storm.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -93,7 +93,7 @@
 <p>The right place to start to learn how to make a non-JVM DSL for Storm is <a href="https://github.com/apache/storm/blob/master/storm-core/src/storm.thrift">storm-core/src/storm.thrift</a>. Since Storm topologies are just Thrift structures, and Nimbus is a Thrift daemon, you can create and submit topologies in any language.</p>
 
 <p>When you create the Thrift structs for spouts and bolts, the code for the spout or bolt is specified in the ComponentObject struct:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">union ComponentObject {
+<div class="highlight"><pre><code class="language-" data-lang="">union ComponentObject {
   1: binary serialized_java;
   2: ShellComponent shell;
   3: JavaObject java_object;
@@ -102,13 +102,13 @@
 <p>For a Python DSL, you would want to make use of &quot;2&quot; and &quot;3&quot;. ShellComponent lets you specify a script to run that component (e.g., your python code). And JavaObject lets you specify native java spouts and bolts for the component (and Storm will use reflection to create that spout or bolt).</p>
 
 <p>There&#39;s a &quot;storm shell&quot; command that will help with submitting a topology. Its usage is like this:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">storm shell resources/ python topology.py arg1 arg2
+<div class="highlight"><pre><code class="language-" data-lang="">storm shell resources/ python topology.py arg1 arg2
 </code></pre></div>
 <p>storm shell will then package resources/ into a jar, upload the jar to Nimbus, and call your topology.py script like this:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">python topology.py arg1 arg2 {nimbus-host} {nimbus-port} {uploaded-jar-location}
+<div class="highlight"><pre><code class="language-" data-lang="">python topology.py arg1 arg2 {nimbus-host} {nimbus-port} {uploaded-jar-location}
 </code></pre></div>
 <p>Then you can connect to Nimbus using the Thrift API and submit the topology, passing {uploaded-jar-location} into the submitTopology method. For reference, here&#39;s the submitTopology definition:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kt">void</span> <span class="nf">submitTopology</span><span class="o">(</span><span class="mi">1</span><span class="o">:</span> <span class="n">string</span> <span class="n">name</span><span class="o">,</span> <span class="mi">2</span><span class="o">:</span> <span class="n">string</span> <span class="n">uploadedJarLocation</span><span class="o">,</span> <span class="mi">3</span><span class="o">:</span> <span class="n">string</span> <span class="n">jsonConf</span><span class="o">,</span> <span class="mi">4</span><span class="o">:</span> <span class="n">StormTopology</span> <span class="n">topology</span><span class="o">)</span> <span class="kd">throws</span> <span class="o">(</span><span class="mi">1</span><span class="o">:</span> <span class="n">AlreadyAliveException</span> <span class="n">e</span><span class="o">,</span> <span class="mi">2</span><span class="o">:</span> <span class="n">InvalidTopologyException</span> <span class="n">ite</span><span class="o">);</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kt">void</span> <span class="nf">submitTopology</span><span class="p">(</span><span class="mi">1</span><span class="o">:</span> <span class="n">string</span> <span class="n">name</span><span class="o">,</span> <span class="mi">2</span><span class="o">:</span> <span class="n">string</span> <span class="n">uploadedJarLocation</span><span class="o">,</span> <span class="mi">3</span><span class="o">:</span> <span class="n">string</span> <span class="n">jsonConf</span><span class="o">,</span> <span class="mi">4</span><span class="o">:</span> <span class="n">StormTopology</span> <span class="n">topology</span><span class="o">)</span> <span class="kd">throws</span> <span class="o">(</span><span class="mi">1</span><span class="o">:</span> <span class="n">AlreadyAliveException</span> <span class="n">e</span><span class="o">,</span> <span class="mi">2</span><span class="o">:</span> <span class="n">InvalidTopologyException</span> <span class="n">ite</span><span class="o">);</span>
 </code></pre></div>
 <p>Finally, one of the key things to do in a non-JVM DSL is make it easy to define the entire topology in one file (the bolts, spouts, and the definition of the topology).</p>
 
diff --git a/_site/documentation/Distributed-RPC.html b/_site/documentation/Distributed-RPC.html
index f9ea837..b2bbdd2 100644
--- a/_site/documentation/Distributed-RPC.html
+++ b/_site/documentation/Distributed-RPC.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -97,8 +97,8 @@
 <h3 id="high-level-overview">High level overview</h3>
 
 <p>Distributed RPC is coordinated by a &quot;DRPC server&quot; (Storm comes packaged with an implementation of this). The DRPC server coordinates receiving an RPC request, sending the request to the Storm topology, receiving the results from the Storm topology, and sending the results back to the waiting client. From a client&#39;s perspective, a distributed RPC call looks just like a regular RPC call. For example, here&#39;s how a client would compute the results for the &quot;reach&quot; function with the argument &quot;<a href="http://twitter.com">http://twitter.com</a>&quot;:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">DRPCClient</span> <span class="n">client</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">DRPCClient</span><span class="o">(</span><span class="s">&quot;drpc-host&quot;</span><span class="o">,</span> <span class="mi">3772</span><span class="o">);</span>
-<span class="n">String</span> <span class="n">result</span> <span class="o">=</span> <span class="n">client</span><span class="o">.</span><span class="na">execute</span><span class="o">(</span><span class="s">&quot;reach&quot;</span><span class="o">,</span> <span class="s">&quot;http://twitter.com&quot;</span><span class="o">);</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">DRPCClient</span> <span class="n">client</span> <span class="o">=</span> <span class="k">new</span> <span class="n">DRPCClient</span><span class="o">(</span><span class="s">"drpc-host"</span><span class="o">,</span> <span class="mi">3772</span><span class="o">);</span>
+<span class="n">String</span> <span class="n">result</span> <span class="o">=</span> <span class="n">client</span><span class="o">.</span><span class="na">execute</span><span class="o">(</span><span class="s">"reach"</span><span class="o">,</span> <span class="s">"http://twitter.com"</span><span class="o">);</span>
 </code></pre></div>
 <p>The distributed RPC workflow looks like this:</p>
 
@@ -118,19 +118,19 @@
 
 <p>Let&#39;s look at a simple example. Here&#39;s the implementation of a DRPC topology that returns its input argument with a &quot;!&quot; appended:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">static</span> <span class="kd">class</span> <span class="nc">ExclaimBolt</span> <span class="kd">extends</span> <span class="n">BaseBasicBolt</span> <span class="o">{</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">execute</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">tuple</span><span class="o">,</span> <span class="n">BasicOutputCollector</span> <span class="n">collector</span><span class="o">)</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">execute</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">tuple</span><span class="o">,</span> <span class="n">BasicOutputCollector</span> <span class="n">collector</span><span class="o">)</span> <span class="o">{</span>
         <span class="n">String</span> <span class="n">input</span> <span class="o">=</span> <span class="n">tuple</span><span class="o">.</span><span class="na">getString</span><span class="o">(</span><span class="mi">1</span><span class="o">);</span>
-        <span class="n">collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="k">new</span> <span class="nf">Values</span><span class="o">(</span><span class="n">tuple</span><span class="o">.</span><span class="na">getValue</span><span class="o">(</span><span class="mi">0</span><span class="o">),</span> <span class="n">input</span> <span class="o">+</span> <span class="s">&quot;!&quot;</span><span class="o">));</span>
+        <span class="n">collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="k">new</span> <span class="n">Values</span><span class="o">(</span><span class="n">tuple</span><span class="o">.</span><span class="na">getValue</span><span class="o">(</span><span class="mi">0</span><span class="o">),</span> <span class="n">input</span> <span class="o">+</span> <span class="s">"!"</span><span class="o">));</span>
     <span class="o">}</span>
 
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">declareOutputFields</span><span class="o">(</span><span class="n">OutputFieldsDeclarer</span> <span class="n">declarer</span><span class="o">)</span> <span class="o">{</span>
-        <span class="n">declarer</span><span class="o">.</span><span class="na">declare</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;id&quot;</span><span class="o">,</span> <span class="s">&quot;result&quot;</span><span class="o">));</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">declareOutputFields</span><span class="o">(</span><span class="n">OutputFieldsDeclarer</span> <span class="n">declarer</span><span class="o">)</span> <span class="o">{</span>
+        <span class="n">declarer</span><span class="o">.</span><span class="na">declare</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"id"</span><span class="o">,</span> <span class="s">"result"</span><span class="o">));</span>
     <span class="o">}</span>
 <span class="o">}</span>
 
-<span class="kd">public</span> <span class="kd">static</span> <span class="kt">void</span> <span class="nf">main</span><span class="o">(</span><span class="n">String</span><span class="o">[]</span> <span class="n">args</span><span class="o">)</span> <span class="kd">throws</span> <span class="n">Exception</span> <span class="o">{</span>
-    <span class="n">LinearDRPCTopologyBuilder</span> <span class="n">builder</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">LinearDRPCTopologyBuilder</span><span class="o">(</span><span class="s">&quot;exclamation&quot;</span><span class="o">);</span>
-    <span class="n">builder</span><span class="o">.</span><span class="na">addBolt</span><span class="o">(</span><span class="k">new</span> <span class="nf">ExclaimBolt</span><span class="o">(),</span> <span class="mi">3</span><span class="o">);</span>
+<span class="kd">public</span> <span class="kd">static</span> <span class="kt">void</span> <span class="nf">main</span><span class="p">(</span><span class="n">String</span><span class="o">[]</span> <span class="n">args</span><span class="o">)</span> <span class="kd">throws</span> <span class="n">Exception</span> <span class="o">{</span>
+    <span class="n">LinearDRPCTopologyBuilder</span> <span class="n">builder</span> <span class="o">=</span> <span class="k">new</span> <span class="n">LinearDRPCTopologyBuilder</span><span class="o">(</span><span class="s">"exclamation"</span><span class="o">);</span>
+    <span class="n">builder</span><span class="o">.</span><span class="na">addBolt</span><span class="o">(</span><span class="k">new</span> <span class="n">ExclaimBolt</span><span class="o">(),</span> <span class="mi">3</span><span class="o">);</span>
     <span class="c1">// ...</span>
 <span class="o">}</span>
 </code></pre></div>
@@ -141,12 +141,12 @@
 <h3 id="local-mode-drpc">Local mode DRPC</h3>
 
 <p>DRPC can be run in local mode. Here&#39;s how to run the above example in local mode:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">LocalDRPC</span> <span class="n">drpc</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">LocalDRPC</span><span class="o">();</span>
-<span class="n">LocalCluster</span> <span class="n">cluster</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">LocalCluster</span><span class="o">();</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">LocalDRPC</span> <span class="n">drpc</span> <span class="o">=</span> <span class="k">new</span> <span class="n">LocalDRPC</span><span class="o">();</span>
+<span class="n">LocalCluster</span> <span class="n">cluster</span> <span class="o">=</span> <span class="k">new</span> <span class="n">LocalCluster</span><span class="o">();</span>
 
-<span class="n">cluster</span><span class="o">.</span><span class="na">submitTopology</span><span class="o">(</span><span class="s">&quot;drpc-demo&quot;</span><span class="o">,</span> <span class="n">conf</span><span class="o">,</span> <span class="n">builder</span><span class="o">.</span><span class="na">createLocalTopology</span><span class="o">(</span><span class="n">drpc</span><span class="o">));</span>
+<span class="n">cluster</span><span class="o">.</span><span class="na">submitTopology</span><span class="o">(</span><span class="s">"drpc-demo"</span><span class="o">,</span> <span class="n">conf</span><span class="o">,</span> <span class="n">builder</span><span class="o">.</span><span class="na">createLocalTopology</span><span class="o">(</span><span class="n">drpc</span><span class="o">));</span>
 
-<span class="n">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="s">&quot;Results for &#39;hello&#39;:&quot;</span> <span class="o">+</span> <span class="n">drpc</span><span class="o">.</span><span class="na">execute</span><span class="o">(</span><span class="s">&quot;exclamation&quot;</span><span class="o">,</span> <span class="s">&quot;hello&quot;</span><span class="o">));</span>
+<span class="n">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="s">"Results for 'hello':"</span> <span class="o">+</span> <span class="n">drpc</span><span class="o">.</span><span class="na">execute</span><span class="o">(</span><span class="s">"exclamation"</span><span class="o">,</span> <span class="s">"hello"</span><span class="o">));</span>
 
 <span class="n">cluster</span><span class="o">.</span><span class="na">shutdown</span><span class="o">();</span>
 <span class="n">drpc</span><span class="o">.</span><span class="na">shutdown</span><span class="o">();</span>
@@ -166,15 +166,15 @@
 </ol>
 
 <p>Launching a DRPC server can be done with the <code>storm</code> script and is just like launching Nimbus or the UI:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">bin/storm drpc
+<div class="highlight"><pre><code class="language-" data-lang="">bin/storm drpc
 </code></pre></div>
 <p>Next, you need to configure your Storm cluster to know the locations of the DRPC server(s). This is how <code>DRPCSpout</code> knows from where to read function invocations. This can be done through the <code>storm.yaml</code> file or the topology configurations. Configuring this through the <code>storm.yaml</code> looks something like this:</p>
-<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="l-Scalar-Plain">drpc.servers</span><span class="p-Indicator">:</span>
-  <span class="p-Indicator">-</span> <span class="s">&quot;drpc1.foo.com&quot;</span>
-  <span class="p-Indicator">-</span> <span class="s">&quot;drpc2.foo.com&quot;</span>
+<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="s">drpc.servers</span><span class="pi">:</span>
+  <span class="pi">-</span> <span class="s2">"</span><span class="s">drpc1.foo.com"</span>
+  <span class="pi">-</span> <span class="s2">"</span><span class="s">drpc2.foo.com"</span>
 </code></pre></div>
 <p>Finally, you launch DRPC topologies using <code>StormSubmitter</code> just like you launch any other topology. To run the above example in remote mode, you do something like this:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">StormSubmitter</span><span class="o">.</span><span class="na">submitTopology</span><span class="o">(</span><span class="s">&quot;exclamation-drpc&quot;</span><span class="o">,</span> <span class="n">conf</span><span class="o">,</span> <span class="n">builder</span><span class="o">.</span><span class="na">createRemoteTopology</span><span class="o">());</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">StormSubmitter</span><span class="o">.</span><span class="na">submitTopology</span><span class="o">(</span><span class="s">"exclamation-drpc"</span><span class="o">,</span> <span class="n">conf</span><span class="o">,</span> <span class="n">builder</span><span class="o">.</span><span class="na">createRemoteTopology</span><span class="o">());</span>
 </code></pre></div>
 <p><code>createRemoteTopology</code> is used to create topologies suitable for Storm clusters.</p>
 
@@ -194,14 +194,14 @@
 <p>A single reach computation can involve thousands of database calls and tens of millions of follower records during the computation. It&#39;s a really, really intense computation. As you&#39;re about to see, implementing this function on top of Storm is dead simple. On a single machine, reach can take minutes to compute; on a Storm cluster, you can compute reach for even the hardest URLs in a couple seconds.</p>
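Before looking at the topology, it can help to see the whole computation in miniature: reach is the number of distinct followers across everyone who tweeted the URL. A hypothetical single-process sketch (the class name and the hard-coded maps are invented stand-ins for the real database calls):

```java
import java.util.*;

// Single-machine sketch of the reach computation that the DRPC topology
// parallelizes: gather tweeters, gather their followers, de-duplicate, count.
public class ReachSketch {
    // Stand-ins for the tweeter and follower databases.
    static final Map<String, List<String>> TWEETERS = Map.of(
        "http://example.com", List.of("alice", "bob"));
    static final Map<String, List<String>> FOLLOWERS = Map.of(
        "alice", List.of("carol", "dave"),
        "bob", List.of("dave", "erin"));

    static int reach(String url) {
        Set<String> distinct = new HashSet<>();
        // Step 1: everyone who tweeted the URL; step 2: each tweeter's
        // followers; steps 3-4: de-duplicate and count.
        for (String tweeter : TWEETERS.getOrDefault(url, List.of()))
            distinct.addAll(FOLLOWERS.getOrDefault(tweeter, List.of()));
        return distinct.size();
    }

    public static void main(String[] args) {
        // "dave" follows both tweeters but is counted once.
        System.out.println(reach("http://example.com"));
    }
}
```

The topology below performs exactly these steps, but fans each one out across the cluster.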
 
 <p>A sample reach topology is defined in storm-starter <a href="https://github.com/apache/storm/blob/master/examples/storm-starter/src/jvm/storm/starter/ReachTopology.java">here</a>. Here&#39;s how you define the reach topology:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">LinearDRPCTopologyBuilder</span> <span class="n">builder</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">LinearDRPCTopologyBuilder</span><span class="o">(</span><span class="s">&quot;reach&quot;</span><span class="o">);</span>
-<span class="n">builder</span><span class="o">.</span><span class="na">addBolt</span><span class="o">(</span><span class="k">new</span> <span class="nf">GetTweeters</span><span class="o">(),</span> <span class="mi">3</span><span class="o">);</span>
-<span class="n">builder</span><span class="o">.</span><span class="na">addBolt</span><span class="o">(</span><span class="k">new</span> <span class="nf">GetFollowers</span><span class="o">(),</span> <span class="mi">12</span><span class="o">)</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">LinearDRPCTopologyBuilder</span> <span class="n">builder</span> <span class="o">=</span> <span class="k">new</span> <span class="n">LinearDRPCTopologyBuilder</span><span class="o">(</span><span class="s">"reach"</span><span class="o">);</span>
+<span class="n">builder</span><span class="o">.</span><span class="na">addBolt</span><span class="o">(</span><span class="k">new</span> <span class="n">GetTweeters</span><span class="o">(),</span> <span class="mi">3</span><span class="o">);</span>
+<span class="n">builder</span><span class="o">.</span><span class="na">addBolt</span><span class="o">(</span><span class="k">new</span> <span class="n">GetFollowers</span><span class="o">(),</span> <span class="mi">12</span><span class="o">)</span>
         <span class="o">.</span><span class="na">shuffleGrouping</span><span class="o">();</span>
-<span class="n">builder</span><span class="o">.</span><span class="na">addBolt</span><span class="o">(</span><span class="k">new</span> <span class="nf">PartialUniquer</span><span class="o">(),</span> <span class="mi">6</span><span class="o">)</span>
-        <span class="o">.</span><span class="na">fieldsGrouping</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;id&quot;</span><span class="o">,</span> <span class="s">&quot;follower&quot;</span><span class="o">));</span>
-<span class="n">builder</span><span class="o">.</span><span class="na">addBolt</span><span class="o">(</span><span class="k">new</span> <span class="nf">CountAggregator</span><span class="o">(),</span> <span class="mi">2</span><span class="o">)</span>
-        <span class="o">.</span><span class="na">fieldsGrouping</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;id&quot;</span><span class="o">));</span>
+<span class="n">builder</span><span class="o">.</span><span class="na">addBolt</span><span class="o">(</span><span class="k">new</span> <span class="n">PartialUniquer</span><span class="o">(),</span> <span class="mi">6</span><span class="o">)</span>
+        <span class="o">.</span><span class="na">fieldsGrouping</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"id"</span><span class="o">,</span> <span class="s">"follower"</span><span class="o">));</span>
+<span class="n">builder</span><span class="o">.</span><span class="na">addBolt</span><span class="o">(</span><span class="k">new</span> <span class="n">CountAggregator</span><span class="o">(),</span> <span class="mi">2</span><span class="o">)</span>
+        <span class="o">.</span><span class="na">fieldsGrouping</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"id"</span><span class="o">));</span>
 </code></pre></div>
 <p>The topology executes as four steps:</p>
 
@@ -219,24 +219,24 @@
     <span class="n">Set</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;</span> <span class="n">_followers</span> <span class="o">=</span> <span class="k">new</span> <span class="n">HashSet</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;();</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">prepare</span><span class="o">(</span><span class="n">Map</span> <span class="n">conf</span><span class="o">,</span> <span class="n">TopologyContext</span> <span class="n">context</span><span class="o">,</span> <span class="n">BatchOutputCollector</span> <span class="n">collector</span><span class="o">,</span> <span class="n">Object</span> <span class="n">id</span><span class="o">)</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">prepare</span><span class="o">(</span><span class="n">Map</span> <span class="n">conf</span><span class="o">,</span> <span class="n">TopologyContext</span> <span class="n">context</span><span class="o">,</span> <span class="n">BatchOutputCollector</span> <span class="n">collector</span><span class="o">,</span> <span class="n">Object</span> <span class="n">id</span><span class="o">)</span> <span class="o">{</span>
         <span class="n">_collector</span> <span class="o">=</span> <span class="n">collector</span><span class="o">;</span>
         <span class="n">_id</span> <span class="o">=</span> <span class="n">id</span><span class="o">;</span>
     <span class="o">}</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">execute</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">tuple</span><span class="o">)</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">execute</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">tuple</span><span class="o">)</span> <span class="o">{</span>
         <span class="n">_followers</span><span class="o">.</span><span class="na">add</span><span class="o">(</span><span class="n">tuple</span><span class="o">.</span><span class="na">getString</span><span class="o">(</span><span class="mi">1</span><span class="o">));</span>
     <span class="o">}</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">finishBatch</span><span class="o">()</span> <span class="o">{</span>
-        <span class="n">_collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="k">new</span> <span class="nf">Values</span><span class="o">(</span><span class="n">_id</span><span class="o">,</span> <span class="n">_followers</span><span class="o">.</span><span class="na">size</span><span class="o">()));</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">finishBatch</span><span class="o">()</span> <span class="o">{</span>
+        <span class="n">_collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="k">new</span> <span class="n">Values</span><span class="o">(</span><span class="n">_id</span><span class="o">,</span> <span class="n">_followers</span><span class="o">.</span><span class="na">size</span><span class="o">()));</span>
     <span class="o">}</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">declareOutputFields</span><span class="o">(</span><span class="n">OutputFieldsDeclarer</span> <span class="n">declarer</span><span class="o">)</span> <span class="o">{</span>
-        <span class="n">declarer</span><span class="o">.</span><span class="na">declare</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;id&quot;</span><span class="o">,</span> <span class="s">&quot;partial-count&quot;</span><span class="o">));</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">declareOutputFields</span><span class="o">(</span><span class="n">OutputFieldsDeclarer</span> <span class="n">declarer</span><span class="o">)</span> <span class="o">{</span>
+        <span class="n">declarer</span><span class="o">.</span><span class="na">declare</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"id"</span><span class="o">,</span> <span class="s">"partial-count"</span><span class="o">));</span>
     <span class="o">}</span>
 <span class="o">}</span>
 </code></pre></div>
diff --git a/_site/documentation/FAQ.html b/_site/documentation/FAQ.html
index 3c33159..e9781a0 100644
--- a/_site/documentation/FAQ.html
+++ b/_site/documentation/FAQ.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -92,7 +92,7 @@
 
 <h2 id="best-practices">Best Practices</h2>
 
-<h3 id="what-rules-of-thumb-can-you-give-me-for-configuring-storm+trident?">What rules of thumb can you give me for configuring Storm+Trident?</h3>
+<h3 id="what-rules-of-thumb-can-you-give-me-for-configuring-storm-trident">What rules of thumb can you give me for configuring Storm+Trident?</h3>
 
 <ul>
 <li>number of workers a multiple of number of machines; parallelism a multiple of number of workers; number of kafka partitions a multiple of number of spout parallelism</li>
@@ -105,7 +105,7 @@
 <li>Start with a max spout pending that is for sure too small -- one for trident, or the number of executors for storm -- and increase it until you stop seeing changes in the flow. You&#39;ll probably end up with something near <code>2*(throughput in recs/sec)*(end-to-end latency)</code> (2x the Little&#39;s law capacity).</li>
 </ul>
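The Little's-law rule of thumb above is plain arithmetic; a minimal sketch, with illustrative figures that are not taken from this page:

```java
// Rough starting point for max spout pending, per the rule of thumb above:
// 2 x (throughput in recs/sec) x (end-to-end latency in sec), i.e. 2x the
// Little's-law capacity. The class name and inputs are illustrative.
public class MaxSpoutPendingEstimate {
    static long estimate(double recsPerSec, double endToEndLatencySec) {
        return Math.round(2 * recsPerSec * endToEndLatencySec);
    }

    public static void main(String[] args) {
        // e.g. 5000 recs/sec with 200 ms end-to-end latency
        System.out.println(estimate(5000, 0.2)); // prints 2000
    }
}
```

Treat the result as a starting point for tuning, not a final setting: increase it until throughput stops changing, as described above.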
 
-<h3 id="what-are-some-of-the-best-ways-to-get-a-worker-to-mysteriously-and-bafflingly-die?">What are some of the best ways to get a worker to mysteriously and bafflingly die?</h3>
+<h3 id="what-are-some-of-the-best-ways-to-get-a-worker-to-mysteriously-and-bafflingly-die">What are some of the best ways to get a worker to mysteriously and bafflingly die?</h3>
 
 <ul>
 <li>Do you have write access to the log directory</li>
@@ -116,7 +116,7 @@
 <li>Have you opened firewall/securitygroup permissions <em>bidirectionally</em> among a) all the workers, b) the storm master, c) zookeeper? Also, from the workers to any kafka/kestrel/database/etc that your topology accesses? Use netcat to poke the appropriate ports and be sure. </li>
 </ul>
 
-<h3 id="halp!-i-cannot-see:">Halp! I cannot see:</h3>
+<h3 id="halp-i-cannot-see">Halp! I cannot see:</h3>
 
 <ul>
 <li><strong>my logs</strong> Logs by default go to $STORM_HOME/logs. Check that you have write permissions to that directory. They are configured in 
@@ -130,7 +130,7 @@
 <li><strong>final Java system properties</strong> Add <code>Properties props = System.getProperties(); props.list(System.out);</code> near where you build your topology.</li>
 </ul>
 
-<h3 id="how-many-workers-should-i-use?">How many Workers should I use?</h3>
+<h3 id="how-many-workers-should-i-use">How many Workers should I use?</h3>
 
 <p>The total number of workers is set by the supervisors -- there&#39;s some number of JVM slots each supervisor will superintend. The thing you set on the topology is how many worker slots it will try to claim.</p>
 
@@ -146,14 +146,14 @@
 
 <h2 id="topology">Topology</h2>
 
-<h3 id="can-a-trident-topology-have-multiple-streams?">Can a Trident topology have Multiple Streams?</h3>
+<h3 id="can-a-trident-topology-have-multiple-streams">Can a Trident topology have Multiple Streams?</h3>
 
 <blockquote>
 <p>Can a Trident Topology work like a workflow with conditional paths (if-else)? e.g. A Spout (S1) connects to a bolt (B0) which based on certain values in the incoming tuple routes them to either bolt (B1) or bolt (B2) but not both.</p>
 </blockquote>
 
 <p>A Trident &quot;each&quot; operator returns a Stream object, which you can store in a variable. You can then run multiple eaches on the same Stream to split it, e.g.: </p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">    Stream s = topology.each(...).groupBy(...).aggregate(...) 
+<div class="highlight"><pre><code class="language-" data-lang="">    Stream s = topology.each(...).groupBy(...).aggregate(...) 
     Stream branch1 = s.each(..., FilterA) 
     Stream branch2 = s.each(..., FilterB) 
 </code></pre></div>
@@ -163,23 +163,23 @@
 
 <h2 id="spouts">Spouts</h2>
 
-<h3 id="what-is-a-coordinator,-and-why-are-there-several?">What is a coordinator, and why are there several?</h3>
+<h3 id="what-is-a-coordinator-and-why-are-there-several">What is a coordinator, and why are there several?</h3>
 
 <p>A trident-spout is actually run within a storm <em>bolt</em>. The storm-spout of a trident topology is the MasterBatchCoordinator -- it coordinates trident batches and is the same no matter what spouts you use. A batch is born when the MBC dispenses a seed tuple to each of the spout-coordinators. The spout-coordinator bolts know how your particular spouts should cooperate -- so in the kafka case, it&#39;s what helps figure out what partition and offset range each spout should pull from.</p>
 
-<h3 id="what-can-i-store-into-the-spout&#39;s-metadata-record?">What can I store into the spout&#39;s metadata record?</h3>
+<h3 id="what-can-i-store-into-the-spout-39-s-metadata-record">What can I store into the spout&#39;s metadata record?</h3>
 
 <p>You should only store static data, and as little of it as possible, into the metadata record (note: maybe you <em>can</em> store more interesting things; you shouldn&#39;t, though).</p>
 
-<h3 id="how-often-is-the-&#39;emitpartitionbatchnew&#39;-function-called?">How often is the &#39;emitPartitionBatchNew&#39; function called?</h3>
+<h3 id="how-often-is-the-39-emitpartitionbatchnew-39-function-called">How often is the &#39;emitPartitionBatchNew&#39; function called?</h3>
 
 <p>Since the MBC is the actual spout, all the tuples in a batch are just members of its tupletree. That means storm&#39;s &quot;max spout pending&quot; config effectively defines the number of concurrent batches trident runs. The MBC emits a new batch if it has fewer than max-spout-pending tuples pending and if at least one <a href="https://github.com/apache/storm/blob/master/conf/defaults.yaml#L115">trident batch interval</a>&#39;s worth of seconds has passed since the last batch.</p>
 
-<h3 id="if-nothing-was-emitted-does-trident-slow-down-the-calls?">If nothing was emitted does Trident slow down the calls?</h3>
+<h3 id="if-nothing-was-emitted-does-trident-slow-down-the-calls">If nothing was emitted does Trident slow down the calls?</h3>
 
 <p>Yes, there&#39;s a pluggable &quot;spout wait strategy&quot;; the default is to sleep for a <a href="https://github.com/apache/storm/blob/master/conf/defaults.yaml#L110">configurable amount of time</a>.</p>
 
-<h3 id="ok,-then-what-is-the-trident-batch-interval-for?">OK, then what is the trident batch interval for?</h3>
+<h3 id="ok-then-what-is-the-trident-batch-interval-for">OK, then what is the trident batch interval for?</h3>
 
 <p>You know how computers of the 486 era had a <a href="http://en.wikipedia.org/wiki/Turbo_button">turbo button</a> on them? It&#39;s like that. </p>
 
@@ -189,11 +189,11 @@
 
 <p>Note that this is a cap, not an additional delay -- with a period of 300ms, if your batch takes 258ms Trident will only delay an additional 42ms.</p>
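The cap behavior above can be sketched as a one-liner; the method name and figures are illustrative, not Storm internals:

```java
// "A cap, not an additional delay": with a 300 ms trident batch interval,
// a batch that took 258 ms is delayed only a further 42 ms, and a batch
// that overruns the interval is not delayed at all.
public class BatchIntervalCap {
    static long extraDelayMs(long intervalMs, long batchDurationMs) {
        return Math.max(0, intervalMs - batchDurationMs);
    }

    public static void main(String[] args) {
        System.out.println(extraDelayMs(300, 258)); // prints 42
        System.out.println(extraDelayMs(300, 450)); // prints 0
    }
}
```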
 
-<h3 id="how-do-you-set-the-batch-size?">How do you set the batch size?</h3>
+<h3 id="how-do-you-set-the-batch-size">How do you set the batch size?</h3>
 
 <p>Trident doesn&#39;t place its own limits on the batch count. In the case of the Kafka spout, the maximum fetch size in bytes divided by the average record size defines the effective records per subbatch partition.</p>
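A back-of-envelope version of that division, with illustrative byte figures (the class and numbers are assumptions, not values from the Kafka spout):

```java
// For the Kafka spout, the effective records per subbatch partition is
// roughly the max fetch size in bytes divided by the average record size.
public class KafkaBatchEstimate {
    static long recordsPerPartition(long maxFetchBytes, long avgRecordBytes) {
        return maxFetchBytes / avgRecordBytes;
    }

    public static void main(String[] args) {
        // e.g. a 1 MiB fetch size with ~512-byte records
        System.out.println(recordsPerPartition(1_048_576, 512)); // prints 2048
    }
}
```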
 
-<h3 id="how-do-i-resize-a-batch?">How do I resize a batch?</h3>
+<h3 id="how-do-i-resize-a-batch">How do I resize a batch?</h3>
 
 <p>The trident batch is a somewhat overloaded facility. Together with the number of partitions, the batch size is constrained by or serves to define</p>
 
@@ -208,13 +208,13 @@
 
 <h2 id="time-series">Time Series</h2>
 
-<h3 id="how-do-i-aggregate-events-by-time?">How do I aggregate events by time?</h3>
+<h3 id="how-do-i-aggregate-events-by-time">How do I aggregate events by time?</h3>
 
 <p>If you have records with an immutable timestamp and you would like to count, average, or otherwise aggregate them into discrete time buckets, Trident is an excellent and scalable solution.</p>
 
 <p>Write an <code>Each</code> function that turns the timestamp into a time bucket: if the bucket size was &quot;by hour&quot;, then the timestamp <code>2013-08-08 12:34:56</code> would be mapped to the <code>2013-08-08 12:00:00</code> time bucket, and so would everything else in the twelve o&#39;clock hour. Then group on that timebucket and use a grouped persistentAggregate. The persistentAggregate uses a local cacheMap backed by a data store. Groups with many records require very few reads from the data store, and use efficient bulk reads and writes; as long as your data feed is relatively prompt Trident will make very efficient use of memory and network. Even if a server drops off line for a day, then delivers that full day&#39;s worth of data in a rush, the old results will be calmly retrieved and updated -- and without interfering with calculating the current results.</p>
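The bucketing step itself is simple; a minimal sketch using plain <code>java.time</code> rather than Storm/Trident classes (the class name is illustrative):

```java
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;

// Map an immutable event timestamp onto its hourly bucket, so
// 2013-08-08 12:34:56 lands in the 2013-08-08 12:00:00 bucket, along with
// everything else in the twelve o'clock hour.
public class HourBucket {
    static LocalDateTime bucketOf(LocalDateTime eventTime) {
        return eventTime.truncatedTo(ChronoUnit.HOURS);
    }

    public static void main(String[] args) {
        LocalDateTime ts = LocalDateTime.of(2013, 8, 8, 12, 34, 56);
        System.out.println(bucketOf(ts)); // prints 2013-08-08T12:00
    }
}
```

In a real topology this computation would live inside the <code>Each</code> function, with the bucket emitted as the grouping field.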
 
-<h3 id="how-can-i-know-that-all-records-for-a-time-bucket-have-been-received?">How can I know that all records for a time bucket have been received?</h3>
+<h3 id="how-can-i-know-that-all-records-for-a-time-bucket-have-been-received">How can I know that all records for a time bucket have been received?</h3>
 
 <p>You cannot know that all events are collected -- this is an epistemological challenge, not a distributed systems challenge. You can:</p>
 
diff --git a/_site/documentation/Fault-tolerance.html b/_site/documentation/Fault-tolerance.html
index b3335dc..9ce6412 100644
--- a/_site/documentation/Fault-tolerance.html
+++ b/_site/documentation/Fault-tolerance.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -92,27 +92,27 @@
 
 <p>This page explains the design details of Storm that make it a fault-tolerant system.</p>
 
-<h2 id="what-happens-when-a-worker-dies?">What happens when a worker dies?</h2>
+<h2 id="what-happens-when-a-worker-dies">What happens when a worker dies?</h2>
 
 <p>When a worker dies, the supervisor will restart it. If it continuously fails on startup and is unable to heartbeat to Nimbus, Nimbus will reassign the worker to another machine.</p>
 
-<h2 id="what-happens-when-a-node-dies?">What happens when a node dies?</h2>
+<h2 id="what-happens-when-a-node-dies">What happens when a node dies?</h2>
 
 <p>The tasks assigned to that machine will time-out and Nimbus will reassign those tasks to other machines.</p>
 
-<h2 id="what-happens-when-nimbus-or-supervisor-daemons-die?">What happens when Nimbus or Supervisor daemons die?</h2>
+<h2 id="what-happens-when-nimbus-or-supervisor-daemons-die">What happens when Nimbus or Supervisor daemons die?</h2>
 
 <p>The Nimbus and Supervisor daemons are designed to be fail-fast (process self-destructs whenever any unexpected situation is encountered) and stateless (all state is kept in Zookeeper or on disk). As described in <a href="Setting-up-a-Storm-cluster.html">Setting up a Storm cluster</a>, the Nimbus and Supervisor daemons must be run under supervision using a tool like daemontools or monit. So if the Nimbus or Supervisor daemons die, they restart like nothing happened.</p>
 
 <p>Most notably, no worker processes are affected by the death of Nimbus or the Supervisors. This is in contrast to Hadoop, where if the JobTracker dies, all the running jobs are lost. </p>
 
-<h2 id="is-nimbus-a-single-point-of-failure?">Is Nimbus a single point of failure?</h2>
+<h2 id="is-nimbus-a-single-point-of-failure">Is Nimbus a single point of failure?</h2>
 
 <p>If you lose the Nimbus node, the workers will still continue to function. Additionally, supervisors will continue to restart workers if they die. However, without Nimbus, workers won&#39;t be reassigned to other machines when necessary (like if you lose a worker machine). </p>
 
 <p>So the answer is that Nimbus is &quot;sort of&quot; a SPOF. In practice, it&#39;s not a big deal since nothing catastrophic happens when the Nimbus daemon dies. There are plans to make Nimbus highly available in the future.</p>
 
-<h2 id="how-does-storm-guarantee-data-processing?">How does Storm guarantee data processing?</h2>
+<h2 id="how-does-storm-guarantee-data-processing">How does Storm guarantee data processing?</h2>
 
 <p>Storm provides mechanisms to guarantee data processing even if nodes die or messages are lost. See <a href="Guaranteeing-message-processing.html">Guaranteeing message processing</a> for the details.</p>
 
diff --git a/_site/documentation/Guaranteeing-message-processing.html b/_site/documentation/Guaranteeing-message-processing.html
index 05fdfb4..1ec0ce1 100644
--- a/_site/documentation/Guaranteeing-message-processing.html
+++ b/_site/documentation/Guaranteeing-message-processing.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -92,18 +92,18 @@
 
 <p>Storm guarantees that each message coming off a spout will be fully processed. This page describes how Storm accomplishes this guarantee and what you have to do as a user to benefit from Storm&#39;s reliability capabilities.</p>
 
-<h3 id="what-does-it-mean-for-a-message-to-be-&quot;fully-processed&quot;?">What does it mean for a message to be &quot;fully processed&quot;?</h3>
+<h3 id="what-does-it-mean-for-a-message-to-be-quot-fully-processed-quot">What does it mean for a message to be &quot;fully processed&quot;?</h3>
 
 <p>A tuple coming off a spout can trigger thousands of tuples to be created based on it. Consider, for example, the streaming word count topology:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">TopologyBuilder</span> <span class="n">builder</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">TopologyBuilder</span><span class="o">();</span>
-<span class="n">builder</span><span class="o">.</span><span class="na">setSpout</span><span class="o">(</span><span class="s">&quot;sentences&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">KestrelSpout</span><span class="o">(</span><span class="s">&quot;kestrel.backtype.com&quot;</span><span class="o">,</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">TopologyBuilder</span> <span class="n">builder</span> <span class="o">=</span> <span class="k">new</span> <span class="n">TopologyBuilder</span><span class="o">();</span>
+<span class="n">builder</span><span class="o">.</span><span class="na">setSpout</span><span class="o">(</span><span class="s">"sentences"</span><span class="o">,</span> <span class="k">new</span> <span class="n">KestrelSpout</span><span class="o">(</span><span class="s">"kestrel.backtype.com"</span><span class="o">,</span>
                                                <span class="mi">22133</span><span class="o">,</span>
-                                               <span class="s">&quot;sentence_queue&quot;</span><span class="o">,</span>
-                                               <span class="k">new</span> <span class="nf">StringScheme</span><span class="o">()));</span>
-<span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">&quot;split&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">SplitSentence</span><span class="o">(),</span> <span class="mi">10</span><span class="o">)</span>
-        <span class="o">.</span><span class="na">shuffleGrouping</span><span class="o">(</span><span class="s">&quot;sentences&quot;</span><span class="o">);</span>
-<span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">&quot;count&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">WordCount</span><span class="o">(),</span> <span class="mi">20</span><span class="o">)</span>
-        <span class="o">.</span><span class="na">fieldsGrouping</span><span class="o">(</span><span class="s">&quot;split&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;word&quot;</span><span class="o">));</span>
+                                               <span class="s">"sentence_queue"</span><span class="o">,</span>
+                                               <span class="k">new</span> <span class="n">StringScheme</span><span class="o">()));</span>
+<span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">"split"</span><span class="o">,</span> <span class="k">new</span> <span class="n">SplitSentence</span><span class="o">(),</span> <span class="mi">10</span><span class="o">)</span>
+        <span class="o">.</span><span class="na">shuffleGrouping</span><span class="o">(</span><span class="s">"sentences"</span><span class="o">);</span>
+<span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">"count"</span><span class="o">,</span> <span class="k">new</span> <span class="n">WordCount</span><span class="o">(),</span> <span class="mi">20</span><span class="o">)</span>
+        <span class="o">.</span><span class="na">fieldsGrouping</span><span class="o">(</span><span class="s">"split"</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"word"</span><span class="o">));</span>
 </code></pre></div>
 <p>This topology reads sentences off of a Kestrel queue, splits the sentences into their constituent words, and then emits for each word the number of times it has seen that word before. A tuple coming off the spout triggers many tuples being created based on it: a tuple for each word in the sentence and a tuple for the updated count for each word. The tree of messages looks something like this:</p>
 
@@ -111,25 +111,25 @@
 
 <p>Storm considers a tuple coming off a spout &quot;fully processed&quot; when the tuple tree has been exhausted and every message in the tree has been processed. A tuple is considered failed when its tree of messages fails to be fully processed within a specified timeout. This timeout can be configured on a topology-specific basis using the <a href="/javadoc/apidocs/backtype/storm/Config.html#TOPOLOGY_MESSAGE_TIMEOUT_SECS">Config.TOPOLOGY_MESSAGE_TIMEOUT_SECS</a> configuration and defaults to 30 seconds.</p>
 
-<h3 id="what-happens-if-a-message-is-fully-processed-or-fails-to-be-fully-processed?">What happens if a message is fully processed or fails to be fully processed?</h3>
+<h3 id="what-happens-if-a-message-is-fully-processed-or-fails-to-be-fully-processed">What happens if a message is fully processed or fails to be fully processed?</h3>
 
 <p>To understand this question, let&#39;s take a look at the lifecycle of a tuple coming off of a spout. For reference, here is the interface that spouts implement (see the <a href="/javadoc/apidocs/backtype/storm/spout/ISpout.html">Javadoc</a> for more information):</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">interface</span> <span class="nc">ISpout</span> <span class="kd">extends</span> <span class="n">Serializable</span> <span class="o">{</span>
-    <span class="kt">void</span> <span class="nf">open</span><span class="o">(</span><span class="n">Map</span> <span class="n">conf</span><span class="o">,</span> <span class="n">TopologyContext</span> <span class="n">context</span><span class="o">,</span> <span class="n">SpoutOutputCollector</span> <span class="n">collector</span><span class="o">);</span>
-    <span class="kt">void</span> <span class="nf">close</span><span class="o">();</span>
-    <span class="kt">void</span> <span class="nf">nextTuple</span><span class="o">();</span>
-    <span class="kt">void</span> <span class="nf">ack</span><span class="o">(</span><span class="n">Object</span> <span class="n">msgId</span><span class="o">);</span>
-    <span class="kt">void</span> <span class="nf">fail</span><span class="o">(</span><span class="n">Object</span> <span class="n">msgId</span><span class="o">);</span>
+    <span class="kt">void</span> <span class="n">open</span><span class="o">(</span><span class="n">Map</span> <span class="n">conf</span><span class="o">,</span> <span class="n">TopologyContext</span> <span class="n">context</span><span class="o">,</span> <span class="n">SpoutOutputCollector</span> <span class="n">collector</span><span class="o">);</span>
+    <span class="kt">void</span> <span class="n">close</span><span class="o">();</span>
+    <span class="kt">void</span> <span class="n">nextTuple</span><span class="o">();</span>
+    <span class="kt">void</span> <span class="n">ack</span><span class="o">(</span><span class="n">Object</span> <span class="n">msgId</span><span class="o">);</span>
+    <span class="kt">void</span> <span class="n">fail</span><span class="o">(</span><span class="n">Object</span> <span class="n">msgId</span><span class="o">);</span>
 <span class="o">}</span>
 </code></pre></div>
 <p>First, Storm requests a tuple from the <code>Spout</code> by calling the <code>nextTuple</code> method on the <code>Spout</code>. The <code>Spout</code> uses the <code>SpoutOutputCollector</code> provided in the <code>open</code> method to emit a tuple to one of its output streams. When emitting a tuple, the <code>Spout</code> provides a &quot;message id&quot; that will be used to identify the tuple later. For example, the <code>KestrelSpout</code> reads a message off of the kestrel queue and emits as the &quot;message id&quot; the id provided by Kestrel for the message. Emitting a message to the <code>SpoutOutputCollector</code> looks like this:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">_collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="k">new</span> <span class="nf">Values</span><span class="o">(</span><span class="s">&quot;field1&quot;</span><span class="o">,</span> <span class="s">&quot;field2&quot;</span><span class="o">,</span> <span class="mi">3</span><span class="o">)</span> <span class="o">,</span> <span class="n">msgId</span><span class="o">);</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">_collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="k">new</span> <span class="n">Values</span><span class="o">(</span><span class="s">"field1"</span><span class="o">,</span> <span class="s">"field2"</span><span class="o">,</span> <span class="mi">3</span><span class="o">)</span> <span class="o">,</span> <span class="n">msgId</span><span class="o">);</span>
 </code></pre></div>
 <p>Next, the tuple gets sent to consuming bolts and Storm takes care of tracking the tree of messages that is created. If Storm detects that a tuple is fully processed, Storm will call the <code>ack</code> method on the originating <code>Spout</code> task with the message id that the <code>Spout</code> provided to Storm. Likewise, if the tuple times out Storm will call the <code>fail</code> method on the <code>Spout</code>. Note that a tuple will be acked or failed by the exact same <code>Spout</code> task that created it. So if a <code>Spout</code> is executing many tasks across the cluster, a tuple won&#39;t be acked or failed by a different task than the one that created it.</p>
 
 <p>Let&#39;s use <code>KestrelSpout</code> again to see what a <code>Spout</code> needs to do to guarantee message processing. When <code>KestrelSpout</code> takes a message off the Kestrel queue, it &quot;opens&quot; the message. This means the message is not actually taken off the queue yet, but instead placed in a &quot;pending&quot; state waiting for acknowledgement that the message is completed. While in the pending state, a message will not be sent to other consumers of the queue. Additionally, if a client disconnects all pending messages for that client are put back on the queue. When a message is opened, Kestrel provides the client with the data for the message as well as a unique id for the message. The <code>KestrelSpout</code> uses that exact id as the &quot;message id&quot; for the tuple when emitting the tuple to the <code>SpoutOutputCollector</code>. Sometime later on, when <code>ack</code> or <code>fail</code> are called on the <code>KestrelSpout</code>, the <code>KestrelSpout</code> sends an ack or fail message to Kestrel with the message id to take the message off the queue or have it put back on.</p>
 
-<h3 id="what-is-storm&#39;s-reliability-api?">What is Storm&#39;s reliability API?</h3>
+<h3 id="what-is-storm-39-s-reliability-api">What is Storm&#39;s reliability API?</h3>
 
 <p>There are two things you have to do as a user to benefit from Storm&#39;s reliability capabilities. First, you need to tell Storm whenever you&#39;re creating a new link in the tree of tuples. Second, you need to tell Storm when you have finished processing an individual tuple. By doing both these things, Storm can detect when the tree of tuples is fully processed and can ack or fail the spout tuple appropriately. Storm&#39;s API provides a concise way of doing both of these tasks.</p>
 
@@ -137,25 +137,25 @@
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">class</span> <span class="nc">SplitSentence</span> <span class="kd">extends</span> <span class="n">BaseRichBolt</span> <span class="o">{</span>
         <span class="n">OutputCollector</span> <span class="n">_collector</span><span class="o">;</span>
 
-        <span class="kd">public</span> <span class="kt">void</span> <span class="nf">prepare</span><span class="o">(</span><span class="n">Map</span> <span class="n">conf</span><span class="o">,</span> <span class="n">TopologyContext</span> <span class="n">context</span><span class="o">,</span> <span class="n">OutputCollector</span> <span class="n">collector</span><span class="o">)</span> <span class="o">{</span>
+        <span class="kd">public</span> <span class="kt">void</span> <span class="n">prepare</span><span class="o">(</span><span class="n">Map</span> <span class="n">conf</span><span class="o">,</span> <span class="n">TopologyContext</span> <span class="n">context</span><span class="o">,</span> <span class="n">OutputCollector</span> <span class="n">collector</span><span class="o">)</span> <span class="o">{</span>
             <span class="n">_collector</span> <span class="o">=</span> <span class="n">collector</span><span class="o">;</span>
         <span class="o">}</span>
 
-        <span class="kd">public</span> <span class="kt">void</span> <span class="nf">execute</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">tuple</span><span class="o">)</span> <span class="o">{</span>
+        <span class="kd">public</span> <span class="kt">void</span> <span class="n">execute</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">tuple</span><span class="o">)</span> <span class="o">{</span>
             <span class="n">String</span> <span class="n">sentence</span> <span class="o">=</span> <span class="n">tuple</span><span class="o">.</span><span class="na">getString</span><span class="o">(</span><span class="mi">0</span><span class="o">);</span>
-            <span class="k">for</span><span class="o">(</span><span class="n">String</span> <span class="nl">word:</span> <span class="n">sentence</span><span class="o">.</span><span class="na">split</span><span class="o">(</span><span class="s">&quot; &quot;</span><span class="o">))</span> <span class="o">{</span>
-                <span class="n">_collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="n">tuple</span><span class="o">,</span> <span class="k">new</span> <span class="nf">Values</span><span class="o">(</span><span class="n">word</span><span class="o">));</span>
+            <span class="k">for</span><span class="o">(</span><span class="n">String</span> <span class="nl">word:</span> <span class="n">sentence</span><span class="o">.</span><span class="na">split</span><span class="o">(</span><span class="s">" "</span><span class="o">))</span> <span class="o">{</span>
+                <span class="n">_collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="n">tuple</span><span class="o">,</span> <span class="k">new</span> <span class="n">Values</span><span class="o">(</span><span class="n">word</span><span class="o">));</span>
             <span class="o">}</span>
             <span class="n">_collector</span><span class="o">.</span><span class="na">ack</span><span class="o">(</span><span class="n">tuple</span><span class="o">);</span>
         <span class="o">}</span>
 
-        <span class="kd">public</span> <span class="kt">void</span> <span class="nf">declareOutputFields</span><span class="o">(</span><span class="n">OutputFieldsDeclarer</span> <span class="n">declarer</span><span class="o">)</span> <span class="o">{</span>
-            <span class="n">declarer</span><span class="o">.</span><span class="na">declare</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;word&quot;</span><span class="o">));</span>
+        <span class="kd">public</span> <span class="kt">void</span> <span class="n">declareOutputFields</span><span class="o">(</span><span class="n">OutputFieldsDeclarer</span> <span class="n">declarer</span><span class="o">)</span> <span class="o">{</span>
+            <span class="n">declarer</span><span class="o">.</span><span class="na">declare</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"word"</span><span class="o">));</span>
         <span class="o">}</span>        
     <span class="o">}</span>
 </code></pre></div>
 <p>Each word tuple is <em>anchored</em> by specifying the input tuple as the first argument to <code>emit</code>. Since the word tuple is anchored, the spout tuple at the root of the tree will be replayed later on if the word tuple failed to be processed downstream. In contrast, let&#39;s look at what happens if the word tuple is emitted like this:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">_collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="k">new</span> <span class="nf">Values</span><span class="o">(</span><span class="n">word</span><span class="o">));</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">_collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="k">new</span> <span class="n">Values</span><span class="o">(</span><span class="n">word</span><span class="o">));</span>
 </code></pre></div>
 <p>Emitting the word tuple this way causes it to be <em>unanchored</em>. If the tuple fails to be processed downstream, the root tuple will not be replayed. Depending on the fault-tolerance guarantees you need in your topology, sometimes it&#39;s appropriate to emit an unanchored tuple.</p>
 
@@ -163,7 +163,7 @@
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">List</span><span class="o">&lt;</span><span class="n">Tuple</span><span class="o">&gt;</span> <span class="n">anchors</span> <span class="o">=</span> <span class="k">new</span> <span class="n">ArrayList</span><span class="o">&lt;</span><span class="n">Tuple</span><span class="o">&gt;();</span>
 <span class="n">anchors</span><span class="o">.</span><span class="na">add</span><span class="o">(</span><span class="n">tuple1</span><span class="o">);</span>
 <span class="n">anchors</span><span class="o">.</span><span class="na">add</span><span class="o">(</span><span class="n">tuple2</span><span class="o">);</span>
-<span class="n">_collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="n">anchors</span><span class="o">,</span> <span class="k">new</span> <span class="nf">Values</span><span class="o">(</span><span class="mi">1</span><span class="o">,</span> <span class="mi">2</span><span class="o">,</span> <span class="mi">3</span><span class="o">));</span>
+<span class="n">_collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="n">anchors</span><span class="o">,</span> <span class="k">new</span> <span class="n">Values</span><span class="o">(</span><span class="mi">1</span><span class="o">,</span> <span class="mi">2</span><span class="o">,</span> <span class="mi">3</span><span class="o">));</span>
 </code></pre></div>
 <p>Multi-anchoring adds the output tuple into multiple tuple trees. Note that it&#39;s also possible for multi-anchoring to break the tree structure and create tuple DAGs, like so:</p>
 
@@ -179,15 +179,15 @@
 
 <p>A lot of bolts follow a common pattern of reading an input tuple, emitting tuples based on it, and then acking the tuple at the end of the <code>execute</code> method. These bolts fall into the categories of filters and simple functions. Storm has an interface called <code>BasicBolt</code> that encapsulates this pattern for you. The <code>SplitSentence</code> example can be written as a <code>BasicBolt</code> as follows:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">class</span> <span class="nc">SplitSentence</span> <span class="kd">extends</span> <span class="n">BaseBasicBolt</span> <span class="o">{</span>
-        <span class="kd">public</span> <span class="kt">void</span> <span class="nf">execute</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">tuple</span><span class="o">,</span> <span class="n">BasicOutputCollector</span> <span class="n">collector</span><span class="o">)</span> <span class="o">{</span>
+        <span class="kd">public</span> <span class="kt">void</span> <span class="n">execute</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">tuple</span><span class="o">,</span> <span class="n">BasicOutputCollector</span> <span class="n">collector</span><span class="o">)</span> <span class="o">{</span>
             <span class="n">String</span> <span class="n">sentence</span> <span class="o">=</span> <span class="n">tuple</span><span class="o">.</span><span class="na">getString</span><span class="o">(</span><span class="mi">0</span><span class="o">);</span>
-            <span class="k">for</span><span class="o">(</span><span class="n">String</span> <span class="nl">word:</span> <span class="n">sentence</span><span class="o">.</span><span class="na">split</span><span class="o">(</span><span class="s">&quot; &quot;</span><span class="o">))</span> <span class="o">{</span>
-                <span class="n">collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="k">new</span> <span class="nf">Values</span><span class="o">(</span><span class="n">word</span><span class="o">));</span>
+            <span class="k">for</span><span class="o">(</span><span class="n">String</span> <span class="nl">word:</span> <span class="n">sentence</span><span class="o">.</span><span class="na">split</span><span class="o">(</span><span class="s">" "</span><span class="o">))</span> <span class="o">{</span>
+                <span class="n">collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="k">new</span> <span class="n">Values</span><span class="o">(</span><span class="n">word</span><span class="o">));</span>
             <span class="o">}</span>
         <span class="o">}</span>
 
-        <span class="kd">public</span> <span class="kt">void</span> <span class="nf">declareOutputFields</span><span class="o">(</span><span class="n">OutputFieldsDeclarer</span> <span class="n">declarer</span><span class="o">)</span> <span class="o">{</span>
-            <span class="n">declarer</span><span class="o">.</span><span class="na">declare</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;word&quot;</span><span class="o">));</span>
+        <span class="kd">public</span> <span class="kt">void</span> <span class="n">declareOutputFields</span><span class="o">(</span><span class="n">OutputFieldsDeclarer</span> <span class="n">declarer</span><span class="o">)</span> <span class="o">{</span>
+            <span class="n">declarer</span><span class="o">.</span><span class="na">declare</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"word"</span><span class="o">));</span>
         <span class="o">}</span>        
     <span class="o">}</span>
 </code></pre></div>
@@ -195,11 +195,11 @@
 
 <p>In contrast, bolts that do aggregations or joins may delay acking a tuple until after it has computed a result based on a bunch of tuples. Aggregations and joins will commonly multi-anchor their output tuples as well. These things fall outside the simpler pattern of <code>IBasicBolt</code>.</p>
 
-<h3 id="how-do-i-make-my-applications-work-correctly-given-that-tuples-can-be-replayed?">How do I make my applications work correctly given that tuples can be replayed?</h3>
+<h3 id="how-do-i-make-my-applications-work-correctly-given-that-tuples-can-be-replayed">How do I make my applications work correctly given that tuples can be replayed?</h3>
 
 <p>As always in software design, the answer is &quot;it depends.&quot; Storm 0.7.0 introduced the &quot;transactional topologies&quot; feature, which enables you to get fully fault-tolerant exactly-once messaging semantics for most computations. Read more about transactional topologies <a href="Transactional-topologies.html">here</a>. </p>
 
-<h3 id="how-does-storm-implement-reliability-in-an-efficient-way?">How does Storm implement reliability in an efficient way?</h3>
+<h3 id="how-does-storm-implement-reliability-in-an-efficient-way">How does Storm implement reliability in an efficient way?</h3>
 
 <p>A Storm topology has a set of special &quot;acker&quot; tasks that track the DAG of tuples for every spout tuple. When an acker sees that a DAG is complete, it sends a message to the spout task that created the spout tuple to ack the message. You can set the number of acker executors for a topology in the topology configuration using <a href="/javadoc/apidocs/backtype/storm/Config.html#TOPOLOGY_ACKER_EXECUTORS">Config.TOPOLOGY_ACKER_EXECUTORS</a>. Storm defaults TOPOLOGY_ACKER_EXECUTORS to be equal to the number of workers configured in the topology -- you will need to increase this number for topologies processing large numbers of messages.</p>
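The efficiency comes from tracking each tuple tree in constant memory: every tuple gets a random 64-bit id that is XORed into a single running value once when created and once when acked, and since x ^ x == 0 the value returns to zero exactly when the whole tree has been acked. The standalone sketch below illustrates the idea; class and method names are hypothetical, not Storm's actual implementation:

```java
import java.util.Random;

// Toy sketch (not Storm's actual code) of the constant-memory XOR trick an
// acker task can use: every tuple in the tree gets a random 64-bit id that is
// XORed into one running value when the tuple is created and again when it is
// acked. Since x ^ x == 0, the running value returns to zero exactly when
// every tuple in the tree has been acked.
public class AckerSketch {
    private final Random rng = new Random();
    private long ackVal = 0;

    // Register a newly emitted tuple; returns its id.
    public long emit() {
        long id = rng.nextLong();
        ackVal ^= id;
        return id;
    }

    // Mark a tuple as fully processed.
    public void ack(long id) {
        ackVal ^= id;
    }

    // The tree is complete when every emitted id has been XORed back out.
    public boolean treeComplete() {
        return ackVal == 0;
    }

    public static void main(String[] args) {
        AckerSketch acker = new AckerSketch();
        long word1 = acker.emit();
        long word2 = acker.emit();
        acker.ack(word1);
        System.out.println(acker.treeComplete()); // word2 still pending
        acker.ack(word2);
        System.out.println(acker.treeComplete()); // tree complete
    }
}
```

Note that random ids make the zero check probabilistic: a spurious zero requires a 64-bit collision, which is astronomically unlikely in practice.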
 
diff --git a/_site/documentation/Home.html b/_site/documentation/Home.html
index 10df930b..c3813f4 100644
--- a/_site/documentation/Home.html
+++ b/_site/documentation/Home.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -130,13 +130,13 @@
 
 <p>You can view the archives of the mailing list <a href="http://mail-archives.apache.org/mod_mbox/storm-dev/">here</a>.</p>
 
-<h4 id="which-list-should-i-send/subscribe-to?">Which list should I send/subscribe to?</h4>
+<h4 id="which-list-should-i-send-subscribe-to">Which list should I send/subscribe to?</h4>
 
 <p>If you are using a pre-built binary distribution of Storm, then chances are you should send questions, comments, storm-related announcements, etc. to <a href="mailto:user@storm.apache.org">user@storm.apache.org</a>. </p>
 
 <p>If you are building storm from source, developing new features, or otherwise hacking storm source code, then <a href="mailto:dev@storm.apache.org">dev@storm.apache.org</a> is more appropriate. </p>
 
-<h4 id="what-will-happen-with-storm-user@googlegroups.com?">What will happen with <a href="mailto:storm-user@googlegroups.com">storm-user@googlegroups.com</a>?</h4>
+<h4 id="what-will-happen-with-storm-user-googlegroups-com">What will happen with <a href="mailto:storm-user@googlegroups.com">storm-user@googlegroups.com</a>?</h4>
 
 <p>All existing messages will remain archived there, and can be accessed/searched <a href="https://groups.google.com/forum/#!forum/storm-user">here</a>.</p>
 
diff --git a/_site/documentation/Hooks.html b/_site/documentation/Hooks.html
index e8c48a2..9d9d0e4 100644
--- a/_site/documentation/Hooks.html
+++ b/_site/documentation/Hooks.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
diff --git a/_site/documentation/Implementation-docs.html b/_site/documentation/Implementation-docs.html
index 835d7d4..fe7dd48 100644
--- a/_site/documentation/Implementation-docs.html
+++ b/_site/documentation/Implementation-docs.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
diff --git a/_site/documentation/Installing-native-dependencies.html b/_site/documentation/Installing-native-dependencies.html
index f1894aa..4378012 100644
--- a/_site/documentation/Installing-native-dependencies.html
+++ b/_site/documentation/Installing-native-dependencies.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -95,7 +95,7 @@
 <p>Installing ZeroMQ and JZMQ is usually straightforward. Sometimes, however, people run into issues with autoconf and get strange errors. If you run into any issues, please email the <a href="http://groups.google.com/group/storm-user">Storm mailing list</a> or come get help in the #storm-user room on freenode. </p>
 
 <p>Storm has been tested with ZeroMQ 2.1.7, and this is the recommended ZeroMQ release that you install. You can download a ZeroMQ release <a href="http://download.zeromq.org/">here</a>. Installing ZeroMQ should look something like this:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">wget http://download.zeromq.org/zeromq-2.1.7.tar.gz
+<div class="highlight"><pre><code class="language-" data-lang="">wget http://download.zeromq.org/zeromq-2.1.7.tar.gz
 tar -xzf zeromq-2.1.7.tar.gz
 cd zeromq-2.1.7
 ./configure
@@ -103,7 +103,7 @@
 sudo make install
 </code></pre></div>
 <p>JZMQ is the Java bindings for ZeroMQ. JZMQ doesn&#39;t have any releases (we&#39;re working with them on that), so there is risk of a regression if you always install from the master branch. To prevent a regression from happening, you should instead install from <a href="http://github.com/nathanmarz/jzmq">this fork</a> which is tested to work with Storm. Installing JZMQ should look something like this:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">#install jzmq
+<div class="highlight"><pre><code class="language-" data-lang="">#install jzmq
 git clone https://github.com/nathanmarz/jzmq.git
 cd jzmq
 ./autogen.sh
diff --git a/_site/documentation/Kestrel-and-Storm.html b/_site/documentation/Kestrel-and-Storm.html
index 8ea72ff..424783e 100644
--- a/_site/documentation/Kestrel-and-Storm.html
+++ b/_site/documentation/Kestrel-and-Storm.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -121,15 +121,15 @@
 <h2 id="add-items-to-kestrel">Add items to Kestrel</h2>
 
 <p>First, we need a program that can add items to a Kestrel queue. The following method makes use of the KestrelClient implementation in <a href="https://github.com/nathanmarz/storm-kestrel">storm-kestrel</a>. It adds sentences, randomly chosen from an array of five possibilities, to a Kestrel queue.</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">    private static void queueSentenceItems(KestrelClient kestrelClient, String queueName)
+<div class="highlight"><pre><code class="language-" data-lang="">    private static void queueSentenceItems(KestrelClient kestrelClient, String queueName)
             throws ParseError, IOException {
 
         String[] sentences = new String[] {
-                &quot;the cow jumped over the moon&quot;,
-                &quot;an apple a day keeps the doctor away&quot;,
-                &quot;four score and seven years ago&quot;,
-                &quot;snow white and the seven dwarfs&quot;,
-                &quot;i am at two with nature&quot;};
+                "the cow jumped over the moon",
+                "an apple a day keeps the doctor away",
+                "four score and seven years ago",
+                "snow white and the seven dwarfs",
+                "i am at two with nature"};
 
         Random _rand = new Random();
 
@@ -137,11 +137,11 @@
 
             String sentence = sentences[_rand.nextInt(sentences.length)];
 
-            String val = &quot;ID &quot; + i + &quot; &quot; + sentence;
+            String val = "ID " + i + " " + sentence;
 
             boolean queueSucess = kestrelClient.queue(queueName, val);
 
-            System.out.println(&quot;queueSucess=&quot; +queueSucess+ &quot; [&quot; + val +&quot;]&quot;);
+            System.out.println("queueSucess=" +queueSucess+ " [" + val +"]");
         }
     }
 </code></pre></div>
@@ -152,10 +152,10 @@
     private static void dequeueItems(KestrelClient kestrelClient, String queueName) throws IOException, ParseError
              {
         for(int i=1; i&lt;=12; i++){</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">        Item item = kestrelClient.dequeue(queueName);
+<div class="highlight"><pre><code class="language-" data-lang="">        Item item = kestrelClient.dequeue(queueName);
 
         if(item==null){
-            System.out.println(&quot;The queue (&quot; + queueName + &quot;) contains no items.&quot;);
+            System.out.println("The queue (" + queueName + ") contains no items.");
         }
         else
         {
@@ -163,11 +163,12 @@
 
             String receivedVal = new String(data);
 
-            System.out.println(&quot;receivedItem=&quot; + receivedVal);
+            System.out.println("receivedItem=" + receivedVal);
         }
     }
-</code></pre></div><div class="highlight"><pre><code class="language-text" data-lang="text">This method dequeues items from a queue and then removes them.
-</code></pre></div><div class="highlight"><pre><code class="language-text" data-lang="text">private static void dequeueAndRemoveItems(KestrelClient kestrelClient, String queueName)
+</code></pre></div><div class="highlight"><pre><code class="language-" data-lang="">
+This method dequeues items from a queue and then removes them.
+</code></pre></div><div class="highlight"><pre><code class="language-" data-lang="">private static void dequeueAndRemoveItems(KestrelClient kestrelClient, String queueName)
 throws IOException, ParseError
      {
         for(int i=1; i&lt;=12; i++){
@@ -176,7 +177,7 @@
 
 
             if(item==null){
-                System.out.println(&quot;The queue (&quot; + queueName + &quot;) contains no items.&quot;);
+                System.out.println("The queue (" + queueName + ") contains no items.");
             }
             else
             {
@@ -189,16 +190,18 @@
 
                 kestrelClient.ack(queueName, itemID);
 
-                System.out.println(&quot;receivedItem=&quot; + receivedVal);
+                System.out.println("receivedItem=" + receivedVal);
             }
         }
 }
-</code></pre></div><div class="highlight"><pre><code class="language-text" data-lang="text">## Add Items continuously to Kestrel
+</code></pre></div><div class="highlight"><pre><code class="language-" data-lang="">
+## Add Items continuously to Kestrel
 
 This is our final program to run in order to continuously add sentence items to a queue called **sentence_queue** on a locally running Kestrel server.
 
-In order to stop it type a closing bracket char &#39;]&#39; in console and hit &#39;Enter&#39;.
-</code></pre></div><div class="highlight"><pre><code class="language-text" data-lang="text">import java.io.IOException;
+To stop it, type a closing bracket character ']' in the console and hit 'Enter'.
+
+</code></pre></div><div class="highlight"><pre><code class="language-" data-lang="">import java.io.IOException;
 import java.io.InputStream;
 import java.util.Random;
 
@@ -215,7 +218,7 @@
 
         InputStream is = System.in;
 
-        char closing_bracket = &#39;]&#39;;
+        char closing_bracket = ']';
 
         int val = closing_bracket;
 
@@ -224,11 +227,11 @@
         try {
 
             KestrelClient kestrelClient = null;
-            String queueName = &quot;sentence_queue&quot;;
+            String queueName = "sentence_queue";
 
             while(aux){
 
-                kestrelClient = new KestrelClient(&quot;localhost&quot;,22133);
+                kestrelClient = new KestrelClient("localhost",22133);
 
                 queueSentenceItems(kestrelClient, queueName);
 
@@ -253,20 +256,22 @@
             e.printStackTrace();
         }
 
-        System.out.println(&quot;end&quot;);
+        System.out.println("end");
 
     }
 }
-</code></pre></div><div class="highlight"><pre><code class="language-text" data-lang="text">## Using KestrelSpout
+</code></pre></div><div class="highlight"><pre><code class="language-" data-lang="">## Using KestrelSpout
 
 This topology reads sentences off of a Kestrel queue using KestrelSpout, splits the sentences into their constituent words (Bolt: SplitSentence), and then emits for each word the number of times it has seen that word before (Bolt: WordCount). How data is processed is described in detail in [Guaranteeing message processing](Guaranteeing-message-processing.html).
-</code></pre></div><div class="highlight"><pre><code class="language-text" data-lang="text">TopologyBuilder builder = new TopologyBuilder();
-builder.setSpout(&quot;sentences&quot;, new KestrelSpout(&quot;localhost&quot;,22133,&quot;sentence_queue&quot;,new StringScheme()));
-builder.setBolt(&quot;split&quot;, new SplitSentence(), 10)
-            .shuffleGrouping(&quot;sentences&quot;);
-builder.setBolt(&quot;count&quot;, new WordCount(), 20)
-        .fieldsGrouping(&quot;split&quot;, new Fields(&quot;word&quot;));
-</code></pre></div><div class="highlight"><pre><code class="language-text" data-lang="text">## Execution
+
+</code></pre></div><div class="highlight"><pre><code class="language-" data-lang="">TopologyBuilder builder = new TopologyBuilder();
+builder.setSpout("sentences", new KestrelSpout("localhost",22133,"sentence_queue",new StringScheme()));
+builder.setBolt("split", new SplitSentence(), 10)
+            .shuffleGrouping("sentences");
+builder.setBolt("count", new WordCount(), 20)
+        .fieldsGrouping("split", new Fields("word"));
+</code></pre></div><div class="highlight"><pre><code class="language-" data-lang="">
+## Execution
 
 First, start your local Kestrel server in production or development mode.
 
diff --git a/_site/documentation/Lifecycle-of-a-topology.html b/_site/documentation/Lifecycle-of-a-topology.html
index a5d1f2b..236f967 100644
--- a/_site/documentation/Lifecycle-of-a-topology.html
+++ b/_site/documentation/Lifecycle-of-a-topology.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
diff --git a/_site/documentation/Local-mode.html b/_site/documentation/Local-mode.html
index f163974..6502363 100644
--- a/_site/documentation/Local-mode.html
+++ b/_site/documentation/Local-mode.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -95,7 +95,7 @@
 <p>To create an in-process cluster, simply use the <code>LocalCluster</code> class. For example:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kn">import</span> <span class="nn">backtype.storm.LocalCluster</span><span class="o">;</span>
 
-<span class="n">LocalCluster</span> <span class="n">cluster</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">LocalCluster</span><span class="o">();</span>
+<span class="n">LocalCluster</span> <span class="n">cluster</span> <span class="o">=</span> <span class="k">new</span> <span class="n">LocalCluster</span><span class="o">();</span>
 </code></pre></div>
 <p>You can then submit topologies using the <code>submitTopology</code> method on the <code>LocalCluster</code> object. Just like the corresponding method on <a href="/javadoc/apidocs/backtype/storm/StormSubmitter.html">StormSubmitter</a>, <code>submitTopology</code> takes a name, a topology configuration, and the topology object. You can then kill a topology using the <code>killTopology</code> method which takes the topology name as an argument.</p>
 
diff --git a/_site/documentation/Maven.html b/_site/documentation/Maven.html
index f769c67..778e724 100644
--- a/_site/documentation/Maven.html
+++ b/_site/documentation/Maven.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
diff --git a/_site/documentation/Message-passing-implementation.html b/_site/documentation/Message-passing-implementation.html
index 7d18a36..9f84128 100644
--- a/_site/documentation/Message-passing-implementation.html
+++ b/_site/documentation/Message-passing-implementation.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
diff --git a/_site/documentation/Metrics.html b/_site/documentation/Metrics.html
index 6aa0792..8bf5e09 100644
--- a/_site/documentation/Metrics.html
+++ b/_site/documentation/Metrics.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
diff --git a/_site/documentation/Multilang-protocol.html b/_site/documentation/Multilang-protocol.html
index 2608db8..18e65b8 100644
--- a/_site/documentation/Multilang-protocol.html
+++ b/_site/documentation/Multilang-protocol.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -141,44 +141,44 @@
 <ul>
 <li>STDIN: Setup info. This is a JSON object with the Storm configuration, a PID directory, and a topology context, like this:</li>
 </ul>
-<div class="highlight"><pre><code class="language-text" data-lang="text">{
-    &quot;conf&quot;: {
-        &quot;topology.message.timeout.secs&quot;: 3,
-        // etc
-    },
-    &quot;pidDir&quot;: &quot;...&quot;,
-    &quot;context&quot;: {
-        &quot;task-&gt;component&quot;: {
-            &quot;1&quot;: &quot;example-spout&quot;,
-            &quot;2&quot;: &quot;__acker&quot;,
-            &quot;3&quot;: &quot;example-bolt1&quot;,
-            &quot;4&quot;: &quot;example-bolt2&quot;
-        },
-        &quot;taskid&quot;: 3,
-        // Everything below this line is only available in Storm 0.10.0+
-        &quot;componentid&quot;: &quot;example-bolt&quot;
-        &quot;stream-&gt;target-&gt;grouping&quot;: {
-            &quot;default&quot;: {
-                &quot;example-bolt2&quot;: {
-                    &quot;type&quot;: &quot;SHUFFLE&quot;}}},
-        &quot;streams&quot;: [&quot;default&quot;],
-        &quot;stream-&gt;outputfields&quot;: {&quot;default&quot;: [&quot;word&quot;]},
-        &quot;source-&gt;stream-&gt;grouping&quot;: {
-            &quot;example-spout&quot;: {
-                &quot;default&quot;: {
-                    &quot;type&quot;: &quot;FIELDS&quot;,
-                    &quot;fields&quot;: [&quot;word&quot;]
-                }
-            }
-        }
-        &quot;source-&gt;stream-&gt;fields&quot;: {
-            &quot;example-spout&quot;: {
-                &quot;default&quot;: [&quot;word&quot;]
-            }
-        }
-    }
-}
-</code></pre></div>
+<div class="highlight"><pre><code class="language-" data-lang=""><span class="p">{</span><span class="w">
+    </span><span class="nt">"conf"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
+        </span><span class="nt">"topology.message.timeout.secs"</span><span class="p">:</span><span class="w"> </span><span class="mi">3</span><span class="p">,</span><span class="w">
+        </span><span class="err">//</span><span class="w"> </span><span class="err">etc</span><span class="w">
+    </span><span class="err">}</span><span class="p">,</span><span class="w">
+    </span><span class="nt">"pidDir"</span><span class="p">:</span><span class="w"> </span><span class="s2">"..."</span><span class="p">,</span><span class="w">
+    </span><span class="nt">"context"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
+        </span><span class="nt">"task-&gt;component"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
+            </span><span class="nt">"1"</span><span class="p">:</span><span class="w"> </span><span class="s2">"example-spout"</span><span class="p">,</span><span class="w">
+            </span><span class="nt">"2"</span><span class="p">:</span><span class="w"> </span><span class="s2">"__acker"</span><span class="p">,</span><span class="w">
+            </span><span class="nt">"3"</span><span class="p">:</span><span class="w"> </span><span class="s2">"example-bolt1"</span><span class="p">,</span><span class="w">
+            </span><span class="nt">"4"</span><span class="p">:</span><span class="w"> </span><span class="s2">"example-bolt2"</span><span class="w">
+        </span><span class="p">},</span><span class="w">
+        </span><span class="nt">"taskid"</span><span class="p">:</span><span class="w"> </span><span class="mi">3</span><span class="p">,</span><span class="w">
+        </span><span class="err">//</span><span class="w"> </span><span class="err">Everything</span><span class="w"> </span><span class="err">below</span><span class="w"> </span><span class="err">this</span><span class="w"> </span><span class="err">line</span><span class="w"> </span><span class="err">is</span><span class="w"> </span><span class="err">only</span><span class="w"> </span><span class="err">available</span><span class="w"> </span><span class="err">in</span><span class="w"> </span><span class="err">Storm</span><span class="w"> </span><span class="err">0.10.0+</span><span class="w">
+        </span><span class="nt">"componentid"</span><span class="p">:</span><span class="w"> </span><span class="s2">"example-bolt"</span><span class="w">
+        </span><span class="s2">"stream-&gt;target-&gt;grouping"</span><span class="err">:</span><span class="w"> </span><span class="p">{</span><span class="w">
+            </span><span class="nt">"default"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
+                </span><span class="nt">"example-bolt2"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
+                    </span><span class="nt">"type"</span><span class="p">:</span><span class="w"> </span><span class="s2">"SHUFFLE"</span><span class="p">}}},</span><span class="w">
+        </span><span class="nt">"streams"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">"default"</span><span class="p">],</span><span class="w">
+        </span><span class="nt">"stream-&gt;outputfields"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="nt">"default"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">"word"</span><span class="p">]},</span><span class="w">
+        </span><span class="nt">"source-&gt;stream-&gt;grouping"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
+            </span><span class="nt">"example-spout"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
+                </span><span class="nt">"default"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
+                    </span><span class="nt">"type"</span><span class="p">:</span><span class="w"> </span><span class="s2">"FIELDS"</span><span class="p">,</span><span class="w">
+                    </span><span class="nt">"fields"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">"word"</span><span class="p">]</span><span class="w">
+                </span><span class="p">}</span><span class="w">
+            </span><span class="p">}</span><span class="w">
+        </span><span class="p">},</span><span class="w">
+        </span><span class="nt">"source-&gt;stream-&gt;fields"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
+            </span><span class="nt">"example-spout"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
+                </span><span class="nt">"default"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">"word"</span><span class="p">]</span><span class="w">
+            </span><span class="p">}</span><span class="w">
+        </span><span class="p">}</span><span class="w">
+    </span><span class="p">}</span><span class="w">
+</span><span class="p">}</span><span class="w">
+</span></code></pre></div>
 <p>Your script should create an empty file named after its PID in this directory. For
 example, if the PID is 1234, an empty file named 1234 is created in the directory. This
 file lets the supervisor know the PID so that it can shut down the process later on.</p>
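The PID handshake above can be sketched in Python. This is a hypothetical helper, not part of any Storm library; `pid_dir` is assumed to hold the directory path received during the initial handshake:

```python
import json
import os
import sys

def write_pid_file(pid_dir):
    # Create the empty file named after our PID so the supervisor can
    # find this process and shut it down later.
    pid = os.getpid()
    open(os.path.join(pid_dir, str(pid)), "w").close()
    # Report the PID back over STDOUT as a JSON message terminated by "end".
    sys.stdout.write(json.dumps({"pid": pid}) + "\nend\n")
    sys.stdout.flush()
```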
@@ -207,47 +207,47 @@
 </ul>
 
 <p>&quot;next&quot; is the equivalent of ISpout&#39;s <code>nextTuple</code>. It looks like:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">{&quot;command&quot;: &quot;next&quot;}
-</code></pre></div>
+<div class="highlight"><pre><code class="language-" data-lang=""><span class="p">{</span><span class="nt">"command"</span><span class="p">:</span><span class="w"> </span><span class="s2">"next"</span><span class="p">}</span><span class="w">
+</span></code></pre></div>
 <p>&quot;ack&quot; looks like:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">{&quot;command&quot;: &quot;ack&quot;, &quot;id&quot;: &quot;1231231&quot;}
-</code></pre></div>
+<div class="highlight"><pre><code class="language-" data-lang=""><span class="p">{</span><span class="nt">"command"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ack"</span><span class="p">,</span><span class="w"> </span><span class="nt">"id"</span><span class="p">:</span><span class="w"> </span><span class="s2">"1231231"</span><span class="p">}</span><span class="w">
+</span></code></pre></div>
 <p>&quot;fail&quot; looks like:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">{&quot;command&quot;: &quot;fail&quot;, &quot;id&quot;: &quot;1231231&quot;}
-</code></pre></div>
+<div class="highlight"><pre><code class="language-" data-lang=""><span class="p">{</span><span class="nt">"command"</span><span class="p">:</span><span class="w"> </span><span class="s2">"fail"</span><span class="p">,</span><span class="w"> </span><span class="nt">"id"</span><span class="p">:</span><span class="w"> </span><span class="s2">"1231231"</span><span class="p">}</span><span class="w">
+</span></code></pre></div>
 <ul>
 <li>STDOUT: The results of your spout for the previous command. This can
 be a sequence of emits and logs.</li>
 </ul>
 
 <p>An emit looks like:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">{
-    &quot;command&quot;: &quot;emit&quot;,
-    // The id for the tuple. Leave this out for an unreliable emit. The id can
-    // be a string or a number.
-    &quot;id&quot;: &quot;1231231&quot;,
-    // The id of the stream this tuple was emitted to. Leave this empty to emit to default stream.
-    &quot;stream&quot;: &quot;1&quot;,
-    // If doing an emit direct, indicate the task to send the tuple to
-    &quot;task&quot;: 9,
-    // All the values in this tuple
-    &quot;tuple&quot;: [&quot;field1&quot;, 2, 3]
-}
-</code></pre></div>
+<div class="highlight"><pre><code class="language-" data-lang=""><span class="p">{</span><span class="w">
+    </span><span class="nt">"command"</span><span class="p">:</span><span class="w"> </span><span class="s2">"emit"</span><span class="p">,</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">The</span><span class="w"> </span><span class="err">id</span><span class="w"> </span><span class="err">for</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">tuple.</span><span class="w"> </span><span class="err">Leave</span><span class="w"> </span><span class="err">this</span><span class="w"> </span><span class="err">out</span><span class="w"> </span><span class="err">for</span><span class="w"> </span><span class="err">an</span><span class="w"> </span><span class="err">unreliable</span><span class="w"> </span><span class="err">emit.</span><span class="w"> </span><span class="err">The</span><span class="w"> </span><span class="err">id</span><span class="w"> </span><span class="err">can</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">be</span><span class="w"> </span><span class="err">a</span><span class="w"> </span><span class="err">string</span><span class="w"> </span><span class="err">or</span><span class="w"> </span><span class="err">a</span><span class="w"> </span><span class="err">number.</span><span class="w">
+    </span><span class="nt">"id"</span><span class="p">:</span><span class="w"> </span><span class="s2">"1231231"</span><span class="p">,</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">The</span><span class="w"> </span><span class="err">id</span><span class="w"> </span><span class="err">of</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">stream</span><span class="w"> </span><span class="err">this</span><span class="w"> </span><span class="err">tuple</span><span class="w"> </span><span class="err">was</span><span class="w"> </span><span class="err">emitted</span><span class="w"> </span><span class="err">to.</span><span class="w"> </span><span class="err">Leave</span><span class="w"> </span><span class="err">this</span><span class="w"> </span><span class="err">empty</span><span class="w"> </span><span class="err">to</span><span class="w"> </span><span class="err">emit</span><span class="w"> </span><span class="err">to</span><span class="w"> </span><span class="err">default</span><span class="w"> </span><span class="err">stream.</span><span class="w">
+    </span><span class="nt">"stream"</span><span class="p">:</span><span class="w"> </span><span class="s2">"1"</span><span class="p">,</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">If</span><span class="w"> </span><span class="err">doing</span><span class="w"> </span><span class="err">an</span><span class="w"> </span><span class="err">emit</span><span class="w"> </span><span class="err">direct,</span><span class="w"> </span><span class="err">indicate</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">task</span><span class="w"> </span><span class="err">to</span><span class="w"> </span><span class="err">send</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">tuple</span><span class="w"> </span><span class="err">to</span><span class="w">
+    </span><span class="nt">"task"</span><span class="p">:</span><span class="w"> </span><span class="mi">9</span><span class="p">,</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">All</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">values</span><span class="w"> </span><span class="err">in</span><span class="w"> </span><span class="err">this</span><span class="w"> </span><span class="err">tuple</span><span class="w">
+    </span><span class="nt">"tuple"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">"field1"</span><span class="p">,</span><span class="w"> </span><span class="mi">2</span><span class="p">,</span><span class="w"> </span><span class="mi">3</span><span class="p">]</span><span class="w">
+</span><span class="p">}</span><span class="w">
+</span></code></pre></div>
 <p>If not doing an emit direct, you will immediately receive the task ids to which the tuple was emitted on STDIN as a JSON array.</p>
 
 <p>A &quot;log&quot; will log a message in the worker log. It looks like:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">{
-    &quot;command&quot;: &quot;log&quot;,
-    // the message to log
-    &quot;msg&quot;: &quot;hello world!&quot;
-}
-</code></pre></div>
+<div class="highlight"><pre><code class="language-" data-lang=""><span class="p">{</span><span class="w">
+    </span><span class="nt">"command"</span><span class="p">:</span><span class="w"> </span><span class="s2">"log"</span><span class="p">,</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">message</span><span class="w"> </span><span class="err">to</span><span class="w"> </span><span class="err">log</span><span class="w">
+    </span><span class="nt">"msg"</span><span class="p">:</span><span class="w"> </span><span class="s2">"hello world!"</span><span class="w">
+</span><span class="p">}</span><span class="w">
+</span></code></pre></div>
 <ul>
 <li>STDOUT: a &quot;sync&quot; command ends the sequence of emits and logs. It looks like:</li>
 </ul>
-<div class="highlight"><pre><code class="language-text" data-lang="text">{&quot;command&quot;: &quot;sync&quot;}
-</code></pre></div>
+<div class="highlight"><pre><code class="language-" data-lang=""><span class="p">{</span><span class="nt">"command"</span><span class="p">:</span><span class="w"> </span><span class="s2">"sync"</span><span class="p">}</span><span class="w">
+</span></code></pre></div>
 <p>After you sync, ShellSpout will not read your output until it sends another next, ack, or fail command.</p>
 
 <p>Note that, similarly to ISpout, all of the spouts in the worker will be locked up after a next, ack, or fail, until you sync. Also like ISpout, if you have no tuples to emit for a next, you should sleep for a small amount of time before syncing. ShellSpout will not automatically sleep for you.</p>
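The next/ack/fail cycle above can be sketched as follows. This is an illustrative outline, not Storm's implementation; `send_message` and `read_message` are hypothetical helpers for the JSON-plus-`end` framing this protocol uses on STDIN/STDOUT:

```python
import json
import sys
import time

def send_message(msg):
    # Each protocol message is a JSON object followed by a line "end".
    sys.stdout.write(json.dumps(msg) + "\nend\n")
    sys.stdout.flush()

def read_message():
    # Read lines from STDIN until the "end" terminator, then parse the JSON.
    lines = []
    for line in sys.stdin:
        line = line.rstrip("\n")
        if line == "end":
            break
        lines.append(line)
    return json.loads("\n".join(lines))

def run_spout(next_tuple):
    while True:
        msg = read_message()
        if msg.get("command") == "next":
            tup = next_tuple()
            if tup is None:
                # Nothing to emit: sleep briefly before syncing, since
                # ShellSpout will not sleep for us.
                time.sleep(0.01)
            else:
                send_message({"command": "emit", "id": "1", "tuple": tup})
        # "ack"/"fail" would update the spout's pending-tuple state here.
        send_message({"command": "sync"})
```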
@@ -259,34 +259,34 @@
 <ul>
 <li>STDIN: A tuple! This is a JSON encoded structure like this:</li>
 </ul>
-<div class="highlight"><pre><code class="language-text" data-lang="text">{
-    // The tuple&#39;s id - this is a string to support languages lacking 64-bit precision
-    &quot;id&quot;: &quot;-6955786537413359385&quot;,
-    // The id of the component that created this tuple
-    &quot;comp&quot;: &quot;1&quot;,
-    // The id of the stream this tuple was emitted to
-    &quot;stream&quot;: &quot;1&quot;,
-    // The id of the task that created this tuple
-    &quot;task&quot;: 9,
-    // All the values in this tuple
-    &quot;tuple&quot;: [&quot;snow white and the seven dwarfs&quot;, &quot;field2&quot;, 3]
-}
-</code></pre></div>
+<div class="highlight"><pre><code class="language-" data-lang=""><span class="p">{</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">The</span><span class="w"> </span><span class="err">tuple's</span><span class="w"> </span><span class="err">id</span><span class="w"> </span><span class="err">-</span><span class="w"> </span><span class="err">this</span><span class="w"> </span><span class="err">is</span><span class="w"> </span><span class="err">a</span><span class="w"> </span><span class="err">string</span><span class="w"> </span><span class="err">to</span><span class="w"> </span><span class="err">support</span><span class="w"> </span><span class="err">languages</span><span class="w"> </span><span class="err">lacking</span><span class="w"> </span><span class="err">64-bit</span><span class="w"> </span><span class="err">precision</span><span class="w">
+    </span><span class="nt">"id"</span><span class="p">:</span><span class="w"> </span><span class="s2">"-6955786537413359385"</span><span class="p">,</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">The</span><span class="w"> </span><span class="err">id</span><span class="w"> </span><span class="err">of</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">component</span><span class="w"> </span><span class="err">that</span><span class="w"> </span><span class="err">created</span><span class="w"> </span><span class="err">this</span><span class="w"> </span><span class="err">tuple</span><span class="w">
+    </span><span class="nt">"comp"</span><span class="p">:</span><span class="w"> </span><span class="s2">"1"</span><span class="p">,</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">The</span><span class="w"> </span><span class="err">id</span><span class="w"> </span><span class="err">of</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">stream</span><span class="w"> </span><span class="err">this</span><span class="w"> </span><span class="err">tuple</span><span class="w"> </span><span class="err">was</span><span class="w"> </span><span class="err">emitted</span><span class="w"> </span><span class="err">to</span><span class="w">
+    </span><span class="nt">"stream"</span><span class="p">:</span><span class="w"> </span><span class="s2">"1"</span><span class="p">,</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">The</span><span class="w"> </span><span class="err">id</span><span class="w"> </span><span class="err">of</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">task</span><span class="w"> </span><span class="err">that</span><span class="w"> </span><span class="err">created</span><span class="w"> </span><span class="err">this</span><span class="w"> </span><span class="err">tuple</span><span class="w">
+    </span><span class="nt">"task"</span><span class="p">:</span><span class="w"> </span><span class="mi">9</span><span class="p">,</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">All</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">values</span><span class="w"> </span><span class="err">in</span><span class="w"> </span><span class="err">this</span><span class="w"> </span><span class="err">tuple</span><span class="w">
+    </span><span class="nt">"tuple"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">"snow white and the seven dwarfs"</span><span class="p">,</span><span class="w"> </span><span class="s2">"field2"</span><span class="p">,</span><span class="w"> </span><span class="mi">3</span><span class="p">]</span><span class="w">
+</span><span class="p">}</span><span class="w">
+</span></code></pre></div>
 <ul>
 <li>STDOUT: An ack, fail, emit, or log. Emits look like:</li>
 </ul>
-<div class="highlight"><pre><code class="language-text" data-lang="text">{
-    &quot;command&quot;: &quot;emit&quot;,
-    // The ids of the tuples this output tuples should be anchored to
-    &quot;anchors&quot;: [&quot;1231231&quot;, &quot;-234234234&quot;],
-    // The id of the stream this tuple was emitted to. Leave this empty to emit to default stream.
-    &quot;stream&quot;: &quot;1&quot;,
-    // If doing an emit direct, indicate the task to send the tuple to
-    &quot;task&quot;: 9,
-    // All the values in this tuple
-    &quot;tuple&quot;: [&quot;field1&quot;, 2, 3]
-}
-</code></pre></div>
+<div class="highlight"><pre><code class="language-" data-lang=""><span class="p">{</span><span class="w">
+    </span><span class="nt">"command"</span><span class="p">:</span><span class="w"> </span><span class="s2">"emit"</span><span class="p">,</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">The</span><span class="w"> </span><span class="err">ids</span><span class="w"> </span><span class="err">of</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">tuples</span><span class="w"> </span><span class="err">this</span><span class="w"> </span><span class="err">output</span><span class="w"> </span><span class="err">tuples</span><span class="w"> </span><span class="err">should</span><span class="w"> </span><span class="err">be</span><span class="w"> </span><span class="err">anchored</span><span class="w"> </span><span class="err">to</span><span class="w">
+    </span><span class="nt">"anchors"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">"1231231"</span><span class="p">,</span><span class="w"> </span><span class="s2">"-234234234"</span><span class="p">],</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">The</span><span class="w"> </span><span class="err">id</span><span class="w"> </span><span class="err">of</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">stream</span><span class="w"> </span><span class="err">this</span><span class="w"> </span><span class="err">tuple</span><span class="w"> </span><span class="err">was</span><span class="w"> </span><span class="err">emitted</span><span class="w"> </span><span class="err">to.</span><span class="w"> </span><span class="err">Leave</span><span class="w"> </span><span class="err">this</span><span class="w"> </span><span class="err">empty</span><span class="w"> </span><span class="err">to</span><span class="w"> </span><span class="err">emit</span><span class="w"> </span><span class="err">to</span><span class="w"> </span><span class="err">default</span><span class="w"> </span><span class="err">stream.</span><span class="w">
+    </span><span class="nt">"stream"</span><span class="p">:</span><span class="w"> </span><span class="s2">"1"</span><span class="p">,</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">If</span><span class="w"> </span><span class="err">doing</span><span class="w"> </span><span class="err">an</span><span class="w"> </span><span class="err">emit</span><span class="w"> </span><span class="err">direct,</span><span class="w"> </span><span class="err">indicate</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">task</span><span class="w"> </span><span class="err">to</span><span class="w"> </span><span class="err">send</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">tuple</span><span class="w"> </span><span class="err">to</span><span class="w">
+    </span><span class="nt">"task"</span><span class="p">:</span><span class="w"> </span><span class="mi">9</span><span class="p">,</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">All</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">values</span><span class="w"> </span><span class="err">in</span><span class="w"> </span><span class="err">this</span><span class="w"> </span><span class="err">tuple</span><span class="w">
+    </span><span class="nt">"tuple"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">"field1"</span><span class="p">,</span><span class="w"> </span><span class="mi">2</span><span class="p">,</span><span class="w"> </span><span class="mi">3</span><span class="p">]</span><span class="w">
+</span><span class="p">}</span><span class="w">
+</span></code></pre></div>
 <p>If not doing an emit direct, you will receive the task ids to which
 the tuple was emitted on STDIN as a JSON array. Note that, due to the
 asynchronous nature of the shell bolt protocol, when you read after
@@ -296,32 +296,32 @@
 emits, however.</p>
 
 <p>An ack looks like:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">{
-    &quot;command&quot;: &quot;ack&quot;,
-    // the id of the tuple to ack
-    &quot;id&quot;: &quot;123123&quot;
-}
-</code></pre></div>
+<div class="highlight"><pre><code class="language-" data-lang=""><span class="p">{</span><span class="w">
+    </span><span class="nt">"command"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ack"</span><span class="p">,</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">id</span><span class="w"> </span><span class="err">of</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">tuple</span><span class="w"> </span><span class="err">to</span><span class="w"> </span><span class="err">ack</span><span class="w">
+    </span><span class="nt">"id"</span><span class="p">:</span><span class="w"> </span><span class="s2">"123123"</span><span class="w">
+</span><span class="p">}</span><span class="w">
+</span></code></pre></div>
 <p>A fail looks like:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">{
-    &quot;command&quot;: &quot;fail&quot;,
-    // the id of the tuple to fail
-    &quot;id&quot;: &quot;123123&quot;
-}
-</code></pre></div>
+<div class="highlight"><pre><code class="language-" data-lang=""><span class="p">{</span><span class="w">
+    </span><span class="nt">"command"</span><span class="p">:</span><span class="w"> </span><span class="s2">"fail"</span><span class="p">,</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">id</span><span class="w"> </span><span class="err">of</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">tuple</span><span class="w"> </span><span class="err">to</span><span class="w"> </span><span class="err">fail</span><span class="w">
+    </span><span class="nt">"id"</span><span class="p">:</span><span class="w"> </span><span class="s2">"123123"</span><span class="w">
+</span><span class="p">}</span><span class="w">
+</span></code></pre></div>
 <p>A &quot;log&quot; will log a message in the worker log. It looks like:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">{
-    &quot;command&quot;: &quot;log&quot;,
-    // the message to log
-    &quot;msg&quot;: &quot;hello world!&quot;
-}
-</code></pre></div>
+<div class="highlight"><pre><code class="language-" data-lang=""><span class="p">{</span><span class="w">
+    </span><span class="nt">"command"</span><span class="p">:</span><span class="w"> </span><span class="s2">"log"</span><span class="p">,</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">message</span><span class="w"> </span><span class="err">to</span><span class="w"> </span><span class="err">log</span><span class="w">
+    </span><span class="nt">"msg"</span><span class="p">:</span><span class="w"> </span><span class="s2">"hello world!"</span><span class="w">
+</span><span class="p">}</span><span class="w">
+</span></code></pre></div>
 <ul>
 <li>Note that, as of version 0.7.1, there is no longer any need for a
 shell bolt to &#39;sync&#39;.</li>
 </ul>
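Putting the bolt-side messages together, a small pure helper (a hypothetical sketch, not Storm's API) can build the reply sequence for one input tuple: an emit anchored to the input tuple's id, followed by an ack:

```python
def bolt_messages(tup, output_values):
    """Messages a shell bolt would write for one input tuple: an emit
    anchored to the input tuple's id, then an ack. No sync is needed
    (as of 0.7.1); after the emit, the bolt should read from STDIN the
    JSON array of task ids the tuple was sent to."""
    return [
        {"command": "emit", "anchors": [tup["id"]], "tuple": output_values},
        {"command": "ack", "id": tup["id"]},
    ]
```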
 
-<h3 id="handling-heartbeats-(0.9.3-and-later)">Handling Heartbeats (0.9.3 and later)</h3>
+<h3 id="handling-heartbeats-0-9-3-and-later">Handling Heartbeats (0.9.3 and later)</h3>
 
 <p>As of Storm 0.9.3, heartbeats have been introduced between ShellSpout/ShellBolt and their
 multi-lang subprocesses to detect hanging/zombie subprocesses.  Any libraries
@@ -339,15 +339,15 @@
 
 <p>Shell bolts are asynchronous, so a ShellBolt will send heartbeat tuples to its
 subprocess periodically.  A heartbeat tuple looks like:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">{
-    &quot;id&quot;: &quot;-6955786537413359385&quot;,
-    &quot;comp&quot;: &quot;1&quot;,
-    &quot;stream&quot;: &quot;__heartbeat&quot;,
-    // this shell bolt&#39;s system task id
-    &quot;task&quot;: -1,
-    &quot;tuple&quot;: []
-}
-</code></pre></div>
+<div class="highlight"><pre><code class="language-" data-lang=""><span class="p">{</span><span class="w">
+    </span><span class="nt">"id"</span><span class="p">:</span><span class="w"> </span><span class="s2">"-6955786537413359385"</span><span class="p">,</span><span class="w">
+    </span><span class="nt">"comp"</span><span class="p">:</span><span class="w"> </span><span class="s2">"1"</span><span class="p">,</span><span class="w">
+    </span><span class="nt">"stream"</span><span class="p">:</span><span class="w"> </span><span class="s2">"__heartbeat"</span><span class="p">,</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">this</span><span class="w"> </span><span class="err">shell</span><span class="w"> </span><span class="err">bolt's</span><span class="w"> </span><span class="err">system</span><span class="w"> </span><span class="err">task</span><span class="w"> </span><span class="err">id</span><span class="w">
+    </span><span class="nt">"task"</span><span class="p">:</span><span class="w"> </span><span class="mi">-1</span><span class="p">,</span><span class="w">
+    </span><span class="nt">"tuple"</span><span class="p">:</span><span class="w"> </span><span class="p">[]</span><span class="w">
+</span><span class="p">}</span><span class="w">
+</span></code></pre></div>
 <p>When the subprocess receives a heartbeat tuple, it must send a <code>sync</code> command back to
 ShellBolt.</p>
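A subprocess can recognize heartbeat tuples by the reserved stream name shown above. A minimal sketch (the function name is hypothetical):

```python
def reply_for(tup):
    # Heartbeat tuples arrive on the reserved "__heartbeat" stream and must
    # be answered with a sync command; anything else is a normal tuple.
    if tup.get("stream") == "__heartbeat":
        return {"command": "sync"}
    return None  # normal tuple: process it as usual
```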
 
diff --git a/_site/documentation/Powered-By.html b/_site/documentation/Powered-By.html
index cfb8630..85770b5 100644
--- a/_site/documentation/Powered-By.html
+++ b/_site/documentation/Powered-By.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
diff --git a/_site/documentation/Project-ideas.html b/_site/documentation/Project-ideas.html
index 402f64e..10f8e0b 100644
--- a/_site/documentation/Project-ideas.html
+++ b/_site/documentation/Project-ideas.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
diff --git a/_site/documentation/Rationale.html b/_site/documentation/Rationale.html
index 51a7db6..fe7974f 100644
--- a/_site/documentation/Rationale.html
+++ b/_site/documentation/Rationale.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
diff --git a/_site/documentation/Running-topologies-on-a-production-cluster.html b/_site/documentation/Running-topologies-on-a-production-cluster.html
index 66d88f7..8ada823 100644
--- a/_site/documentation/Running-topologies-on-a-production-cluster.html
+++ b/_site/documentation/Running-topologies-on-a-production-cluster.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -95,10 +95,10 @@
 <p>1) Define the topology (Use <a href="/javadoc/apidocs/backtype/storm/topology/TopologyBuilder.html">TopologyBuilder</a> if defining using Java)</p>
 
 <p>2) Use <a href="/javadoc/apidocs/backtype/storm/StormSubmitter.html">StormSubmitter</a> to submit the topology to the cluster. <code>StormSubmitter</code> takes as input the name of the topology, a configuration for the topology, and the topology itself. For example:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">Config</span> <span class="n">conf</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">Config</span><span class="o">();</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">Config</span> <span class="n">conf</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Config</span><span class="o">();</span>
 <span class="n">conf</span><span class="o">.</span><span class="na">setNumWorkers</span><span class="o">(</span><span class="mi">20</span><span class="o">);</span>
 <span class="n">conf</span><span class="o">.</span><span class="na">setMaxSpoutPending</span><span class="o">(</span><span class="mi">5000</span><span class="o">);</span>
-<span class="n">StormSubmitter</span><span class="o">.</span><span class="na">submitTopology</span><span class="o">(</span><span class="s">&quot;mytopology&quot;</span><span class="o">,</span> <span class="n">conf</span><span class="o">,</span> <span class="n">topology</span><span class="o">);</span>
+<span class="n">StormSubmitter</span><span class="o">.</span><span class="na">submitTopology</span><span class="o">(</span><span class="s">"mytopology"</span><span class="o">,</span> <span class="n">conf</span><span class="o">,</span> <span class="n">topology</span><span class="o">);</span>
 </code></pre></div>
 <p>3) Create a jar containing your code and all the dependencies of your code (except for Storm -- the Storm jars will be added to the classpath on the worker nodes).</p>
 
diff --git "a/_site/documentation/Serialization-\050prior-to-0.6.0\051.html" "b/_site/documentation/Serialization-\050prior-to-0.6.0\051.html"
index d8fe681..0780464 100644
--- "a/_site/documentation/Serialization-\050prior-to-0.6.0\051.html"
+++ "b/_site/documentation/Serialization-\050prior-to-0.6.0\051.html"
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -112,9 +112,9 @@
 
 <p>The interface looks like this:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">interface</span> <span class="nc">ISerialization</span><span class="o">&lt;</span><span class="n">T</span><span class="o">&gt;</span> <span class="o">{</span>
-    <span class="kd">public</span> <span class="kt">boolean</span> <span class="nf">accept</span><span class="o">(</span><span class="n">Class</span> <span class="n">c</span><span class="o">);</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">serialize</span><span class="o">(</span><span class="n">T</span> <span class="n">object</span><span class="o">,</span> <span class="n">DataOutputStream</span> <span class="n">stream</span><span class="o">)</span> <span class="kd">throws</span> <span class="n">IOException</span><span class="o">;</span>
-    <span class="kd">public</span> <span class="n">T</span> <span class="nf">deserialize</span><span class="o">(</span><span class="n">DataInputStream</span> <span class="n">stream</span><span class="o">)</span> <span class="kd">throws</span> <span class="n">IOException</span><span class="o">;</span>
+    <span class="kd">public</span> <span class="kt">boolean</span> <span class="n">accept</span><span class="o">(</span><span class="n">Class</span> <span class="n">c</span><span class="o">);</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">serialize</span><span class="o">(</span><span class="n">T</span> <span class="n">object</span><span class="o">,</span> <span class="n">DataOutputStream</span> <span class="n">stream</span><span class="o">)</span> <span class="kd">throws</span> <span class="n">IOException</span><span class="o">;</span>
+    <span class="kd">public</span> <span class="n">T</span> <span class="n">deserialize</span><span class="o">(</span><span class="n">DataInputStream</span> <span class="n">stream</span><span class="o">)</span> <span class="kd">throws</span> <span class="n">IOException</span><span class="o">;</span>
 <span class="o">}</span>
 </code></pre></div>
 <p>Storm uses the <code>accept</code> method to determine if a type can be serialized by this serializer. Remember, Storm&#39;s tuples are dynamically typed so Storm determines what serializer to use at runtime.</p>
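The three-method contract above is straightforward to exercise in isolation. The sketch below reproduces the pre-0.6.0 `ISerialization` interface locally so it compiles without Storm on the classpath, and implements it for a hypothetical `Point` type (everything except the interface signature is invented for illustration):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Local copy of the pre-0.6.0 interface shown above, for a self-contained sketch.
interface ISerialization<T> {
    boolean accept(Class c);
    void serialize(T object, DataOutputStream stream) throws IOException;
    T deserialize(DataInputStream stream) throws IOException;
}

// Hypothetical user type carried inside tuples.
class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }
}

// accept() gates on the exact class; serialize/deserialize write the
// two coordinates as fixed-width ints.
class PointSerialization implements ISerialization<Point> {
    public boolean accept(Class c) { return Point.class.equals(c); }
    public void serialize(Point p, DataOutputStream stream) throws IOException {
        stream.writeInt(p.x);
        stream.writeInt(p.y);
    }
    public Point deserialize(DataInputStream stream) throws IOException {
        return new Point(stream.readInt(), stream.readInt());
    }
}

public class PointRoundTrip {
    // Serialize then deserialize through an in-memory buffer.
    public static Point roundTrip(Point p) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        PointSerialization ser = new PointSerialization();
        ser.serialize(p, new DataOutputStream(bytes));
        return ser.deserialize(new DataInputStream(
                new ByteArrayInputStream(bytes.toByteArray())));
    }

    public static void main(String[] args) throws IOException {
        Point q = roundTrip(new Point(3, 4));
        System.out.println(q.x + "," + q.y);
    }
}
```

A round trip through a byte buffer is a quick sanity check that `serialize` and `deserialize` agree on the wire format.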
diff --git a/_site/documentation/Serialization.html b/_site/documentation/Serialization.html
index 8bc803d..9374f0a 100644
--- a/_site/documentation/Serialization.html
+++ b/_site/documentation/Serialization.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -94,7 +94,7 @@
 
 <p>Tuples can be comprised of objects of any types. Since Storm is a distributed system, it needs to know how to serialize and deserialize objects when they&#39;re passed between tasks.</p>
 
-<p>Storm uses <a href="http://code.google.com/p/kryo/">Kryo</a> for serialization. Kryo is a flexible and fast serialization library that produces small serializations.</p>
+<p>Storm uses <a href="https://github.com/EsotericSoftware/kryo">Kryo</a> for serialization. Kryo is a flexible and fast serialization library that produces small serializations.</p>
 
 <p>By default, Storm can serialize primitive types, strings, byte arrays, ArrayList, HashMap, HashSet, and the Clojure collection types. If you want to use another type in your tuples, you&#39;ll need to register a custom serializer.</p>
 
@@ -110,17 +110,17 @@
 
 <h3 id="custom-serialization">Custom serialization</h3>
 
-<p>As mentioned, Storm uses Kryo for serialization. To implement custom serializers, you need to register new serializers with Kryo. It&#39;s highly recommended that you read over <a href="http://code.google.com/p/kryo/">Kryo&#39;s home page</a> to understand how it handles custom serialization.</p>
+<p>As mentioned, Storm uses Kryo for serialization. To implement custom serializers, you need to register new serializers with Kryo. It&#39;s highly recommended that you read over <a href="https://github.com/EsotericSoftware/kryo">Kryo&#39;s home page</a> to understand how it handles custom serialization.</p>
 
 <p>Adding custom serializers is done through the &quot;topology.kryo.register&quot; property in your topology config. It takes a list of registrations, where each registration can take one of two forms:</p>
 
 <ol>
 <li>The name of a class to register. In this case, Storm will use Kryo&#39;s <code>FieldsSerializer</code> to serialize the class. This may or may not be optimal for the class -- see the Kryo docs for more details.</li>
-<li>A map from the name of a class to register to an implementation of <a href="http://code.google.com/p/kryo/source/browse/trunk/src/com/esotericsoftware/kryo/Serializer.java">com.esotericsoftware.kryo.Serializer</a>.</li>
+<li>A map from the name of a class to register to an implementation of <a href="https://github.com/EsotericSoftware/kryo/blob/master/src/com/esotericsoftware/kryo/Serializer.java">com.esotericsoftware.kryo.Serializer</a>.</li>
 </ol>
 
 <p>Let&#39;s look at an example.</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">topology.kryo.register:
+<div class="highlight"><pre><code class="language-" data-lang="">topology.kryo.register:
   - com.mycompany.CustomType1
   - com.mycompany.CustomType2: com.mycompany.serializer.CustomType2Serializer
   - com.mycompany.CustomType3
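The two registration forms in that list can also be modeled as a plain Java structure when assembling a config value by hand. This dependency-free sketch (the `com.mycompany.*` names are the documentation's own placeholders) builds the same list of string-or-map entries that "topology.kryo.register" takes:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class KryoRegistrations {
    // Build the value for "topology.kryo.register": each entry is either
    // a class name (form 1: Kryo's FieldsSerializer is used) or a one-entry
    // map from class name to serializer class name (form 2).
    public static List<Object> build() {
        List<Object> registrations = new ArrayList<>();
        registrations.add("com.mycompany.CustomType1");
        registrations.add(Collections.singletonMap(
                "com.mycompany.CustomType2",
                "com.mycompany.serializer.CustomType2Serializer"));
        registrations.add("com.mycompany.CustomType3");
        return registrations;
    }

    public static void main(String[] args) {
        System.out.println(build());
    }
}
```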
diff --git a/_site/documentation/Serializers.html b/_site/documentation/Serializers.html
index f7c2139..31160ad 100644
--- a/_site/documentation/Serializers.html
+++ b/_site/documentation/Serializers.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
diff --git a/_site/documentation/Setting-up-a-Storm-cluster.html b/_site/documentation/Setting-up-a-Storm-cluster.html
index 35f8749..a7167b4 100644
--- a/_site/documentation/Setting-up-a-Storm-cluster.html
+++ b/_site/documentation/Setting-up-a-Storm-cluster.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -130,35 +130,35 @@
 
 <p>Next, download a Storm release and extract the zip file somewhere on Nimbus and each of the worker machines. The Storm releases can be downloaded <a href="http://github.com/apache/storm/releases">from here</a>.</p>
 
-<h3 id="fill-in-mandatory-configurations-into-storm.yaml">Fill in mandatory configurations into storm.yaml</h3>
+<h3 id="fill-in-mandatory-configurations-into-storm-yaml">Fill in mandatory configurations into storm.yaml</h3>
 
 <p>The Storm release contains a file at <code>conf/storm.yaml</code> that configures the Storm daemons. You can see the default configuration values <a href="https://github.com/apache/storm/blob/master/conf/defaults.yaml">here</a>. storm.yaml overrides anything in defaults.yaml. There&#39;s a few configurations that are mandatory to get a working cluster:</p>
 
 <p>1) <strong>storm.zookeeper.servers</strong>: This is a list of the hosts in the Zookeeper cluster for your Storm cluster. It should look something like:</p>
-<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="l-Scalar-Plain">storm.zookeeper.servers</span><span class="p-Indicator">:</span>
-  <span class="p-Indicator">-</span> <span class="s">&quot;111.222.333.444&quot;</span>
-  <span class="p-Indicator">-</span> <span class="s">&quot;555.666.777.888&quot;</span>
+<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="s">storm.zookeeper.servers</span><span class="pi">:</span>
+  <span class="pi">-</span> <span class="s2">"</span><span class="s">111.222.333.444"</span>
+  <span class="pi">-</span> <span class="s2">"</span><span class="s">555.666.777.888"</span>
 </code></pre></div>
 <p>If the port that your Zookeeper cluster uses is different than the default, you should set <strong>storm.zookeeper.port</strong> as well.</p>
 
 <p>2) <strong>storm.local.dir</strong>: The Nimbus and Supervisor daemons require a directory on the local disk to store small amounts of state (like jars, confs, and things like that). You should create that directory on each machine, give it proper permissions, and then fill in the directory location using this config. For example:</p>
-<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="l-Scalar-Plain">storm.local.dir</span><span class="p-Indicator">:</span> <span class="s">&quot;/mnt/storm&quot;</span>
+<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="s">storm.local.dir</span><span class="pi">:</span> <span class="s2">"</span><span class="s">/mnt/storm"</span>
 </code></pre></div>
 <p>3) <strong>nimbus.host</strong>: The worker nodes need to know which machine is the master in order to download topology jars and confs. For example:</p>
-<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="l-Scalar-Plain">nimbus.host</span><span class="p-Indicator">:</span> <span class="s">&quot;111.222.333.44&quot;</span>
+<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="s">nimbus.host</span><span class="pi">:</span> <span class="s2">"</span><span class="s">111.222.333.44"</span>
 </code></pre></div>
 <p>4) <strong>supervisor.slots.ports</strong>: For each worker machine, you configure how many workers run on that machine with this config. Each worker uses a single port for receiving messages, and this setting defines which ports are open for use. If you define five ports here, then Storm will allocate up to five workers to run on this machine. If you define three ports, Storm will only run up to three. By default, this setting is configured to run 4 workers on the ports 6700, 6701, 6702, and 6703. For example:</p>
-<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="l-Scalar-Plain">supervisor.slots.ports</span><span class="p-Indicator">:</span>
-    <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">6700</span>
-    <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">6701</span>
-    <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">6702</span>
-    <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">6703</span>
+<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="s">supervisor.slots.ports</span><span class="pi">:</span>
+    <span class="pi">-</span> <span class="s">6700</span>
+    <span class="pi">-</span> <span class="s">6701</span>
+    <span class="pi">-</span> <span class="s">6702</span>
+    <span class="pi">-</span> <span class="s">6703</span>
 </code></pre></div>
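Putting the four mandatory settings together, a minimal `conf/storm.yaml` might look like the following sketch (all addresses, the directory, and the ports are the placeholder values from the steps above):

```yaml
storm.zookeeper.servers:
  - "111.222.333.444"
  - "555.666.777.888"
storm.local.dir: "/mnt/storm"
nimbus.host: "111.222.333.44"
supervisor.slots.ports:
  - 6700
  - 6701
  - 6702
  - 6703
```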
-<h3 id="configure-external-libraries-and-environmental-variables-(optional)">Configure external libraries and environmental variables (optional)</h3>
+<h3 id="configure-external-libraries-and-environmental-variables-optional">Configure external libraries and environmental variables (optional)</h3>
 
 <p>If you need support from external libraries or custom plugins, you can place such jars into the extlib/ and extlib-daemon/ directories. Note that the extlib-daemon/ directory stores jars used only by daemons (Nimbus, Supervisor, DRPC, UI, Logviewer), e.g., HDFS and customized scheduling libraries. Accordingly, two environmental variables STORM_EXT_CLASSPATH and STORM_EXT_CLASSPATH_DAEMON can be configured by users for including the external classpath and daemon-only external classpath.</p>
 
-<h3 id="launch-daemons-under-supervision-using-&quot;storm&quot;-script-and-a-supervisor-of-your-choice">Launch daemons under supervision using &quot;storm&quot; script and a supervisor of your choice</h3>
+<h3 id="launch-daemons-under-supervision-using-quot-storm-quot-script-and-a-supervisor-of-your-choice">Launch daemons under supervision using &quot;storm&quot; script and a supervisor of your choice</h3>
 
 <p>The last step is to launch all the Storm daemons. It is critical that you run each of these daemons under supervision. Storm is a <strong>fail-fast</strong> system which means the processes will halt whenever an unexpected error is encountered. Storm is designed so that it can safely halt at any point and recover correctly when the process is restarted. This is why Storm keeps no state in-process -- if Nimbus or the Supervisors restart, the running topologies are unaffected. Here&#39;s how to run the Storm daemons:</p>
 
diff --git a/_site/documentation/Setting-up-development-environment.html b/_site/documentation/Setting-up-development-environment.html
index da907c2..082e547 100644
--- a/_site/documentation/Setting-up-development-environment.html
+++ b/_site/documentation/Setting-up-development-environment.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -99,7 +99,7 @@
 
 <p>More detail on each of these steps is below.</p>
 
-<h3 id="what-is-a-development-environment?">What is a development environment?</h3>
+<h3 id="what-is-a-development-environment">What is a development environment?</h3>
 
 <p>Storm has two modes of operation: local mode and remote mode. In local mode, you can develop and test topologies completely in process on your local machine. In remote mode, you submit topologies for execution on a cluster of machines.</p>
 
@@ -116,10 +116,10 @@
 <h3 id="starting-and-stopping-topologies-on-a-remote-cluster">Starting and stopping topologies on a remote cluster</h3>
 
 <p>The previous step installed the <code>storm</code> client on your machine which is used to communicate with remote Storm clusters. Now all you have to do is tell the client which Storm cluster to talk to. To do this, all you have to do is put the host address of the master in the <code>~/.storm/storm.yaml</code> file. It should look something like this:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">nimbus.host: &quot;123.45.678.890&quot;
+<div class="highlight"><pre><code class="language-" data-lang="">nimbus.host: "123.45.678.890"
 </code></pre></div>
 <p>Alternatively, if you use the <a href="https://github.com/nathanmarz/storm-deploy">storm-deploy</a> project to provision Storm clusters on AWS, it will automatically set up your ~/.storm/storm.yaml file. You can manually attach to a Storm cluster (or switch between multiple clusters) using the &quot;attach&quot; command, like so:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">lein run :deploy --attach --name mystormcluster
+<div class="highlight"><pre><code class="language-" data-lang="">lein run :deploy --attach --name mystormcluster
 </code></pre></div>
 <p>More information is on the storm-deploy <a href="https://github.com/nathanmarz/storm-deploy/wiki">wiki</a></p>
 
diff --git a/_site/documentation/Spout-implementations.html b/_site/documentation/Spout-implementations.html
index a325dd0..898ce15 100644
--- a/_site/documentation/Spout-implementations.html
+++ b/_site/documentation/Spout-implementations.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
diff --git "a/_site/documentation/Storm-multi-language-protocol-\050versions-0.7.0-and-below\051.html" "b/_site/documentation/Storm-multi-language-protocol-\050versions-0.7.0-and-below\051.html"
index d8fb159..86a966d 100644
--- "a/_site/documentation/Storm-multi-language-protocol-\050versions-0.7.0-and-below\051.html"
+++ "b/_site/documentation/Storm-multi-language-protocol-\050versions-0.7.0-and-below\051.html"
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -140,56 +140,56 @@
 <li>The rest happens in a while(true) loop</li>
 <li>STDIN: A tuple! This is a JSON encoded structure like this:</li>
 </ul>
-<div class="highlight"><pre><code class="language-text" data-lang="text">{
-    // The tuple&#39;s id
-    &quot;id&quot;: -6955786537413359385,
-    // The id of the component that created this tuple
-    &quot;comp&quot;: 1,
-    // The id of the stream this tuple was emitted to
-    &quot;stream&quot;: 1,
-    // The id of the task that created this tuple
-    &quot;task&quot;: 9,
-    // All the values in this tuple
-    &quot;tuple&quot;: [&quot;snow white and the seven dwarfs&quot;, &quot;field2&quot;, 3]
-}
-</code></pre></div>
+<div class="highlight"><pre><code class="language-" data-lang=""><span class="p">{</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">The</span><span class="w"> </span><span class="err">tuple's</span><span class="w"> </span><span class="err">id</span><span class="w">
+    </span><span class="nt">"id"</span><span class="p">:</span><span class="w"> </span><span class="mi">-6955786537413359385</span><span class="p">,</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">The</span><span class="w"> </span><span class="err">id</span><span class="w"> </span><span class="err">of</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">component</span><span class="w"> </span><span class="err">that</span><span class="w"> </span><span class="err">created</span><span class="w"> </span><span class="err">this</span><span class="w"> </span><span class="err">tuple</span><span class="w">
+    </span><span class="nt">"comp"</span><span class="p">:</span><span class="w"> </span><span class="mi">1</span><span class="p">,</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">The</span><span class="w"> </span><span class="err">id</span><span class="w"> </span><span class="err">of</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">stream</span><span class="w"> </span><span class="err">this</span><span class="w"> </span><span class="err">tuple</span><span class="w"> </span><span class="err">was</span><span class="w"> </span><span class="err">emitted</span><span class="w"> </span><span class="err">to</span><span class="w">
+    </span><span class="nt">"stream"</span><span class="p">:</span><span class="w"> </span><span class="mi">1</span><span class="p">,</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">The</span><span class="w"> </span><span class="err">id</span><span class="w"> </span><span class="err">of</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">task</span><span class="w"> </span><span class="err">that</span><span class="w"> </span><span class="err">created</span><span class="w"> </span><span class="err">this</span><span class="w"> </span><span class="err">tuple</span><span class="w">
+    </span><span class="nt">"task"</span><span class="p">:</span><span class="w"> </span><span class="mi">9</span><span class="p">,</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">All</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">values</span><span class="w"> </span><span class="err">in</span><span class="w"> </span><span class="err">this</span><span class="w"> </span><span class="err">tuple</span><span class="w">
+    </span><span class="nt">"tuple"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">"snow white and the seven dwarfs"</span><span class="p">,</span><span class="w"> </span><span class="s2">"field2"</span><span class="p">,</span><span class="w"> </span><span class="mi">3</span><span class="p">]</span><span class="w">
+</span><span class="p">}</span><span class="w">
+</span></code></pre></div>
 <ul>
 <li>STDOUT: The results of your bolt, JSON encoded. This can be a sequence of acks, fails, emits, and/or logs. Emits look like:</li>
 </ul>
-<div class="highlight"><pre><code class="language-text" data-lang="text">{
-    &quot;command&quot;: &quot;emit&quot;,
-    // The ids of the tuples this output tuples should be anchored to
-    &quot;anchors&quot;: [1231231, -234234234],
-    // The id of the stream this tuple was emitted to. Leave this empty to emit to default stream.
-    &quot;stream&quot;: 1,
-    // If doing an emit direct, indicate the task to sent the tuple to
-    &quot;task&quot;: 9,
-    // All the values in this tuple
-    &quot;tuple&quot;: [&quot;field1&quot;, 2, 3]
-}
-</code></pre></div>
+<div class="highlight"><pre><code class="language-" data-lang=""><span class="p">{</span><span class="w">
+    </span><span class="nt">"command"</span><span class="p">:</span><span class="w"> </span><span class="s2">"emit"</span><span class="p">,</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">The</span><span class="w"> </span><span class="err">ids</span><span class="w"> </span><span class="err">of</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">tuples</span><span class="w"> </span><span class="err">this</span><span class="w"> </span><span class="err">output</span><span class="w"> </span><span class="err">tuple</span><span class="w"> </span><span class="err">should</span><span class="w"> </span><span class="err">be</span><span class="w"> </span><span class="err">anchored</span><span class="w"> </span><span class="err">to</span><span class="w">
+    </span><span class="nt">"anchors"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="mi">1231231</span><span class="p">,</span><span class="w"> </span><span class="mi">-234234234</span><span class="p">],</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">The</span><span class="w"> </span><span class="err">id</span><span class="w"> </span><span class="err">of</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">stream</span><span class="w"> </span><span class="err">this</span><span class="w"> </span><span class="err">tuple</span><span class="w"> </span><span class="err">was</span><span class="w"> </span><span class="err">emitted</span><span class="w"> </span><span class="err">to.</span><span class="w"> </span><span class="err">Leave</span><span class="w"> </span><span class="err">this</span><span class="w"> </span><span class="err">empty</span><span class="w"> </span><span class="err">to</span><span class="w"> </span><span class="err">emit</span><span class="w"> </span><span class="err">to</span><span class="w"> </span><span class="err">default</span><span class="w"> </span><span class="err">stream.</span><span class="w">
+    </span><span class="nt">"stream"</span><span class="p">:</span><span class="w"> </span><span class="mi">1</span><span class="p">,</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">If</span><span class="w"> </span><span class="err">doing</span><span class="w"> </span><span class="err">an</span><span class="w"> </span><span class="err">emit</span><span class="w"> </span><span class="err">direct,</span><span class="w"> </span><span class="err">indicate</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">task</span><span class="w"> </span><span class="err">to</span><span class="w"> </span><span class="err">send</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">tuple</span><span class="w"> </span><span class="err">to</span><span class="w">
+    </span><span class="nt">"task"</span><span class="p">:</span><span class="w"> </span><span class="mi">9</span><span class="p">,</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">All</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">values</span><span class="w"> </span><span class="err">in</span><span class="w"> </span><span class="err">this</span><span class="w"> </span><span class="err">tuple</span><span class="w">
+    </span><span class="nt">"tuple"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">"field1"</span><span class="p">,</span><span class="w"> </span><span class="mi">2</span><span class="p">,</span><span class="w"> </span><span class="mi">3</span><span class="p">]</span><span class="w">
+</span><span class="p">}</span><span class="w">
+</span></code></pre></div>
 <p>An ack looks like:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">{
-    &quot;command&quot;: &quot;ack&quot;,
-    // the id of the tuple to ack
-    &quot;id&quot;: 123123
-}
-</code></pre></div>
+<div class="highlight"><pre><code class="language-" data-lang=""><span class="p">{</span><span class="w">
+    </span><span class="nt">"command"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ack"</span><span class="p">,</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">id</span><span class="w"> </span><span class="err">of</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">tuple</span><span class="w"> </span><span class="err">to</span><span class="w"> </span><span class="err">ack</span><span class="w">
+    </span><span class="nt">"id"</span><span class="p">:</span><span class="w"> </span><span class="mi">123123</span><span class="w">
+</span><span class="p">}</span><span class="w">
+</span></code></pre></div>
 <p>A fail looks like:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">{
-    &quot;command&quot;: &quot;fail&quot;,
-    // the id of the tuple to fail
-    &quot;id&quot;: 123123
-}
-</code></pre></div>
+<div class="highlight"><pre><code class="language-" data-lang=""><span class="p">{</span><span class="w">
+    </span><span class="nt">"command"</span><span class="p">:</span><span class="w"> </span><span class="s2">"fail"</span><span class="p">,</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">id</span><span class="w"> </span><span class="err">of</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">tuple</span><span class="w"> </span><span class="err">to</span><span class="w"> </span><span class="err">fail</span><span class="w">
+    </span><span class="nt">"id"</span><span class="p">:</span><span class="w"> </span><span class="mi">123123</span><span class="w">
+</span><span class="p">}</span><span class="w">
+</span></code></pre></div>
 <p>A &quot;log&quot; will log a message in the worker log. It looks like:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">{
-    &quot;command&quot;: &quot;log&quot;,
-    // the message to log
-    &quot;msg&quot;: &quot;hello world!&quot;
+<div class="highlight"><pre><code class="language-" data-lang=""><span class="p">{</span><span class="w">
+    </span><span class="nt">"command"</span><span class="p">:</span><span class="w"> </span><span class="s2">"log"</span><span class="p">,</span><span class="w">
+    </span><span class="err">//</span><span class="w"> </span><span class="err">the</span><span class="w"> </span><span class="err">message</span><span class="w"> </span><span class="err">to</span><span class="w"> </span><span class="err">log</span><span class="w">
+    </span><span class="nt">"msg"</span><span class="p">:</span><span class="w"> </span><span class="s2">"hello world!"</span><span class="w">
 
-}
-</code></pre></div>
+</span><span class="p">}</span><span class="w">
+</span></code></pre></div>
 <ul>
 <li>STDOUT: emit &quot;sync&quot; as a single line by itself when the bolt has finished emitting/acking/failing and is ready for the next input</li>
 </ul>
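The output side of this protocol is easy to sketch in JVM code. The helper class below (hypothetical, not part of Storm) formats the ack, fail, and log messages shown above as single JSON lines; note that the `//` comments in the examples are annotations for the reader, not part of the wire format:

```java
public class MultilangMessages {
    // Format the single-line JSON messages a shell component writes to STDOUT.
    public static String ack(long tupleId) {
        return "{\"command\": \"ack\", \"id\": " + tupleId + "}";
    }

    public static String fail(long tupleId) {
        return "{\"command\": \"fail\", \"id\": " + tupleId + "}";
    }

    // Assumes msg contains no characters needing JSON escaping.
    public static String log(String msg) {
        return "{\"command\": \"log\", \"msg\": \"" + msg + "\"}";
    }

    public static void main(String[] args) {
        System.out.println(ack(123123));
        System.out.println(fail(123123));
        System.out.println(log("hello world!"));
        // After emitting/acking/failing, signal readiness for the next input.
        System.out.println("sync");
    }
}
```

A real shell component would pair this writer with a loop that reads tuples from STDIN, as described at the top of this section.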
diff --git a/_site/documentation/Structure-of-the-codebase.html b/_site/documentation/Structure-of-the-codebase.html
index ac1cedc..6b8a63e 100644
--- a/_site/documentation/Structure-of-the-codebase.html
+++ b/_site/documentation/Structure-of-the-codebase.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -100,7 +100,7 @@
 
 <p>The following sections explain each of these layers in more detail.</p>
 
-<h3 id="storm.thrift">storm.thrift</h3>
+<h3 id="storm-thrift">storm.thrift</h3>
 
 <p>The first place to look to understand the structure of Storm&#39;s codebase is the <a href="https://github.com/apache/storm/blob/master/storm-core/src/storm.thrift">storm.thrift</a> file.</p>
 
@@ -172,7 +172,7 @@
 
 <p><a href="https://github.com/apache/storm/tree/master/storm-core/src/jvm/backtype/storm/hooks">backtype.storm.hooks</a>: Interfaces for hooking into various events in Storm, such as when tasks emit tuples, when tuples are acked, etc. User guide for hooks is <a href="https://github.com/apache/storm/wiki/Hooks">here</a>.</p>
 
-<p><a href="https://github.com/apache/storm/tree/master/storm-core/src/jvm/backtype/storm/serialization">backtype.storm.serialization</a>: Implementation of how Storm serializes/deserializes tuples. Built on top of <a href="http://code.google.com/p/kryo/">Kryo</a>.</p>
+<p><a href="https://github.com/apache/storm/tree/master/storm-core/src/jvm/backtype/storm/serialization">backtype.storm.serialization</a>: Implementation of how Storm serializes/deserializes tuples. Built on top of <a href="https://github.com/EsotericSoftware/kryo">Kryo</a>.</p>
 
 <p><a href="https://github.com/apache/storm/tree/master/storm-core/src/jvm/backtype/storm/spout">backtype.storm.spout</a>: Definition of spout and associated interfaces (like the <code>SpoutOutputCollector</code>). Also contains <code>ShellSpout</code> which implements the protocol for defining spouts in non-JVM languages.</p>
 
diff --git a/_site/documentation/Support-for-non-java-languages.html b/_site/documentation/Support-for-non-java-languages.html
index ea0a9a9..19bc091 100644
--- a/_site/documentation/Support-for-non-java-languages.html
+++ b/_site/documentation/Support-for-non-java-languages.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
diff --git a/_site/documentation/Transactional-topologies.html b/_site/documentation/Transactional-topologies.html
index f282645..5d757a0 100644
--- a/_site/documentation/Transactional-topologies.html
+++ b/_site/documentation/Transactional-topologies.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -141,7 +141,7 @@
 
 <p>After bolt 1 finishes its portion of the processing, it will be idle until the rest of the bolts finish and the next batch can be emitted from the spout.</p>
 
-<h3 id="design-3-(storm&#39;s-design)">Design 3 (Storm&#39;s design)</h3>
+<h3 id="design-3-storm-39-s-design">Design 3 (Storm&#39;s design)</h3>
 
 <p>A key realization is that not all the work for processing batches of tuples needs to be strongly ordered. For example, when computing a global count, there are two parts to the computation:</p>
 
@@ -177,12 +177,12 @@
 <h2 id="the-basics-through-example">The basics through example</h2>
 
 <p>You build transactional topologies by using <a href="/javadoc/apidocs/backtype/storm/transactional/TransactionalTopologyBuilder.html">TransactionalTopologyBuilder</a>. Here&#39;s the transactional topology definition for a topology that computes the global count of tuples from the input stream. This code comes from <a href="https://github.com/apache/storm/blob/master/examples/storm-starter/src/jvm/storm/starter/TransactionalGlobalCount.java">TransactionalGlobalCount</a> in storm-starter.</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">MemoryTransactionalSpout</span> <span class="n">spout</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">MemoryTransactionalSpout</span><span class="o">(</span><span class="n">DATA</span><span class="o">,</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;word&quot;</span><span class="o">),</span> <span class="n">PARTITION_TAKE_PER_BATCH</span><span class="o">);</span>
-<span class="n">TransactionalTopologyBuilder</span> <span class="n">builder</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">TransactionalTopologyBuilder</span><span class="o">(</span><span class="s">&quot;global-count&quot;</span><span class="o">,</span> <span class="s">&quot;spout&quot;</span><span class="o">,</span> <span class="n">spout</span><span class="o">,</span> <span class="mi">3</span><span class="o">);</span>
-<span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">&quot;partial-count&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">BatchCount</span><span class="o">(),</span> <span class="mi">5</span><span class="o">)</span>
-        <span class="o">.</span><span class="na">shuffleGrouping</span><span class="o">(</span><span class="s">&quot;spout&quot;</span><span class="o">);</span>
-<span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">&quot;sum&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">UpdateGlobalCount</span><span class="o">())</span>
-        <span class="o">.</span><span class="na">globalGrouping</span><span class="o">(</span><span class="s">&quot;partial-count&quot;</span><span class="o">);</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">MemoryTransactionalSpout</span> <span class="n">spout</span> <span class="o">=</span> <span class="k">new</span> <span class="n">MemoryTransactionalSpout</span><span class="o">(</span><span class="n">DATA</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"word"</span><span class="o">),</span> <span class="n">PARTITION_TAKE_PER_BATCH</span><span class="o">);</span>
+<span class="n">TransactionalTopologyBuilder</span> <span class="n">builder</span> <span class="o">=</span> <span class="k">new</span> <span class="n">TransactionalTopologyBuilder</span><span class="o">(</span><span class="s">"global-count"</span><span class="o">,</span> <span class="s">"spout"</span><span class="o">,</span> <span class="n">spout</span><span class="o">,</span> <span class="mi">3</span><span class="o">);</span>
+<span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">"partial-count"</span><span class="o">,</span> <span class="k">new</span> <span class="n">BatchCount</span><span class="o">(),</span> <span class="mi">5</span><span class="o">)</span>
+        <span class="o">.</span><span class="na">shuffleGrouping</span><span class="o">(</span><span class="s">"spout"</span><span class="o">);</span>
+<span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">"sum"</span><span class="o">,</span> <span class="k">new</span> <span class="n">UpdateGlobalCount</span><span class="o">())</span>
+        <span class="o">.</span><span class="na">globalGrouping</span><span class="o">(</span><span class="s">"partial-count"</span><span class="o">);</span>
 </code></pre></div>
 <p><code>TransactionalTopologyBuilder</code> takes in its constructor an id for the transactional topology, an id for the spout within the topology, a transactional spout, and optionally the parallelism for the transactional spout. The id for the transactional topology is used to store state about the progress of the topology in Zookeeper, so that if you restart the topology it will continue where it left off.</p>
 
@@ -198,24 +198,24 @@
     <span class="kt">int</span> <span class="n">_count</span> <span class="o">=</span> <span class="mi">0</span><span class="o">;</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">prepare</span><span class="o">(</span><span class="n">Map</span> <span class="n">conf</span><span class="o">,</span> <span class="n">TopologyContext</span> <span class="n">context</span><span class="o">,</span> <span class="n">BatchOutputCollector</span> <span class="n">collector</span><span class="o">,</span> <span class="n">Object</span> <span class="n">id</span><span class="o">)</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">prepare</span><span class="o">(</span><span class="n">Map</span> <span class="n">conf</span><span class="o">,</span> <span class="n">TopologyContext</span> <span class="n">context</span><span class="o">,</span> <span class="n">BatchOutputCollector</span> <span class="n">collector</span><span class="o">,</span> <span class="n">Object</span> <span class="n">id</span><span class="o">)</span> <span class="o">{</span>
         <span class="n">_collector</span> <span class="o">=</span> <span class="n">collector</span><span class="o">;</span>
         <span class="n">_id</span> <span class="o">=</span> <span class="n">id</span><span class="o">;</span>
     <span class="o">}</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">execute</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">tuple</span><span class="o">)</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">execute</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">tuple</span><span class="o">)</span> <span class="o">{</span>
         <span class="n">_count</span><span class="o">++;</span>
     <span class="o">}</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">finishBatch</span><span class="o">()</span> <span class="o">{</span>
-        <span class="n">_collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="k">new</span> <span class="nf">Values</span><span class="o">(</span><span class="n">_id</span><span class="o">,</span> <span class="n">_count</span><span class="o">));</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">finishBatch</span><span class="o">()</span> <span class="o">{</span>
+        <span class="n">_collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="k">new</span> <span class="n">Values</span><span class="o">(</span><span class="n">_id</span><span class="o">,</span> <span class="n">_count</span><span class="o">));</span>
     <span class="o">}</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">declareOutputFields</span><span class="o">(</span><span class="n">OutputFieldsDeclarer</span> <span class="n">declarer</span><span class="o">)</span> <span class="o">{</span>
-        <span class="n">declarer</span><span class="o">.</span><span class="na">declare</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;id&quot;</span><span class="o">,</span> <span class="s">&quot;count&quot;</span><span class="o">));</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">declareOutputFields</span><span class="o">(</span><span class="n">OutputFieldsDeclarer</span> <span class="n">declarer</span><span class="o">)</span> <span class="o">{</span>
+        <span class="n">declarer</span><span class="o">.</span><span class="na">declare</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"id"</span><span class="o">,</span> <span class="s">"count"</span><span class="o">));</span>
     <span class="o">}</span>
 <span class="o">}</span>
 </code></pre></div>
@@ -243,22 +243,22 @@
     <span class="kt">int</span> <span class="n">_sum</span> <span class="o">=</span> <span class="mi">0</span><span class="o">;</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">prepare</span><span class="o">(</span><span class="n">Map</span> <span class="n">conf</span><span class="o">,</span> <span class="n">TopologyContext</span> <span class="n">context</span><span class="o">,</span> <span class="n">BatchOutputCollector</span> <span class="n">collector</span><span class="o">,</span> <span class="n">TransactionAttempt</span> <span class="n">attempt</span><span class="o">)</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">prepare</span><span class="o">(</span><span class="n">Map</span> <span class="n">conf</span><span class="o">,</span> <span class="n">TopologyContext</span> <span class="n">context</span><span class="o">,</span> <span class="n">BatchOutputCollector</span> <span class="n">collector</span><span class="o">,</span> <span class="n">TransactionAttempt</span> <span class="n">attempt</span><span class="o">)</span> <span class="o">{</span>
         <span class="n">_collector</span> <span class="o">=</span> <span class="n">collector</span><span class="o">;</span>
         <span class="n">_attempt</span> <span class="o">=</span> <span class="n">attempt</span><span class="o">;</span>
     <span class="o">}</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">execute</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">tuple</span><span class="o">)</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">execute</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">tuple</span><span class="o">)</span> <span class="o">{</span>
         <span class="n">_sum</span><span class="o">+=</span><span class="n">tuple</span><span class="o">.</span><span class="na">getInteger</span><span class="o">(</span><span class="mi">1</span><span class="o">);</span>
     <span class="o">}</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">finishBatch</span><span class="o">()</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">finishBatch</span><span class="o">()</span> <span class="o">{</span>
         <span class="n">Value</span> <span class="n">val</span> <span class="o">=</span> <span class="n">DATABASE</span><span class="o">.</span><span class="na">get</span><span class="o">(</span><span class="n">GLOBAL_COUNT_KEY</span><span class="o">);</span>
         <span class="n">Value</span> <span class="n">newval</span><span class="o">;</span>
         <span class="k">if</span><span class="o">(</span><span class="n">val</span> <span class="o">==</span> <span class="kc">null</span> <span class="o">||</span> <span class="o">!</span><span class="n">val</span><span class="o">.</span><span class="na">txid</span><span class="o">.</span><span class="na">equals</span><span class="o">(</span><span class="n">_attempt</span><span class="o">.</span><span class="na">getTransactionId</span><span class="o">()))</span> <span class="o">{</span>
-            <span class="n">newval</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">Value</span><span class="o">();</span>
+            <span class="n">newval</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Value</span><span class="o">();</span>
             <span class="n">newval</span><span class="o">.</span><span class="na">txid</span> <span class="o">=</span> <span class="n">_attempt</span><span class="o">.</span><span class="na">getTransactionId</span><span class="o">();</span>
             <span class="k">if</span><span class="o">(</span><span class="n">val</span><span class="o">==</span><span class="kc">null</span><span class="o">)</span> <span class="o">{</span>
                 <span class="n">newval</span><span class="o">.</span><span class="na">count</span> <span class="o">=</span> <span class="n">_sum</span><span class="o">;</span>
@@ -269,12 +269,12 @@
         <span class="o">}</span> <span class="k">else</span> <span class="o">{</span>
             <span class="n">newval</span> <span class="o">=</span> <span class="n">val</span><span class="o">;</span>
         <span class="o">}</span>
-        <span class="n">_collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="k">new</span> <span class="nf">Values</span><span class="o">(</span><span class="n">_attempt</span><span class="o">,</span> <span class="n">newval</span><span class="o">.</span><span class="na">count</span><span class="o">));</span>
+        <span class="n">_collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="k">new</span> <span class="n">Values</span><span class="o">(</span><span class="n">_attempt</span><span class="o">,</span> <span class="n">newval</span><span class="o">.</span><span class="na">count</span><span class="o">));</span>
     <span class="o">}</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">declareOutputFields</span><span class="o">(</span><span class="n">OutputFieldsDeclarer</span> <span class="n">declarer</span><span class="o">)</span> <span class="o">{</span>
-        <span class="n">declarer</span><span class="o">.</span><span class="na">declare</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;id&quot;</span><span class="o">,</span> <span class="s">&quot;sum&quot;</span><span class="o">));</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">declareOutputFields</span><span class="o">(</span><span class="n">OutputFieldsDeclarer</span> <span class="n">declarer</span><span class="o">)</span> <span class="o">{</span>
+        <span class="n">declarer</span><span class="o">.</span><span class="na">declare</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"id"</span><span class="o">,</span> <span class="s">"sum"</span><span class="o">));</span>
     <span class="o">}</span>
 <span class="o">}</span>
 </code></pre></div>
@@ -300,7 +300,7 @@
 <li>BatchBolts that are marked as committers: The only difference between this bolt and a regular batch bolt is when <code>finishBatch</code> is called. A committer bolt has <code>finishBatch</code> called during the commit phase. The commit phase is guaranteed to occur only after all prior batches have successfully committed, and it will be retried until all bolts in the topology succeed the commit for the batch. There are two ways to make a <code>BatchBolt</code> a committer: by having the <code>BatchBolt</code> implement the <a href="/javadoc/apidocs/backtype/storm/transactional/ICommitter.html">ICommitter</a> marker interface, or by using the <code>setCommiterBolt</code> method in <code>TransactionalTopologyBuilder</code>.</li>
 </ol>
 
-<h4 id="processing-phase-vs.-commit-phase-in-bolts">Processing phase vs. commit phase in bolts</h4>
+<h4 id="processing-phase-vs-commit-phase-in-bolts">Processing phase vs. commit phase in bolts</h4>
 
 <p>To nail down the difference between the processing phase and commit phase of a transaction, let&#39;s look at an example topology:</p>
 
@@ -351,7 +351,7 @@
 <li><em>Number of active batches permissible at once:</em> You must set a limit on the number of batches that can be processed at once. You configure this using the &quot;topology.max.spout.pending&quot; config. If you don&#39;t set this config, it will default to 1.</li>
 </ol>
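For illustration, the batch limit above can also be set programmatically rather than through a YAML file. A minimal sketch, assuming the `backtype.storm.Config` API of this Storm generation, where `setMaxSpoutPending` is the typed helper for the `"topology.max.spout.pending"` key:

```java
import backtype.storm.Config;

// Sketch: cap the number of in-flight transactional batches at 3.
// Assumes the backtype.storm.Config API from this era of Storm.
Config conf = new Config();
conf.setMaxSpoutPending(3);
// equivalently, via the raw config key:
// conf.put(Config.TOPOLOGY_MAX_SPOUT_PENDING, 3);
```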
 
-<h2 id="what-if-you-can&#39;t-emit-the-same-batch-of-tuples-for-a-given-transaction-id?">What if you can&#39;t emit the same batch of tuples for a given transaction id?</h2>
+<h2 id="what-if-you-can-39-t-emit-the-same-batch-of-tuples-for-a-given-transaction-id">What if you can&#39;t emit the same batch of tuples for a given transaction id?</h2>
 
 <p>So far the discussion around transactional topologies has assumed that you can always emit the exact same batch of tuples for the same transaction id. So what do you do if this is not possible?</p>
 
diff --git a/_site/documentation/Trident-API-Overview.html b/_site/documentation/Trident-API-Overview.html
index 9f7f9ca..f3e883e 100644
--- a/_site/documentation/Trident-API-Overview.html
+++ b/_site/documentation/Trident-API-Overview.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -110,23 +110,23 @@
 
 <p>A function takes in a set of input fields and emits zero or more tuples as output. The fields of the output tuple are appended to the original input tuple in the stream. If a function emits no tuples, the original input tuple is filtered out. Otherwise, the input tuple is duplicated for each output tuple. Suppose you have this function:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">class</span> <span class="nc">MyFunction</span> <span class="kd">extends</span> <span class="n">BaseFunction</span> <span class="o">{</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">execute</span><span class="o">(</span><span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">,</span> <span class="n">TridentCollector</span> <span class="n">collector</span><span class="o">)</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">execute</span><span class="o">(</span><span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">,</span> <span class="n">TridentCollector</span> <span class="n">collector</span><span class="o">)</span> <span class="o">{</span>
         <span class="k">for</span><span class="o">(</span><span class="kt">int</span> <span class="n">i</span><span class="o">=</span><span class="mi">0</span><span class="o">;</span> <span class="n">i</span> <span class="o">&lt;</span> <span class="n">tuple</span><span class="o">.</span><span class="na">getInteger</span><span class="o">(</span><span class="mi">0</span><span class="o">);</span> <span class="n">i</span><span class="o">++)</span> <span class="o">{</span>
-            <span class="n">collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="k">new</span> <span class="nf">Values</span><span class="o">(</span><span class="n">i</span><span class="o">));</span>
+            <span class="n">collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="k">new</span> <span class="n">Values</span><span class="o">(</span><span class="n">i</span><span class="o">));</span>
         <span class="o">}</span>
     <span class="o">}</span>
 <span class="o">}</span>
 </code></pre></div>
 <p>Now suppose you have a stream in the variable &quot;mystream&quot; with the fields [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;] with the following tuples:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">[1, 2, 3]
+<div class="highlight"><pre><code class="language-" data-lang="">[1, 2, 3]
 [4, 1, 6]
 [3, 0, 8]
 </code></pre></div>
 <p>If you run this code:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">mystream</span><span class="o">.</span><span class="na">each</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;b&quot;</span><span class="o">),</span> <span class="k">new</span> <span class="nf">MyFunction</span><span class="o">(),</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;d&quot;</span><span class="o">)))</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">mystream</span><span class="o">.</span><span class="na">each</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"b"</span><span class="o">),</span> <span class="k">new</span> <span class="n">MyFunction</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"d"</span><span class="o">)))</span>
 </code></pre></div>
 <p>The resulting tuples would have fields [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;, &quot;d&quot;] and look like this:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">[1, 2, 3, 0]
+<div class="highlight"><pre><code class="language-" data-lang="">[1, 2, 3, 0]
 [1, 2, 3, 1]
 [4, 1, 6, 0]
 </code></pre></div>
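The append/duplicate/filter behavior shown above can be sketched in plain Java, independent of the Storm runtime. `EachSemantics` and the list-of-lists tuple encoding are illustrative, not Trident API: the function sees only the selected field ("b"), each emitted value is appended to a copy of the full input tuple, and emitting nothing drops the input tuple.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class EachSemantics {
    // Simulates mystream.each(new Fields("b"), new MyFunction(), new Fields("d"))
    // for tuples with fields [a, b, c]; fieldIndex selects "b".
    static List<List<Integer>> each(List<List<Integer>> stream, int fieldIndex) {
        List<List<Integer>> out = new ArrayList<>();
        for (List<Integer> tuple : stream) {
            int b = tuple.get(fieldIndex);
            for (int i = 0; i < b; i++) {              // MyFunction's emit loop
                List<Integer> result = new ArrayList<>(tuple);
                result.add(i);                         // appended output field "d"
                out.add(result);
            }
            // b == 0 emits nothing, so the input tuple is filtered out
        }
        return out;
    }

    public static void main(String[] args) {
        List<List<Integer>> stream = Arrays.asList(
            Arrays.asList(1, 2, 3),
            Arrays.asList(4, 1, 6),
            Arrays.asList(3, 0, 8));
        System.out.println(each(stream, 1));
        // [[1, 2, 3, 0], [1, 2, 3, 1], [4, 1, 6, 0]] -- matches the tuples above
    }
}
```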
@@ -134,43 +134,43 @@
 
 <p>Filters take in a tuple as input and decide whether or not to keep that tuple. Suppose you had this filter:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">class</span> <span class="nc">MyFilter</span> <span class="kd">extends</span> <span class="n">BaseFilter</span> <span class="o">{</span>
-    <span class="kd">public</span> <span class="kt">boolean</span> <span class="nf">isKeep</span><span class="o">(</span><span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">)</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="kt">boolean</span> <span class="n">isKeep</span><span class="o">(</span><span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">)</span> <span class="o">{</span>
         <span class="k">return</span> <span class="n">tuple</span><span class="o">.</span><span class="na">getInteger</span><span class="o">(</span><span class="mi">0</span><span class="o">)</span> <span class="o">==</span> <span class="mi">1</span> <span class="o">&amp;&amp;</span> <span class="n">tuple</span><span class="o">.</span><span class="na">getInteger</span><span class="o">(</span><span class="mi">1</span><span class="o">)</span> <span class="o">==</span> <span class="mi">2</span><span class="o">;</span>
     <span class="o">}</span>
 <span class="o">}</span>
 </code></pre></div>
 <p>Now suppose you had these tuples with fields [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;]:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">[1, 2, 3]
+<div class="highlight"><pre><code class="language-" data-lang="">[1, 2, 3]
 [2, 1, 1]
 [2, 3, 4]
 </code></pre></div>
 <p>If you ran this code:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">mystream</span><span class="o">.</span><span class="na">each</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;b&quot;</span><span class="o">,</span> <span class="s">&quot;a&quot;</span><span class="o">),</span> <span class="k">new</span> <span class="nf">MyFilter</span><span class="o">())</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">mystream</span><span class="o">.</span><span class="na">each</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"b"</span><span class="o">,</span> <span class="s">"a"</span><span class="o">),</span> <span class="k">new</span> <span class="n">MyFilter</span><span class="o">())</span>
 </code></pre></div>
 <p>The resulting tuples would be:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">[2, 1, 1]
+<div class="highlight"><pre><code class="language-" data-lang="">[2, 1, 1]
 </code></pre></div>
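The projection the filter sees can likewise be sketched in plain Java, independent of Storm. `FilterSemantics` is an illustrative name, not Trident API: `each(new Fields("b", "a"), ...)` hands the filter the values of "b" and "a" in that order, and the whole input tuple survives only when `isKeep` returns true.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FilterSemantics {
    // MyFilter's test over the projected fields ("b", "a"), in that order.
    static boolean isKeep(int b, int a) {
        return b == 1 && a == 2;
    }

    // Simulates mystream.each(new Fields("b", "a"), new MyFilter())
    // for tuples with fields [a, b, c].
    static List<List<Integer>> filter(List<List<Integer>> stream) {
        List<List<Integer>> out = new ArrayList<>();
        for (List<Integer> tuple : stream) {
            if (isKeep(tuple.get(1), tuple.get(0))) {  // project "b" then "a"
                out.add(tuple);                        // keep the full tuple
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<List<Integer>> stream = Arrays.asList(
            Arrays.asList(1, 2, 3),
            Arrays.asList(2, 1, 1),
            Arrays.asList(2, 3, 4));
        System.out.println(filter(stream));  // [[2, 1, 1]] -- matches the result above
    }
}
```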
 <h3 id="partitionaggregate">partitionAggregate</h3>
 
 <p>partitionAggregate runs a function on each partition of a batch of tuples. Unlike with functions, the tuples emitted by partitionAggregate replace the input tuples it receives. Consider this example:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">mystream</span><span class="o">.</span><span class="na">partitionAggregate</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;b&quot;</span><span class="o">),</span> <span class="k">new</span> <span class="nf">Sum</span><span class="o">(),</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;sum&quot;</span><span class="o">))</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">mystream</span><span class="o">.</span><span class="na">partitionAggregate</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"b"</span><span class="o">),</span> <span class="k">new</span> <span class="n">Sum</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"sum"</span><span class="o">))</span>
 </code></pre></div>
 <p>Suppose the input stream contained fields [&quot;a&quot;, &quot;b&quot;] and the following partitions of tuples:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">Partition 0:
-[&quot;a&quot;, 1]
-[&quot;b&quot;, 2]
+<div class="highlight"><pre><code class="language-" data-lang="">Partition 0:
+["a", 1]
+["b", 2]
 
 Partition 1:
-[&quot;a&quot;, 3]
-[&quot;c&quot;, 8]
+["a", 3]
+["c", 8]
 
 Partition 2:
-[&quot;e&quot;, 1]
-[&quot;d&quot;, 9]
-[&quot;d&quot;, 10]
+["e", 1]
+["d", 9]
+["d", 10]
 </code></pre></div>
 <p>Then the output stream of that code would contain these tuples with one field called &quot;sum&quot;:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">Partition 0:
+<div class="highlight"><pre><code class="language-" data-lang="">Partition 0:
 [3]
 
 Partition 1:
@@ -183,22 +183,22 @@
 
 <p>Here&#39;s the interface for CombinerAggregator:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">interface</span> <span class="nc">CombinerAggregator</span><span class="o">&lt;</span><span class="n">T</span><span class="o">&gt;</span> <span class="kd">extends</span> <span class="n">Serializable</span> <span class="o">{</span>
-    <span class="n">T</span> <span class="nf">init</span><span class="o">(</span><span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">);</span>
-    <span class="n">T</span> <span class="nf">combine</span><span class="o">(</span><span class="n">T</span> <span class="n">val1</span><span class="o">,</span> <span class="n">T</span> <span class="n">val2</span><span class="o">);</span>
-    <span class="n">T</span> <span class="nf">zero</span><span class="o">();</span>
+    <span class="n">T</span> <span class="n">init</span><span class="o">(</span><span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">);</span>
+    <span class="n">T</span> <span class="n">combine</span><span class="o">(</span><span class="n">T</span> <span class="n">val1</span><span class="o">,</span> <span class="n">T</span> <span class="n">val2</span><span class="o">);</span>
+    <span class="n">T</span> <span class="n">zero</span><span class="o">();</span>
 <span class="o">}</span>
 </code></pre></div>
 <p>A CombinerAggregator returns a single tuple with a single field as output. CombinerAggregators run the init function on each input tuple and use the combine function to combine values until there&#39;s only one value left. If there are no tuples in the partition, the CombinerAggregator emits the output of the zero function. For example, here&#39;s the implementation of Count:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">class</span> <span class="nc">Count</span> <span class="kd">implements</span> <span class="n">CombinerAggregator</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">&gt;</span> <span class="o">{</span>
-    <span class="kd">public</span> <span class="n">Long</span> <span class="nf">init</span><span class="o">(</span><span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">)</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="n">Long</span> <span class="n">init</span><span class="o">(</span><span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">)</span> <span class="o">{</span>
         <span class="k">return</span> <span class="mi">1L</span><span class="o">;</span>
     <span class="o">}</span>
 
-    <span class="kd">public</span> <span class="n">Long</span> <span class="nf">combine</span><span class="o">(</span><span class="n">Long</span> <span class="n">val1</span><span class="o">,</span> <span class="n">Long</span> <span class="n">val2</span><span class="o">)</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="n">Long</span> <span class="n">combine</span><span class="o">(</span><span class="n">Long</span> <span class="n">val1</span><span class="o">,</span> <span class="n">Long</span> <span class="n">val2</span><span class="o">)</span> <span class="o">{</span>
         <span class="k">return</span> <span class="n">val1</span> <span class="o">+</span> <span class="n">val2</span><span class="o">;</span>
     <span class="o">}</span>
 
-    <span class="kd">public</span> <span class="n">Long</span> <span class="nf">zero</span><span class="o">()</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="n">Long</span> <span class="n">zero</span><span class="o">()</span> <span class="o">{</span>
         <span class="k">return</span> <span class="mi">0L</span><span class="o">;</span>
     <span class="o">}</span>
 <span class="o">}</span>
@@ -207,17 +207,17 @@
 
 <p>A ReducerAggregator has the following interface:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">interface</span> <span class="nc">ReducerAggregator</span><span class="o">&lt;</span><span class="n">T</span><span class="o">&gt;</span> <span class="kd">extends</span> <span class="n">Serializable</span> <span class="o">{</span>
-    <span class="n">T</span> <span class="nf">init</span><span class="o">();</span>
-    <span class="n">T</span> <span class="nf">reduce</span><span class="o">(</span><span class="n">T</span> <span class="n">curr</span><span class="o">,</span> <span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">);</span>
+    <span class="n">T</span> <span class="n">init</span><span class="o">();</span>
+    <span class="n">T</span> <span class="n">reduce</span><span class="o">(</span><span class="n">T</span> <span class="n">curr</span><span class="o">,</span> <span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">);</span>
 <span class="o">}</span>
 </code></pre></div>
 <p>A ReducerAggregator produces an initial value with init, and then it iterates on that value for each input tuple to produce a single tuple with a single value as output. For example, here&#39;s how you would define Count as a ReducerAggregator:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">class</span> <span class="nc">Count</span> <span class="kd">implements</span> <span class="n">ReducerAggregator</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">&gt;</span> <span class="o">{</span>
-    <span class="kd">public</span> <span class="n">Long</span> <span class="nf">init</span><span class="o">()</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="n">Long</span> <span class="n">init</span><span class="o">()</span> <span class="o">{</span>
         <span class="k">return</span> <span class="mi">0L</span><span class="o">;</span>
     <span class="o">}</span>
 
-    <span class="kd">public</span> <span class="n">Long</span> <span class="nf">reduce</span><span class="o">(</span><span class="n">Long</span> <span class="n">curr</span><span class="o">,</span> <span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">)</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="n">Long</span> <span class="n">reduce</span><span class="o">(</span><span class="n">Long</span> <span class="n">curr</span><span class="o">,</span> <span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">)</span> <span class="o">{</span>
         <span class="k">return</span> <span class="n">curr</span> <span class="o">+</span> <span class="mi">1</span><span class="o">;</span>
     <span class="o">}</span>
 <span class="o">}</span>
@@ -226,9 +226,9 @@
 
 <p>The most general interface for performing aggregations is Aggregator, which looks like this:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">interface</span> <span class="nc">Aggregator</span><span class="o">&lt;</span><span class="n">T</span><span class="o">&gt;</span> <span class="kd">extends</span> <span class="n">Operation</span> <span class="o">{</span>
-    <span class="n">T</span> <span class="nf">init</span><span class="o">(</span><span class="n">Object</span> <span class="n">batchId</span><span class="o">,</span> <span class="n">TridentCollector</span> <span class="n">collector</span><span class="o">);</span>
-    <span class="kt">void</span> <span class="nf">aggregate</span><span class="o">(</span><span class="n">T</span> <span class="n">state</span><span class="o">,</span> <span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">,</span> <span class="n">TridentCollector</span> <span class="n">collector</span><span class="o">);</span>
-    <span class="kt">void</span> <span class="nf">complete</span><span class="o">(</span><span class="n">T</span> <span class="n">state</span><span class="o">,</span> <span class="n">TridentCollector</span> <span class="n">collector</span><span class="o">);</span>
+    <span class="n">T</span> <span class="n">init</span><span class="o">(</span><span class="n">Object</span> <span class="n">batchId</span><span class="o">,</span> <span class="n">TridentCollector</span> <span class="n">collector</span><span class="o">);</span>
+    <span class="kt">void</span> <span class="n">aggregate</span><span class="o">(</span><span class="n">T</span> <span class="n">state</span><span class="o">,</span> <span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">,</span> <span class="n">TridentCollector</span> <span class="n">collector</span><span class="o">);</span>
+    <span class="kt">void</span> <span class="n">complete</span><span class="o">(</span><span class="n">T</span> <span class="n">state</span><span class="o">,</span> <span class="n">TridentCollector</span> <span class="n">collector</span><span class="o">);</span>
 <span class="o">}</span>
 </code></pre></div>
 <p>Aggregators can emit any number of tuples with any number of fields. They can emit tuples at any point during execution. Aggregators execute in the following way:</p>
@@ -245,23 +245,23 @@
         <span class="kt">long</span> <span class="n">count</span> <span class="o">=</span> <span class="mi">0</span><span class="o">;</span>
     <span class="o">}</span>
 
-    <span class="kd">public</span> <span class="n">CountState</span> <span class="nf">init</span><span class="o">(</span><span class="n">Object</span> <span class="n">batchId</span><span class="o">,</span> <span class="n">TridentCollector</span> <span class="n">collector</span><span class="o">)</span> <span class="o">{</span>
-        <span class="k">return</span> <span class="k">new</span> <span class="nf">CountState</span><span class="o">();</span>
+    <span class="kd">public</span> <span class="n">CountState</span> <span class="n">init</span><span class="o">(</span><span class="n">Object</span> <span class="n">batchId</span><span class="o">,</span> <span class="n">TridentCollector</span> <span class="n">collector</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="k">new</span> <span class="n">CountState</span><span class="o">();</span>
     <span class="o">}</span>
 
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">aggregate</span><span class="o">(</span><span class="n">CountState</span> <span class="n">state</span><span class="o">,</span> <span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">,</span> <span class="n">TridentCollector</span> <span class="n">collector</span><span class="o">)</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">aggregate</span><span class="o">(</span><span class="n">CountState</span> <span class="n">state</span><span class="o">,</span> <span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">,</span> <span class="n">TridentCollector</span> <span class="n">collector</span><span class="o">)</span> <span class="o">{</span>
         <span class="n">state</span><span class="o">.</span><span class="na">count</span><span class="o">+=</span><span class="mi">1</span><span class="o">;</span>
     <span class="o">}</span>
 
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">complete</span><span class="o">(</span><span class="n">CountState</span> <span class="n">state</span><span class="o">,</span> <span class="n">TridentCollector</span> <span class="n">collector</span><span class="o">)</span> <span class="o">{</span>
-        <span class="n">collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="k">new</span> <span class="nf">Values</span><span class="o">(</span><span class="n">state</span><span class="o">.</span><span class="na">count</span><span class="o">));</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">complete</span><span class="o">(</span><span class="n">CountState</span> <span class="n">state</span><span class="o">,</span> <span class="n">TridentCollector</span> <span class="n">collector</span><span class="o">)</span> <span class="o">{</span>
+        <span class="n">collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="k">new</span> <span class="n">Values</span><span class="o">(</span><span class="n">state</span><span class="o">.</span><span class="na">count</span><span class="o">));</span>
     <span class="o">}</span>
 <span class="o">}</span>
 </code></pre></div>
 <p>Sometimes you want to execute multiple aggregators at the same time. This is called chaining and can be accomplished like this:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">mystream</span><span class="o">.</span><span class="na">chainedAgg</span><span class="o">()</span>
-        <span class="o">.</span><span class="na">partitionAggregate</span><span class="o">(</span><span class="k">new</span> <span class="nf">Count</span><span class="o">(),</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;count&quot;</span><span class="o">))</span>
-        <span class="o">.</span><span class="na">partitionAggregate</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;b&quot;</span><span class="o">),</span> <span class="k">new</span> <span class="nf">Sum</span><span class="o">(),</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;sum&quot;</span><span class="o">))</span>
+        <span class="o">.</span><span class="na">partitionAggregate</span><span class="o">(</span><span class="k">new</span> <span class="n">Count</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"count"</span><span class="o">))</span>
+        <span class="o">.</span><span class="na">partitionAggregate</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"b"</span><span class="o">),</span> <span class="k">new</span> <span class="n">Sum</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"sum"</span><span class="o">))</span>
         <span class="o">.</span><span class="na">chainEnd</span><span class="o">()</span>
 </code></pre></div>
 <p>This code will run the Count and Sum aggregators on each partition. The output will contain a single tuple with the fields [&quot;count&quot;, &quot;sum&quot;].</p>
@@ -273,7 +273,7 @@
 <h3 id="projection">projection</h3>
 
 <p>The projection method on Stream keeps only the fields specified in the operation. If you had a Stream with fields [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;, &quot;d&quot;] and you ran this code:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">mystream</span><span class="o">.</span><span class="na">project</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;b&quot;</span><span class="o">,</span> <span class="s">&quot;d&quot;</span><span class="o">))</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">mystream</span><span class="o">.</span><span class="na">project</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"b"</span><span class="o">,</span> <span class="s">"d"</span><span class="o">))</span>
 </code></pre></div>
 <p>The output stream would contain only the fields [&quot;b&quot;, &quot;d&quot;].</p>
 
@@ -297,7 +297,7 @@
 <p>Running aggregate on a Stream does a global aggregation. When you use a ReducerAggregator or an Aggregator, the stream is first repartitioned into a single partition, and then the aggregation function is run on that partition. When you use a CombinerAggregator, on the other hand, Trident first computes partial aggregations of each partition, then repartitions to a single partition, and then finishes the aggregation after the network transfer. CombinerAggregators are far more efficient and should be used when possible.</p>
 
 <p>Here&#39;s an example of using aggregate to get a global count for a batch:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">mystream</span><span class="o">.</span><span class="na">aggregate</span><span class="o">(</span><span class="k">new</span> <span class="nf">Count</span><span class="o">(),</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;count&quot;</span><span class="o">))</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">mystream</span><span class="o">.</span><span class="na">aggregate</span><span class="o">(</span><span class="k">new</span> <span class="n">Count</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"count"</span><span class="o">))</span>
 </code></pre></div>
 <p>Like partitionAggregate, aggregators for aggregate can be chained. However, if you chain a CombinerAggregator with a non-CombinerAggregator, Trident is unable to do the partial aggregation optimization.</p>
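<p>The partial-aggregation path described above can be sketched in plain Java (a hypothetical <code>CombinerSketch</code> helper, not Trident's actual execution code): each partition folds its own tuples locally, and only the per-partition partials cross the network before the final combine.</p>

```java
import java.util.List;

// Sketch of the CombinerAggregator partial-aggregation optimization for Count
// (plain Java, hypothetical helper class, not Trident's execution code).
class CombinerSketch {
    // Local phase: run on each partition before the network transfer.
    static long partialCount(List<Object> partition) {
        long acc = 0L;                       // zero() covers an empty partition
        for (Object tuple : partition) {
            acc = acc + 1L;                  // combine(acc, init(tuple)) for Count
        }
        return acc;
    }

    // Global phase: combine the per-partition partials on a single partition.
    static long globalCount(List<List<Object>> partitions) {
        long total = 0L;
        for (List<Object> p : partitions) {
            total = total + partialCount(p); // only the partials cross the network
        }
        return total;
    }
}
```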
 
@@ -323,7 +323,7 @@
 <p>Another way to combine streams is with a join. A standard join, like the kind from SQL, requires finite input, so it doesn&#39;t make sense for infinite streams. Joins in Trident only apply within each small batch that comes off of the spout.</p>
 
 <p>Here&#39;s an example join between a stream containing fields [&quot;key&quot;, &quot;val1&quot;, &quot;val2&quot;] and another stream containing [&quot;x&quot;, &quot;val1&quot;]:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">topology</span><span class="o">.</span><span class="na">join</span><span class="o">(</span><span class="n">stream1</span><span class="o">,</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;key&quot;</span><span class="o">),</span> <span class="n">stream2</span><span class="o">,</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;x&quot;</span><span class="o">),</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;key&quot;</span><span class="o">,</span> <span class="s">&quot;a&quot;</span><span class="o">,</span> <span class="s">&quot;b&quot;</span><span class="o">,</span> <span class="s">&quot;c&quot;</span><span class="o">));</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">topology</span><span class="o">.</span><span class="na">join</span><span class="o">(</span><span class="n">stream1</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"key"</span><span class="o">),</span> <span class="n">stream2</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"x"</span><span class="o">),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"key"</span><span class="o">,</span> <span class="s">"a"</span><span class="o">,</span> <span class="s">"b"</span><span class="o">,</span> <span class="s">"c"</span><span class="o">));</span>
 </code></pre></div>
 <p>This joins stream1 and stream2 together using &quot;key&quot; and &quot;x&quot; as the join fields for each respective stream. Then, Trident requires that all the output fields of the new stream be named, since the input streams could have overlapping field names. The tuples emitted from the join will contain:</p>
 
diff --git a/_site/documentation/Trident-spouts.html b/_site/documentation/Trident-spouts.html
index 114a007..200c145 100644
--- a/_site/documentation/Trident-spouts.html
+++ b/_site/documentation/Trident-spouts.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -97,8 +97,8 @@
 <p>There is an inextricable link between how you source your data streams and how you update state (e.g. databases) based on those data streams. See <a href="Trident-state.html">Trident state doc</a> for an explanation of this – understanding this link is imperative for understanding the spout options available.</p>
 
 <p>Regular Storm spouts will be non-transactional spouts in a Trident topology. To use a regular Storm IRichSpout, create the stream like this in a TridentTopology:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">TridentTopology</span> <span class="n">topology</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">TridentTopology</span><span class="o">();</span>
-<span class="n">topology</span><span class="o">.</span><span class="na">newStream</span><span class="o">(</span><span class="s">&quot;myspoutid&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">MyRichSpout</span><span class="o">());</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">TridentTopology</span> <span class="n">topology</span> <span class="o">=</span> <span class="k">new</span> <span class="n">TridentTopology</span><span class="o">();</span>
+<span class="n">topology</span><span class="o">.</span><span class="na">newStream</span><span class="o">(</span><span class="s">"myspoutid"</span><span class="o">,</span> <span class="k">new</span> <span class="n">MyRichSpout</span><span class="o">());</span>
 </code></pre></div>
 <p>All spouts in a Trident topology are required to be given a unique identifier for the stream – this identifier must be unique across all topologies run on the cluster. Trident will use this identifier to store metadata about what the spout has consumed in Zookeeper, including the txid and any metadata associated with the spout.</p>
 
diff --git a/_site/documentation/Trident-state.html b/_site/documentation/Trident-state.html
index 0d8990e..cd79194 100644
--- a/_site/documentation/Trident-state.html
+++ b/_site/documentation/Trident-state.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -131,17 +131,17 @@
 <p>Suppose your topology computes word count and you want to store the word counts in a key/value database. The key will be the word, and the value will contain the count. You&#39;ve already seen that storing just the count as the value isn&#39;t sufficient to know whether you&#39;ve processed a batch of tuples before. Instead, what you can do is store the transaction id with the count in the database as an atomic value. Then, when updating the count, you can just compare the transaction id in the database with the transaction id for the current batch. If they&#39;re the same, you skip the update – because of the strong ordering, you know for sure that the value in the database incorporates the current batch. If they&#39;re different, you increment the count. This logic works because the batch for a txid never changes, and Trident ensures that state updates are ordered among batches.</p>
 
 <p>Consider this example of why it works. Suppose you are processing txid 3 which consists of the following batch of tuples:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">[&quot;man&quot;]
-[&quot;man&quot;]
-[&quot;dog&quot;]
+<div class="highlight"><pre><code class="language-text" data-lang="text">["man"]
+["man"]
+["dog"]
 </code></pre></div>
 <p>Suppose the database currently holds the following key/value pairs:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">man =&gt; [count=3, txid=1]
+<div class="highlight"><pre><code class="language-text" data-lang="text">man =&gt; [count=3, txid=1]
 dog =&gt; [count=4, txid=3]
 apple =&gt; [count=10, txid=2]
 </code></pre></div>
 <p>The txid associated with &quot;man&quot; is txid 1. Since the current txid is 3, you know for sure that this batch of tuples is not represented in that count. So you can go ahead and increment the count by 2 and update the txid. On the other hand, the txid for &quot;dog&quot; is the same as the current txid. So you know for sure that the increment from the current batch is already represented in the database for the &quot;dog&quot; key. So you can skip the update. After completing updates, the database looks like this:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">man =&gt; [count=5, txid=3]
+<div class="highlight"><pre><code class="language-text" data-lang="text">man =&gt; [count=5, txid=3]
 dog =&gt; [count=4, txid=3]
 apple =&gt; [count=10, txid=2]
 </code></pre></div>
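<p>The txid-comparison rule above can be sketched in plain Java (a hypothetical <code>StoredValue</code> type and an in-memory map standing in for the key/value database; this is not Trident API):</p>

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the transactional-spout update rule (hypothetical types, not Trident API).
// Each stored value pairs the count with the txid of the batch that last wrote it.
class StoredValue {
    final long count;
    final long txid;
    StoredValue(long count, long txid) { this.count = count; this.txid = txid; }
}

class TransactionalCounter {
    private final Map<String, StoredValue> db = new HashMap<>();

    void applyBatchCount(String key, long partialCount, long currentTxid) {
        StoredValue stored = db.get(key);
        if (stored == null) {
            db.put(key, new StoredValue(partialCount, currentTxid));
        } else if (stored.txid == currentTxid) {
            // Same txid: the stored count already incorporates this exact batch
            // (the batch for a txid never changes), so skip the update.
        } else {
            // Different txid: strong ordering guarantees this batch is not yet
            // reflected, so increment the count and record the new txid.
            db.put(key, new StoredValue(stored.count + partialCount, currentTxid));
        }
    }

    long getCount(String key) {
        StoredValue v = db.get(key);
        return v == null ? 0 : v.count;
    }
}
```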
@@ -160,23 +160,23 @@
 <p>With opaque transactional spouts, it&#39;s no longer possible to use the trick of skipping state updates if the transaction id in the database is the same as the transaction id for the current batch. This is because the batch may have changed between state updates.</p>
 
 <p>What you can do is store more state in the database. Rather than store a value and transaction id in the database, you instead store a value, transaction id, and the previous value in the database. Let&#39;s again use the example of storing a count in the database. Suppose the partial count for your batch is &quot;2&quot; and it&#39;s time to apply a state update. Suppose the value in the database looks like this:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">{ value = 4,
-  prevValue = 1,
-  txid = 2
-}
-</code></pre></div>
+<div class="highlight"><pre><code class="language-text" data-lang="text">{ value = 4,
+  prevValue = 1,
+  txid = 2
+}
+</code></pre></div>
 <p>Suppose your current txid is 3, different than what&#39;s in the database. In this case, you set &quot;prevValue&quot; equal to &quot;value&quot;, increment &quot;value&quot; by your partial count, and update the txid. The new database value will look like this:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">{ value = 6,
-  prevValue = 4,
-  txid = 3
-}
-</code></pre></div>
+<div class="highlight"><pre><code class="language-text" data-lang="text">{ value = 6,
+  prevValue = 4,
+  txid = 3
+}
+</code></pre></div>
 <p>Now suppose your current txid is 2, equal to what&#39;s in the database. You know that the &quot;value&quot; in the database contains an update from a previous batch for your current txid, but that batch may have been different, so you have to ignore it. What you do in this case is increment &quot;prevValue&quot; by your partial count to compute the new &quot;value&quot;. You then set the value in the database to this:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">{ value = 3,
-  prevValue = 1,
-  txid = 2
-}
-</code></pre></div>
+<div class="highlight"><pre><code class="language-text" data-lang="text">{ value = 3,
+  prevValue = 1,
+  txid = 2
+}
+</code></pre></div>
 <p>This works because of the strong ordering of batches provided by Trident. Once Trident moves onto a new batch for state updates, it will never go back to a previous batch. And since opaque transactional spouts guarantee no overlap between batches – that each tuple is successfully processed by one batch – you can safely update based on the previous value.</p>
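<p>The opaque update rule can be sketched the same way (a hypothetical <code>OpaqueValue</code> type and an in-memory map; not Trident API):</p>

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the opaque-transactional update rule (hypothetical types, not Trident API).
// Storing prevValue alongside value and txid lets a changed batch be reapplied safely.
class OpaqueValue {
    long value;
    long prevValue;
    long txid;
    OpaqueValue(long value, long prevValue, long txid) {
        this.value = value; this.prevValue = prevValue; this.txid = txid;
    }
}

class OpaqueCounter {
    private final Map<String, OpaqueValue> db = new HashMap<>();

    void applyBatchCount(String key, long partialCount, long currentTxid) {
        OpaqueValue stored = db.get(key);
        if (stored == null) {
            db.put(key, new OpaqueValue(partialCount, 0, currentTxid));
        } else if (stored.txid == currentTxid) {
            // Same txid: a previous attempt at this batch may have contained
            // different tuples, so recompute from prevValue instead of skipping.
            stored.value = stored.prevValue + partialCount;
        } else {
            // New txid: roll value into prevValue, then apply the partial count.
            stored.prevValue = stored.value;
            stored.value += partialCount;
            stored.txid = currentTxid;
        }
    }

    long getValue(String key) {
        OpaqueValue v = db.get(key);
        return v == null ? 0 : v.value;
    }
}
```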
 
 <h2 id="non-transactional-spouts">Non-transactional spouts</h2>
@@ -196,66 +196,66 @@
 <h2 id="state-apis">State APIs</h2>
 
 <p>You&#39;ve seen the intricacies of what it takes to achieve exactly-once semantics. The nice thing about Trident is that it internalizes all the fault-tolerance logic within the State – as a user you don&#39;t have to deal with comparing txids, storing multiple values in the database, or anything like that. You can write code like this:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">TridentTopology</span> <span class="n">topology</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">TridentTopology</span><span class="o">();</span>        
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">TridentTopology</span> <span class="n">topology</span> <span class="o">=</span> <span class="k">new</span> <span class="n">TridentTopology</span><span class="o">();</span>        
 <span class="n">TridentState</span> <span class="n">wordCounts</span> <span class="o">=</span>
-      <span class="n">topology</span><span class="o">.</span><span class="na">newStream</span><span class="o">(</span><span class="s">&quot;spout1&quot;</span><span class="o">,</span> <span class="n">spout</span><span class="o">)</span>
-        <span class="o">.</span><span class="na">each</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;sentence&quot;</span><span class="o">),</span> <span class="k">new</span> <span class="nf">Split</span><span class="o">(),</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;word&quot;</span><span class="o">))</span>
-        <span class="o">.</span><span class="na">groupBy</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;word&quot;</span><span class="o">))</span>
-        <span class="o">.</span><span class="na">persistentAggregate</span><span class="o">(</span><span class="n">MemcachedState</span><span class="o">.</span><span class="na">opaque</span><span class="o">(</span><span class="n">serverLocations</span><span class="o">),</span> <span class="k">new</span> <span class="nf">Count</span><span class="o">(),</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;count&quot;</span><span class="o">))</span>                
+      <span class="n">topology</span><span class="o">.</span><span class="na">newStream</span><span class="o">(</span><span class="s">"spout1"</span><span class="o">,</span> <span class="n">spout</span><span class="o">)</span>
+        <span class="o">.</span><span class="na">each</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"sentence"</span><span class="o">),</span> <span class="k">new</span> <span class="n">Split</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"word"</span><span class="o">))</span>
+        <span class="o">.</span><span class="na">groupBy</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"word"</span><span class="o">))</span>
+        <span class="o">.</span><span class="na">persistentAggregate</span><span class="o">(</span><span class="n">MemcachedState</span><span class="o">.</span><span class="na">opaque</span><span class="o">(</span><span class="n">serverLocations</span><span class="o">),</span> <span class="k">new</span> <span class="n">Count</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"count"</span><span class="o">))</span>                
         <span class="o">.</span><span class="na">parallelismHint</span><span class="o">(</span><span class="mi">6</span><span class="o">);</span>
 </code></pre></div>
 <p>All the logic necessary to manage opaque transactional state logic is internalized in the MemcachedState.opaque call. Additionally, updates are automatically batched to minimize roundtrips to the database.</p>
 
 <p>The base State interface just has two methods:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">interface</span> <span class="nc">State</span> <span class="o">{</span>
-    <span class="kt">void</span> <span class="nf">beginCommit</span><span class="o">(</span><span class="n">Long</span> <span class="n">txid</span><span class="o">);</span> <span class="c1">// can be null for things like partitionPersist occurring off a DRPC stream</span>
-    <span class="kt">void</span> <span class="nf">commit</span><span class="o">(</span><span class="n">Long</span> <span class="n">txid</span><span class="o">);</span>
+    <span class="kt">void</span> <span class="n">beginCommit</span><span class="o">(</span><span class="n">Long</span> <span class="n">txid</span><span class="o">);</span> <span class="c1">// can be null for things like partitionPersist occurring off a DRPC stream</span>
+    <span class="kt">void</span> <span class="n">commit</span><span class="o">(</span><span class="n">Long</span> <span class="n">txid</span><span class="o">);</span>
 <span class="o">}</span>
 </code></pre></div>
 <p>You&#39;re told when a state update is beginning and when a state update is ending, and you&#39;re given the txid in each case. Trident assumes nothing about how your state works, what kind of methods there are to update it, and what kind of methods there are to read from it.</p>
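<p>As an illustration of how the txid can be put to use, here is a minimal sketch (stand-in names and a locally defined State interface, not Trident&#39;s actual classes) of a state that makes commits idempotent by skipping a batch whose txid it has already applied:</p>

```java
// Sketch only: a stand-in for Trident's State interface, showing how a custom
// State can use the txid to skip a replayed batch. All names are illustrative.
import java.util.HashMap;
import java.util.Map;

interface State {
    void beginCommit(Long txid); // may be null, e.g. for partitionPersist off a DRPC stream
    void commit(Long txid);
}

class IdempotentCounterState implements State {
    private final Map<String, Long> counts = new HashMap<>();
    private final Map<String, Long> pending = new HashMap<>();
    private Long lastCommittedTxid = null;

    public void beginCommit(Long txid) {
        pending.clear(); // start buffering this batch's updates
    }

    public void add(String key, long delta) {
        pending.merge(key, delta, Long::sum);
    }

    public void commit(Long txid) {
        // If this txid was already applied, the batch is a replay: do nothing.
        if (txid != null && txid.equals(lastCommittedTxid)) return;
        for (Map.Entry<String, Long> e : pending.entrySet()) {
            counts.merge(e.getKey(), e.getValue(), Long::sum);
        }
        lastCommittedTxid = txid;
    }

    public long get(String key) {
        return counts.getOrDefault(key, 0L);
    }
}

public class TxidDemo {
    public static long run() {
        IdempotentCounterState state = new IdempotentCounterState();
        state.beginCommit(1L);
        state.add("word", 2);
        state.commit(1L);
        // Replay of the same batch with the same txid: commit is skipped.
        state.beginCommit(1L);
        state.add("word", 2);
        state.commit(1L);
        return state.get("word");
    }

    public static void main(String[] args) {
        System.out.println(run()); // 2
    }
}
```

<p>This is the &quot;transactional&quot; flavor of fault tolerance; opaque state keeps the previous value alongside the current one so it can also handle a replayed batch whose contents changed.</p>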
 
 <p>Suppose you have a home-grown database that contains user location information and you want to be able to access it from Trident. Your State implementation would have methods for getting and setting user information:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">class</span> <span class="nc">LocationDB</span> <span class="kd">implements</span> <span class="n">State</span> <span class="o">{</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">beginCommit</span><span class="o">(</span><span class="n">Long</span> <span class="n">txid</span><span class="o">)</span> <span class="o">{</span>    
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">beginCommit</span><span class="o">(</span><span class="n">Long</span> <span class="n">txid</span><span class="o">)</span> <span class="o">{</span>    
     <span class="o">}</span>
 
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">commit</span><span class="o">(</span><span class="n">Long</span> <span class="n">txid</span><span class="o">)</span> <span class="o">{</span>    
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">commit</span><span class="o">(</span><span class="n">Long</span> <span class="n">txid</span><span class="o">)</span> <span class="o">{</span>    
     <span class="o">}</span>
 
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">setLocation</span><span class="o">(</span><span class="kt">long</span> <span class="n">userId</span><span class="o">,</span> <span class="n">String</span> <span class="n">location</span><span class="o">)</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">setLocation</span><span class="o">(</span><span class="kt">long</span> <span class="n">userId</span><span class="o">,</span> <span class="n">String</span> <span class="n">location</span><span class="o">)</span> <span class="o">{</span>
       <span class="c1">// code to access database and set location</span>
     <span class="o">}</span>
 
-    <span class="kd">public</span> <span class="n">String</span> <span class="nf">getLocation</span><span class="o">(</span><span class="kt">long</span> <span class="n">userId</span><span class="o">)</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="n">String</span> <span class="n">getLocation</span><span class="o">(</span><span class="kt">long</span> <span class="n">userId</span><span class="o">)</span> <span class="o">{</span>
       <span class="c1">// code to get location from database</span>
     <span class="o">}</span>
 <span class="o">}</span>
 </code></pre></div>
 <p>You then provide Trident a StateFactory that can create instances of your State object within Trident tasks. The StateFactory for your LocationDB might look something like this:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">class</span> <span class="nc">LocationDBFactory</span> <span class="kd">implements</span> <span class="n">StateFactory</span> <span class="o">{</span>
-   <span class="kd">public</span> <span class="n">State</span> <span class="nf">makeState</span><span class="o">(</span><span class="n">Map</span> <span class="n">conf</span><span class="o">,</span> <span class="kt">int</span> <span class="n">partitionIndex</span><span class="o">,</span> <span class="kt">int</span> <span class="n">numPartitions</span><span class="o">)</span> <span class="o">{</span>
-      <span class="k">return</span> <span class="k">new</span> <span class="nf">LocationDB</span><span class="o">();</span>
+   <span class="kd">public</span> <span class="n">State</span> <span class="n">makeState</span><span class="o">(</span><span class="n">Map</span> <span class="n">conf</span><span class="o">,</span> <span class="kt">int</span> <span class="n">partitionIndex</span><span class="o">,</span> <span class="kt">int</span> <span class="n">numPartitions</span><span class="o">)</span> <span class="o">{</span>
+      <span class="k">return</span> <span class="k">new</span> <span class="n">LocationDB</span><span class="o">();</span>
    <span class="o">}</span> 
 <span class="o">}</span>
 </code></pre></div>
 <p>Trident provides the QueryFunction interface for writing Trident operations that query a source of state, and the StateUpdater interface for writing Trident operations that update a source of state. For example, let&#39;s write an operation &quot;QueryLocation&quot; that queries the LocationDB for the locations of users. Let&#39;s start off with how you would use it in a topology. Let&#39;s say this topology consumes an input stream of userids:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">TridentTopology</span> <span class="n">topology</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">TridentTopology</span><span class="o">();</span>
-<span class="n">TridentState</span> <span class="n">locations</span> <span class="o">=</span> <span class="n">topology</span><span class="o">.</span><span class="na">newStaticState</span><span class="o">(</span><span class="k">new</span> <span class="nf">LocationDBFactory</span><span class="o">());</span>
-<span class="n">topology</span><span class="o">.</span><span class="na">newStream</span><span class="o">(</span><span class="s">&quot;myspout&quot;</span><span class="o">,</span> <span class="n">spout</span><span class="o">)</span>
-        <span class="o">.</span><span class="na">stateQuery</span><span class="o">(</span><span class="n">locations</span><span class="o">,</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;userid&quot;</span><span class="o">),</span> <span class="k">new</span> <span class="nf">QueryLocation</span><span class="o">(),</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;location&quot;</span><span class="o">))</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">TridentTopology</span> <span class="n">topology</span> <span class="o">=</span> <span class="k">new</span> <span class="n">TridentTopology</span><span class="o">();</span>
+<span class="n">TridentState</span> <span class="n">locations</span> <span class="o">=</span> <span class="n">topology</span><span class="o">.</span><span class="na">newStaticState</span><span class="o">(</span><span class="k">new</span> <span class="n">LocationDBFactory</span><span class="o">());</span>
+<span class="n">topology</span><span class="o">.</span><span class="na">newStream</span><span class="o">(</span><span class="s">"myspout"</span><span class="o">,</span> <span class="n">spout</span><span class="o">)</span>
+        <span class="o">.</span><span class="na">stateQuery</span><span class="o">(</span><span class="n">locations</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"userid"</span><span class="o">),</span> <span class="k">new</span> <span class="n">QueryLocation</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"location"</span><span class="o">))</span>
 </code></pre></div>
 <p>Now let&#39;s take a look at what the implementation of QueryLocation would look like:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">class</span> <span class="nc">QueryLocation</span> <span class="kd">extends</span> <span class="n">BaseQueryFunction</span><span class="o">&lt;</span><span class="n">LocationDB</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="o">{</span>
-    <span class="kd">public</span> <span class="n">List</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;</span> <span class="nf">batchRetrieve</span><span class="o">(</span><span class="n">LocationDB</span> <span class="n">state</span><span class="o">,</span> <span class="n">List</span><span class="o">&lt;</span><span class="n">TridentTuple</span><span class="o">&gt;</span> <span class="n">inputs</span><span class="o">)</span> <span class="o">{</span>
-        <span class="n">List</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;</span> <span class="n">ret</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">ArrayList</span><span class="o">();</span>
+    <span class="kd">public</span> <span class="n">List</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;</span> <span class="n">batchRetrieve</span><span class="o">(</span><span class="n">LocationDB</span> <span class="n">state</span><span class="o">,</span> <span class="n">List</span><span class="o">&lt;</span><span class="n">TridentTuple</span><span class="o">&gt;</span> <span class="n">inputs</span><span class="o">)</span> <span class="o">{</span>
+        <span class="n">List</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;</span> <span class="n">ret</span> <span class="o">=</span> <span class="k">new</span> <span class="n">ArrayList</span><span class="o">();</span>
         <span class="k">for</span><span class="o">(</span><span class="n">TridentTuple</span> <span class="nl">input:</span> <span class="n">inputs</span><span class="o">)</span> <span class="o">{</span>
             <span class="n">ret</span><span class="o">.</span><span class="na">add</span><span class="o">(</span><span class="n">state</span><span class="o">.</span><span class="na">getLocation</span><span class="o">(</span><span class="n">input</span><span class="o">.</span><span class="na">getLong</span><span class="o">(</span><span class="mi">0</span><span class="o">)));</span>
         <span class="o">}</span>
         <span class="k">return</span> <span class="n">ret</span><span class="o">;</span>
     <span class="o">}</span>
 
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">execute</span><span class="o">(</span><span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">,</span> <span class="n">String</span> <span class="n">location</span><span class="o">,</span> <span class="n">TridentCollector</span> <span class="n">collector</span><span class="o">)</span> <span class="o">{</span>
-        <span class="n">collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="k">new</span> <span class="nf">Values</span><span class="o">(</span><span class="n">location</span><span class="o">));</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">execute</span><span class="o">(</span><span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">,</span> <span class="n">String</span> <span class="n">location</span><span class="o">,</span> <span class="n">TridentCollector</span> <span class="n">collector</span><span class="o">)</span> <span class="o">{</span>
+        <span class="n">collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="k">new</span> <span class="n">Values</span><span class="o">(</span><span class="n">location</span><span class="o">));</span>
     <span class="o">}</span>    
 <span class="o">}</span>
 </code></pre></div>
@@ -263,24 +263,24 @@
 
 <p>You can see that this code doesn&#39;t take advantage of the batching that Trident does, since it queries the LocationDB one userid at a time. So a better way to write the LocationDB would be like this:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">class</span> <span class="nc">LocationDB</span> <span class="kd">implements</span> <span class="n">State</span> <span class="o">{</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">beginCommit</span><span class="o">(</span><span class="n">Long</span> <span class="n">txid</span><span class="o">)</span> <span class="o">{</span>    
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">beginCommit</span><span class="o">(</span><span class="n">Long</span> <span class="n">txid</span><span class="o">)</span> <span class="o">{</span>    
     <span class="o">}</span>
 
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">commit</span><span class="o">(</span><span class="n">Long</span> <span class="n">txid</span><span class="o">)</span> <span class="o">{</span>    
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">commit</span><span class="o">(</span><span class="n">Long</span> <span class="n">txid</span><span class="o">)</span> <span class="o">{</span>    
     <span class="o">}</span>
 
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">setLocationsBulk</span><span class="o">(</span><span class="n">List</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">&gt;</span> <span class="n">userIds</span><span class="o">,</span> <span class="n">List</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;</span> <span class="n">locations</span><span class="o">)</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">setLocationsBulk</span><span class="o">(</span><span class="n">List</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">&gt;</span> <span class="n">userIds</span><span class="o">,</span> <span class="n">List</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;</span> <span class="n">locations</span><span class="o">)</span> <span class="o">{</span>
       <span class="c1">// set locations in bulk</span>
     <span class="o">}</span>
 
-    <span class="kd">public</span> <span class="n">List</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;</span> <span class="nf">bulkGetLocations</span><span class="o">(</span><span class="n">List</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">&gt;</span> <span class="n">userIds</span><span class="o">)</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="n">List</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;</span> <span class="n">bulkGetLocations</span><span class="o">(</span><span class="n">List</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">&gt;</span> <span class="n">userIds</span><span class="o">)</span> <span class="o">{</span>
       <span class="c1">// get locations in bulk</span>
     <span class="o">}</span>
 <span class="o">}</span>
 </code></pre></div>
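<p>As a hedged sketch of the bulk API just shown, here is an in-memory stand-in where a HashMap takes the place of the real database (the class name is illustrative, not part of Trident):</p>

```java
// Illustrative sketch: an in-memory LocationDB with the bulk methods described
// above. A real implementation would issue one database round trip per bulk
// call; here a HashMap stands in for the database.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class InMemoryLocationDB {
    private final Map<Long, String> table = new HashMap<>();

    public void setLocationsBulk(List<Long> userIds, List<String> locations) {
        // One logical round trip for the whole batch.
        for (int i = 0; i < userIds.size(); i++) {
            table.put(userIds.get(i), locations.get(i));
        }
    }

    public List<String> bulkGetLocations(List<Long> userIds) {
        // Results come back in the same order as the requested ids.
        List<String> ret = new ArrayList<>();
        for (Long id : userIds) {
            ret.add(table.get(id));
        }
        return ret;
    }

    public static void main(String[] args) {
        InMemoryLocationDB db = new InMemoryLocationDB();
        db.setLocationsBulk(List.of(1L, 2L), List.of("SF", "NYC"));
        System.out.println(db.bulkGetLocations(List.of(2L, 1L))); // [NYC, SF]
    }
}
```

<p>Keeping the results in input order matters: batchRetrieve must return one result per input tuple, in the same order as the inputs.</p>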
 <p>Then, you can write the QueryLocation function like this:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">class</span> <span class="nc">QueryLocation</span> <span class="kd">extends</span> <span class="n">BaseQueryFunction</span><span class="o">&lt;</span><span class="n">LocationDB</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;</span> <span class="o">{</span>
-    <span class="kd">public</span> <span class="n">List</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;</span> <span class="nf">batchRetrieve</span><span class="o">(</span><span class="n">LocationDB</span> <span class="n">state</span><span class="o">,</span> <span class="n">List</span><span class="o">&lt;</span><span class="n">TridentTuple</span><span class="o">&gt;</span> <span class="n">inputs</span><span class="o">)</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="n">List</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;</span> <span class="n">batchRetrieve</span><span class="o">(</span><span class="n">LocationDB</span> <span class="n">state</span><span class="o">,</span> <span class="n">List</span><span class="o">&lt;</span><span class="n">TridentTuple</span><span class="o">&gt;</span> <span class="n">inputs</span><span class="o">)</span> <span class="o">{</span>
         <span class="n">List</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">&gt;</span> <span class="n">userIds</span> <span class="o">=</span> <span class="k">new</span> <span class="n">ArrayList</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">&gt;();</span>
         <span class="k">for</span><span class="o">(</span><span class="n">TridentTuple</span> <span class="nl">input:</span> <span class="n">inputs</span><span class="o">)</span> <span class="o">{</span>
             <span class="n">userIds</span><span class="o">.</span><span class="na">add</span><span class="o">(</span><span class="n">input</span><span class="o">.</span><span class="na">getLong</span><span class="o">(</span><span class="mi">0</span><span class="o">));</span>
@@ -288,8 +288,8 @@
         <span class="k">return</span> <span class="n">state</span><span class="o">.</span><span class="na">bulkGetLocations</span><span class="o">(</span><span class="n">userIds</span><span class="o">);</span>
     <span class="o">}</span>
 
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">execute</span><span class="o">(</span><span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">,</span> <span class="n">String</span> <span class="n">location</span><span class="o">,</span> <span class="n">TridentCollector</span> <span class="n">collector</span><span class="o">)</span> <span class="o">{</span>
-        <span class="n">collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="k">new</span> <span class="nf">Values</span><span class="o">(</span><span class="n">location</span><span class="o">));</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">execute</span><span class="o">(</span><span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">,</span> <span class="n">String</span> <span class="n">location</span><span class="o">,</span> <span class="n">TridentCollector</span> <span class="n">collector</span><span class="o">)</span> <span class="o">{</span>
+        <span class="n">collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="k">new</span> <span class="n">Values</span><span class="o">(</span><span class="n">location</span><span class="o">));</span>
     <span class="o">}</span>    
 <span class="o">}</span>
 </code></pre></div>
@@ -297,7 +297,7 @@
 
 <p>To update state, you make use of the StateUpdater interface. Here&#39;s a StateUpdater that updates a LocationDB with new location information:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">class</span> <span class="nc">LocationUpdater</span> <span class="kd">extends</span> <span class="n">BaseStateUpdater</span><span class="o">&lt;</span><span class="n">LocationDB</span><span class="o">&gt;</span> <span class="o">{</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">updateState</span><span class="o">(</span><span class="n">LocationDB</span> <span class="n">state</span><span class="o">,</span> <span class="n">List</span><span class="o">&lt;</span><span class="n">TridentTuple</span><span class="o">&gt;</span> <span class="n">tuples</span><span class="o">,</span> <span class="n">TridentCollector</span> <span class="n">collector</span><span class="o">)</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">updateState</span><span class="o">(</span><span class="n">LocationDB</span> <span class="n">state</span><span class="o">,</span> <span class="n">List</span><span class="o">&lt;</span><span class="n">TridentTuple</span><span class="o">&gt;</span> <span class="n">tuples</span><span class="o">,</span> <span class="n">TridentCollector</span> <span class="n">collector</span><span class="o">)</span> <span class="o">{</span>
         <span class="n">List</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">&gt;</span> <span class="n">ids</span> <span class="o">=</span> <span class="k">new</span> <span class="n">ArrayList</span><span class="o">&lt;</span><span class="n">Long</span><span class="o">&gt;();</span>
         <span class="n">List</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;</span> <span class="n">locations</span> <span class="o">=</span> <span class="k">new</span> <span class="n">ArrayList</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;();</span>
         <span class="k">for</span><span class="o">(</span><span class="n">TridentTuple</span> <span class="nl">t:</span> <span class="n">tuples</span><span class="o">)</span> <span class="o">{</span>
@@ -309,10 +309,10 @@
 <span class="o">}</span>
 </code></pre></div>
 <p>Here&#39;s how you would use this operation in a Trident topology:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">TridentTopology</span> <span class="n">topology</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">TridentTopology</span><span class="o">();</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">TridentTopology</span> <span class="n">topology</span> <span class="o">=</span> <span class="k">new</span> <span class="n">TridentTopology</span><span class="o">();</span>
 <span class="n">TridentState</span> <span class="n">locations</span> <span class="o">=</span> 
-    <span class="n">topology</span><span class="o">.</span><span class="na">newStream</span><span class="o">(</span><span class="s">&quot;locations&quot;</span><span class="o">,</span> <span class="n">locationsSpout</span><span class="o">)</span>
-        <span class="o">.</span><span class="na">partitionPersist</span><span class="o">(</span><span class="k">new</span> <span class="nf">LocationDBFactory</span><span class="o">(),</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;userid&quot;</span><span class="o">,</span> <span class="s">&quot;location&quot;</span><span class="o">),</span> <span class="k">new</span> <span class="nf">LocationUpdater</span><span class="o">())</span>
+    <span class="n">topology</span><span class="o">.</span><span class="na">newStream</span><span class="o">(</span><span class="s">"locations"</span><span class="o">,</span> <span class="n">locationsSpout</span><span class="o">)</span>
+        <span class="o">.</span><span class="na">partitionPersist</span><span class="o">(</span><span class="k">new</span> <span class="n">LocationDBFactory</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"userid"</span><span class="o">,</span> <span class="s">"location"</span><span class="o">),</span> <span class="k">new</span> <span class="n">LocationUpdater</span><span class="o">())</span>
 </code></pre></div>
 <p>The partitionPersist operation updates a source of state. The StateUpdater receives the State and a batch of tuples with updates to that State. This code just grabs the userids and locations from the input tuples and does a bulk set into the State. </p>
 
@@ -323,25 +323,25 @@
 <h2 id="persistentaggregate">persistentAggregate</h2>
 
 <p>Trident has another method for updating States called persistentAggregate. You&#39;ve seen this used in the streaming word count example, shown again below:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">TridentTopology</span> <span class="n">topology</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">TridentTopology</span><span class="o">();</span>        
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">TridentTopology</span> <span class="n">topology</span> <span class="o">=</span> <span class="k">new</span> <span class="n">TridentTopology</span><span class="o">();</span>        
 <span class="n">TridentState</span> <span class="n">wordCounts</span> <span class="o">=</span>
-      <span class="n">topology</span><span class="o">.</span><span class="na">newStream</span><span class="o">(</span><span class="s">&quot;spout1&quot;</span><span class="o">,</span> <span class="n">spout</span><span class="o">)</span>
-        <span class="o">.</span><span class="na">each</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;sentence&quot;</span><span class="o">),</span> <span class="k">new</span> <span class="nf">Split</span><span class="o">(),</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;word&quot;</span><span class="o">))</span>
-        <span class="o">.</span><span class="na">groupBy</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;word&quot;</span><span class="o">))</span>
-        <span class="o">.</span><span class="na">persistentAggregate</span><span class="o">(</span><span class="k">new</span> <span class="n">MemoryMapState</span><span class="o">.</span><span class="na">Factory</span><span class="o">(),</span> <span class="k">new</span> <span class="nf">Count</span><span class="o">(),</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;count&quot;</span><span class="o">))</span>
+      <span class="n">topology</span><span class="o">.</span><span class="na">newStream</span><span class="o">(</span><span class="s">"spout1"</span><span class="o">,</span> <span class="n">spout</span><span class="o">)</span>
+        <span class="o">.</span><span class="na">each</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"sentence"</span><span class="o">),</span> <span class="k">new</span> <span class="n">Split</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"word"</span><span class="o">))</span>
+        <span class="o">.</span><span class="na">groupBy</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"word"</span><span class="o">))</span>
+        <span class="o">.</span><span class="na">persistentAggregate</span><span class="o">(</span><span class="k">new</span> <span class="n">MemoryMapState</span><span class="o">.</span><span class="na">Factory</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Count</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"count"</span><span class="o">))</span>
 </code></pre></div>
 <p>persistentAggregate is an additional abstraction built on top of partitionPersist that knows how to take a Trident aggregator and use it to apply updates to the source of state. In this case, since this is a grouped stream, Trident expects the state you provide to implement the &quot;MapState&quot; interface. The grouping fields will be the keys in the state, and the aggregation result will be the values in the state. The &quot;MapState&quot; interface looks like this:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">interface</span> <span class="nc">MapState</span><span class="o">&lt;</span><span class="n">T</span><span class="o">&gt;</span> <span class="kd">extends</span> <span class="n">State</span> <span class="o">{</span>
-    <span class="n">List</span><span class="o">&lt;</span><span class="n">T</span><span class="o">&gt;</span> <span class="nf">multiGet</span><span class="o">(</span><span class="n">List</span><span class="o">&lt;</span><span class="n">List</span><span class="o">&lt;</span><span class="n">Object</span><span class="o">&gt;&gt;</span> <span class="n">keys</span><span class="o">);</span>
-    <span class="n">List</span><span class="o">&lt;</span><span class="n">T</span><span class="o">&gt;</span> <span class="nf">multiUpdate</span><span class="o">(</span><span class="n">List</span><span class="o">&lt;</span><span class="n">List</span><span class="o">&lt;</span><span class="n">Object</span><span class="o">&gt;&gt;</span> <span class="n">keys</span><span class="o">,</span> <span class="n">List</span><span class="o">&lt;</span><span class="n">ValueUpdater</span><span class="o">&gt;</span> <span class="n">updaters</span><span class="o">);</span>
-    <span class="kt">void</span> <span class="nf">multiPut</span><span class="o">(</span><span class="n">List</span><span class="o">&lt;</span><span class="n">List</span><span class="o">&lt;</span><span class="n">Object</span><span class="o">&gt;&gt;</span> <span class="n">keys</span><span class="o">,</span> <span class="n">List</span><span class="o">&lt;</span><span class="n">T</span><span class="o">&gt;</span> <span class="n">vals</span><span class="o">);</span>
+    <span class="n">List</span><span class="o">&lt;</span><span class="n">T</span><span class="o">&gt;</span> <span class="n">multiGet</span><span class="o">(</span><span class="n">List</span><span class="o">&lt;</span><span class="n">List</span><span class="o">&lt;</span><span class="n">Object</span><span class="o">&gt;&gt;</span> <span class="n">keys</span><span class="o">);</span>
+    <span class="n">List</span><span class="o">&lt;</span><span class="n">T</span><span class="o">&gt;</span> <span class="n">multiUpdate</span><span class="o">(</span><span class="n">List</span><span class="o">&lt;</span><span class="n">List</span><span class="o">&lt;</span><span class="n">Object</span><span class="o">&gt;&gt;</span> <span class="n">keys</span><span class="o">,</span> <span class="n">List</span><span class="o">&lt;</span><span class="n">ValueUpdater</span><span class="o">&gt;</span> <span class="n">updaters</span><span class="o">);</span>
+    <span class="kt">void</span> <span class="n">multiPut</span><span class="o">(</span><span class="n">List</span><span class="o">&lt;</span><span class="n">List</span><span class="o">&lt;</span><span class="n">Object</span><span class="o">&gt;&gt;</span> <span class="n">keys</span><span class="o">,</span> <span class="n">List</span><span class="o">&lt;</span><span class="n">T</span><span class="o">&gt;</span> <span class="n">vals</span><span class="o">);</span>
 <span class="o">}</span>
 </code></pre></div>
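<p>To make the key/value semantics concrete, here is an illustrative in-memory MapState-style store keyed by the grouping fields. This is a sketch under assumptions: ValueUpdater is approximated with java.util.function.UnaryOperator, and none of these names are Trident&#39;s actual classes:</p>

```java
// Sketch: an in-memory MapState-like store. Each key is the list of grouping
// field values; each value is the aggregation result for that group.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.UnaryOperator;

public class InMemoryMapState<T> {
    private final Map<List<Object>, T> store = new HashMap<>();

    public List<T> multiGet(List<List<Object>> keys) {
        List<T> ret = new ArrayList<>();
        for (List<Object> k : keys) ret.add(store.get(k));
        return ret;
    }

    public List<T> multiUpdate(List<List<Object>> keys, List<UnaryOperator<T>> updaters) {
        // Apply each updater to the current value for its key (null if unseen).
        List<T> ret = new ArrayList<>();
        for (int i = 0; i < keys.size(); i++) {
            T updated = updaters.get(i).apply(store.get(keys.get(i)));
            store.put(keys.get(i), updated);
            ret.add(updated);
        }
        return ret;
    }

    public void multiPut(List<List<Object>> keys, List<T> vals) {
        for (int i = 0; i < keys.size(); i++) store.put(keys.get(i), vals.get(i));
    }

    public static void main(String[] args) {
        InMemoryMapState<Long> counts = new InMemoryMapState<>();
        List<Object> key = List.<Object>of("word"); // grouping fields form the key
        counts.multiPut(List.of(key), List.of(1L));
        // A Count-style update: null means the key is unseen, otherwise increment.
        counts.multiUpdate(List.of(key), List.of(v -> v == null ? 1L : v + 1L));
        System.out.println(counts.multiGet(List.of(key))); // [2]
    }
}
```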
 <p>When you do aggregations on non-grouped streams (a global aggregation), Trident expects your State object to implement the &quot;Snapshottable&quot; interface:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">interface</span> <span class="nc">Snapshottable</span><span class="o">&lt;</span><span class="n">T</span><span class="o">&gt;</span> <span class="kd">extends</span> <span class="n">State</span> <span class="o">{</span>
-    <span class="n">T</span> <span class="nf">get</span><span class="o">();</span>
-    <span class="n">T</span> <span class="nf">update</span><span class="o">(</span><span class="n">ValueUpdater</span> <span class="n">updater</span><span class="o">);</span>
-    <span class="kt">void</span> <span class="nf">set</span><span class="o">(</span><span class="n">T</span> <span class="n">o</span><span class="o">);</span>
+    <span class="n">T</span> <span class="n">get</span><span class="o">();</span>
+    <span class="n">T</span> <span class="n">update</span><span class="o">(</span><span class="n">ValueUpdater</span> <span class="n">updater</span><span class="o">);</span>
+    <span class="kt">void</span> <span class="n">set</span><span class="o">(</span><span class="n">T</span> <span class="n">o</span><span class="o">);</span>
 <span class="o">}</span>
 </code></pre></div>
 <p><a href="https://github.com/apache/storm/blob/master/storm-core/src/jvm/storm/trident/testing/MemoryMapState.java">MemoryMapState</a> and <a href="https://github.com/nathanmarz/trident-memcached/blob/master/src/jvm/trident/memcached/MemcachedState.java">MemcachedState</a> each implement both of these interfaces.</p>
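A global-aggregation state can be sketched with a small in-memory class. The interfaces below are copied locally so the example compiles on its own (the real ones live in Trident's `storm.trident.state` package, and a real implementation would also persist the snapshot durably):

```java
// Local stand-ins for Trident's ValueUpdater and Snapshottable, so this
// sketch is self-contained; they mirror the interfaces shown above.
interface ValueUpdater<T> {
    T update(T stored);
}

interface Snapshottable<T> {
    T get();
    T update(ValueUpdater<T> updater);
    void set(T o);
}

// Toy snapshot state holding a single value in memory.
class MemorySnapshotState<T> implements Snapshottable<T> {
    private T snapshot;

    public T get() { return snapshot; }

    public T update(ValueUpdater<T> updater) {
        snapshot = updater.update(snapshot);
        return snapshot;
    }

    public void set(T o) { snapshot = o; }
}
```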
@@ -350,8 +350,8 @@
 
 <p>Trident makes it easy to implement MapStates, doing almost all the work for you. The OpaqueMap, TransactionalMap, and NonTransactionalMap classes each implement the respective fault-tolerance logic. You simply provide these classes with an IBackingMap implementation that knows how to do multiGets and multiPuts of the respective key/values. IBackingMap looks like this:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">interface</span> <span class="nc">IBackingMap</span><span class="o">&lt;</span><span class="n">T</span><span class="o">&gt;</span> <span class="o">{</span>
-    <span class="n">List</span><span class="o">&lt;</span><span class="n">T</span><span class="o">&gt;</span> <span class="nf">multiGet</span><span class="o">(</span><span class="n">List</span><span class="o">&lt;</span><span class="n">List</span><span class="o">&lt;</span><span class="n">Object</span><span class="o">&gt;&gt;</span> <span class="n">keys</span><span class="o">);</span> 
-    <span class="kt">void</span> <span class="nf">multiPut</span><span class="o">(</span><span class="n">List</span><span class="o">&lt;</span><span class="n">List</span><span class="o">&lt;</span><span class="n">Object</span><span class="o">&gt;&gt;</span> <span class="n">keys</span><span class="o">,</span> <span class="n">List</span><span class="o">&lt;</span><span class="n">T</span><span class="o">&gt;</span> <span class="n">vals</span><span class="o">);</span> 
+    <span class="n">List</span><span class="o">&lt;</span><span class="n">T</span><span class="o">&gt;</span> <span class="n">multiGet</span><span class="o">(</span><span class="n">List</span><span class="o">&lt;</span><span class="n">List</span><span class="o">&lt;</span><span class="n">Object</span><span class="o">&gt;&gt;</span> <span class="n">keys</span><span class="o">);</span> 
+    <span class="kt">void</span> <span class="n">multiPut</span><span class="o">(</span><span class="n">List</span><span class="o">&lt;</span><span class="n">List</span><span class="o">&lt;</span><span class="n">Object</span><span class="o">&gt;&gt;</span> <span class="n">keys</span><span class="o">,</span> <span class="n">List</span><span class="o">&lt;</span><span class="n">T</span><span class="o">&gt;</span> <span class="n">vals</span><span class="o">);</span> 
 <span class="o">}</span>
 </code></pre></div>
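A minimal backing map along these lines might simply keep everything in memory. This sketch copies the interface locally so it compiles on its own; a real IBackingMap would talk to Memcached, Cassandra, or another store:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Local mirror of Trident's IBackingMap interface for this sketch.
interface IBackingMap<T> {
    List<T> multiGet(List<List<Object>> keys);
    void multiPut(List<List<Object>> keys, List<T> vals);
}

// Toy in-memory backing map keyed by the (possibly compound) grouping key.
class InMemoryBackingMap<T> implements IBackingMap<T> {
    private final Map<List<Object>, T> store = new ConcurrentHashMap<>();

    public List<T> multiGet(List<List<Object>> keys) {
        List<T> result = new ArrayList<>();
        for (List<Object> key : keys) {
            result.add(store.get(key)); // null when the key has no value yet
        }
        return result;
    }

    public void multiPut(List<List<Object>> keys, List<T> vals) {
        for (int i = 0; i < keys.size(); i++) {
            store.put(keys.get(i), vals.get(i));
        }
    }
}
```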
 <p>OpaqueMaps will call multiPut with <a href="https://github.com/apache/storm/blob/master/storm-core/src/jvm/storm/trident/state/OpaqueValue.java">OpaqueValue</a>s for the vals, TransactionalMaps will give <a href="https://github.com/apache/storm/blob/master/storm-core/src/jvm/storm/trident/state/TransactionalValue.java">TransactionalValue</a>s for the vals, and NonTransactionalMaps will just pass the objects from the topology through.</p>
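The bookkeeping that makes opaque state safe under batch replays can be illustrated with a self-contained sketch. The class and field names here are illustrative, not the real OpaqueValue API; the idea is that each stored value remembers the txid of the last batch applied plus the value from before that batch:

```java
// Sketch of opaque-value bookkeeping: if the same txid is applied twice,
// the batch is a replay (possibly with different tuples), so recompute
// from `prev` instead of double-counting on top of `curr`.
class OpaqueVal {
    final long txid;  // txid of the batch that produced `curr`
    final Long curr;  // value after that batch
    final Long prev;  // value before that batch

    OpaqueVal(long txid, Long curr, Long prev) {
        this.txid = txid;
        this.curr = curr;
        this.prev = prev;
    }

    // Apply a batch's increment for one key.
    OpaqueVal apply(long batchTxid, long delta) {
        boolean replay = (batchTxid == txid);
        long base = replay ? (prev == null ? 0 : prev)
                           : (curr == null ? 0 : curr);
        return new OpaqueVal(batchTxid, base + delta, replay ? prev : curr);
    }
}
```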
diff --git a/_site/documentation/Trident-tutorial.html b/_site/documentation/Trident-tutorial.html
index 8a5504a..a0f3a1b 100644
--- a/_site/documentation/Trident-tutorial.html
+++ b/_site/documentation/Trident-tutorial.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -102,20 +102,20 @@
 </ol>
 
 <p>For the purposes of illustration, this example will read an infinite stream of sentences from the following source:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">FixedBatchSpout</span> <span class="n">spout</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">FixedBatchSpout</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;sentence&quot;</span><span class="o">),</span> <span class="mi">3</span><span class="o">,</span>
-               <span class="k">new</span> <span class="nf">Values</span><span class="o">(</span><span class="s">&quot;the cow jumped over the moon&quot;</span><span class="o">),</span>
-               <span class="k">new</span> <span class="nf">Values</span><span class="o">(</span><span class="s">&quot;the man went to the store and bought some candy&quot;</span><span class="o">),</span>
-               <span class="k">new</span> <span class="nf">Values</span><span class="o">(</span><span class="s">&quot;four score and seven years ago&quot;</span><span class="o">),</span>
-               <span class="k">new</span> <span class="nf">Values</span><span class="o">(</span><span class="s">&quot;how many apples can you eat&quot;</span><span class="o">));</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">FixedBatchSpout</span> <span class="n">spout</span> <span class="o">=</span> <span class="k">new</span> <span class="n">FixedBatchSpout</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"sentence"</span><span class="o">),</span> <span class="mi">3</span><span class="o">,</span>
+               <span class="k">new</span> <span class="n">Values</span><span class="o">(</span><span class="s">"the cow jumped over the moon"</span><span class="o">),</span>
+               <span class="k">new</span> <span class="n">Values</span><span class="o">(</span><span class="s">"the man went to the store and bought some candy"</span><span class="o">),</span>
+               <span class="k">new</span> <span class="n">Values</span><span class="o">(</span><span class="s">"four score and seven years ago"</span><span class="o">),</span>
+               <span class="k">new</span> <span class="n">Values</span><span class="o">(</span><span class="s">"how many apples can you eat"</span><span class="o">));</span>
 <span class="n">spout</span><span class="o">.</span><span class="na">setCycle</span><span class="o">(</span><span class="kc">true</span><span class="o">);</span>
 </code></pre></div>
 <p>This spout cycles through that set of sentences over and over to produce the sentence stream. Here&#39;s the code to do the streaming word count part of the computation:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">TridentTopology</span> <span class="n">topology</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">TridentTopology</span><span class="o">();</span>        
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">TridentTopology</span> <span class="n">topology</span> <span class="o">=</span> <span class="k">new</span> <span class="n">TridentTopology</span><span class="o">();</span>        
 <span class="n">TridentState</span> <span class="n">wordCounts</span> <span class="o">=</span>
-     <span class="n">topology</span><span class="o">.</span><span class="na">newStream</span><span class="o">(</span><span class="s">&quot;spout1&quot;</span><span class="o">,</span> <span class="n">spout</span><span class="o">)</span>
-       <span class="o">.</span><span class="na">each</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;sentence&quot;</span><span class="o">),</span> <span class="k">new</span> <span class="nf">Split</span><span class="o">(),</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;word&quot;</span><span class="o">))</span>
-       <span class="o">.</span><span class="na">groupBy</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;word&quot;</span><span class="o">))</span>
-       <span class="o">.</span><span class="na">persistentAggregate</span><span class="o">(</span><span class="k">new</span> <span class="n">MemoryMapState</span><span class="o">.</span><span class="na">Factory</span><span class="o">(),</span> <span class="k">new</span> <span class="nf">Count</span><span class="o">(),</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;count&quot;</span><span class="o">))</span>                
+     <span class="n">topology</span><span class="o">.</span><span class="na">newStream</span><span class="o">(</span><span class="s">"spout1"</span><span class="o">,</span> <span class="n">spout</span><span class="o">)</span>
+       <span class="o">.</span><span class="na">each</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"sentence"</span><span class="o">),</span> <span class="k">new</span> <span class="n">Split</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"word"</span><span class="o">))</span>
+       <span class="o">.</span><span class="na">groupBy</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"word"</span><span class="o">))</span>
+       <span class="o">.</span><span class="na">persistentAggregate</span><span class="o">(</span><span class="k">new</span> <span class="n">MemoryMapState</span><span class="o">.</span><span class="na">Factory</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Count</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"count"</span><span class="o">))</span>                
        <span class="o">.</span><span class="na">parallelismHint</span><span class="o">(</span><span class="mi">6</span><span class="o">);</span>
 </code></pre></div>
 <p>Let&#39;s go through the code line by line. First a TridentTopology object is created, which exposes the interface for constructing Trident computations. TridentTopology has a method called newStream that creates a new stream of data in the topology reading from an input source. In this case, the input source is just the FixedBatchSpout defined from before. Input sources can also be queue brokers like Kestrel or Kafka. Trident keeps track of a small amount of state for each input source (metadata about what it has consumed) in Zookeeper, and the &quot;spout1&quot; string here specifies the node in Zookeeper where Trident should keep that metadata.</p>
@@ -130,10 +130,10 @@
 
 <p>Back to the example, the spout emits a stream containing one field called &quot;sentence&quot;. The next line of the topology definition applies the Split function to each tuple in the stream, taking the &quot;sentence&quot; field and splitting it into words. Each sentence tuple creates potentially many word tuples – for instance, the sentence &quot;the cow jumped over the moon&quot; creates six &quot;word&quot; tuples. Here&#39;s the definition of Split:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">class</span> <span class="nc">Split</span> <span class="kd">extends</span> <span class="n">BaseFunction</span> <span class="o">{</span>
-   <span class="kd">public</span> <span class="kt">void</span> <span class="nf">execute</span><span class="o">(</span><span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">,</span> <span class="n">TridentCollector</span> <span class="n">collector</span><span class="o">)</span> <span class="o">{</span>
+   <span class="kd">public</span> <span class="kt">void</span> <span class="n">execute</span><span class="o">(</span><span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">,</span> <span class="n">TridentCollector</span> <span class="n">collector</span><span class="o">)</span> <span class="o">{</span>
        <span class="n">String</span> <span class="n">sentence</span> <span class="o">=</span> <span class="n">tuple</span><span class="o">.</span><span class="na">getString</span><span class="o">(</span><span class="mi">0</span><span class="o">);</span>
-       <span class="k">for</span><span class="o">(</span><span class="n">String</span> <span class="nl">word:</span> <span class="n">sentence</span><span class="o">.</span><span class="na">split</span><span class="o">(</span><span class="s">&quot; &quot;</span><span class="o">))</span> <span class="o">{</span>
-           <span class="n">collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="k">new</span> <span class="nf">Values</span><span class="o">(</span><span class="n">word</span><span class="o">));</span>                
+       <span class="k">for</span><span class="o">(</span><span class="n">String</span> <span class="nl">word:</span> <span class="n">sentence</span><span class="o">.</span><span class="na">split</span><span class="o">(</span><span class="s">" "</span><span class="o">))</span> <span class="o">{</span>
+           <span class="n">collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="k">new</span> <span class="n">Values</span><span class="o">(</span><span class="n">word</span><span class="o">));</span>                
        <span class="o">}</span>
    <span class="o">}</span>
 <span class="o">}</span>
@@ -141,7 +141,7 @@
 <p>As you can see, it&#39;s simple: it grabs the sentence, splits it on whitespace, and emits a tuple for each word.</p>
 
 <p>The rest of the topology computes word count and keeps the results persistently stored. First the stream is grouped by the &quot;word&quot; field. Then, each group is persistently aggregated using the Count aggregator. The persistentAggregate function knows how to store and update the results of the aggregation in a source of state. In this example, the word counts are kept in memory, but this can be trivially swapped to use Memcached, Cassandra, or any other persistent store. Swapping this topology to store counts in Memcached is as simple as replacing the persistentAggregate line with this (using <a href="https://github.com/nathanmarz/trident-memcached">trident-memcached</a>), where &quot;serverLocations&quot; is a list of host/ports for the Memcached cluster:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="o">.</span><span class="na">persistentAggregate</span><span class="o">(</span><span class="n">MemcachedState</span><span class="o">.</span><span class="na">transactional</span><span class="o">(</span><span class="n">serverLocations</span><span class="o">),</span> <span class="k">new</span> <span class="nf">Count</span><span class="o">(),</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;count&quot;</span><span class="o">))</span>        
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="o">.</span><span class="na">persistentAggregate</span><span class="o">(</span><span class="n">MemcachedState</span><span class="o">.</span><span class="na">transactional</span><span class="o">(</span><span class="n">serverLocations</span><span class="o">),</span> <span class="k">new</span> <span class="n">Count</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"count"</span><span class="o">))</span>        
 <span class="n">MemcachedState</span><span class="o">.</span><span class="na">transactional</span><span class="o">()</span>
 </code></pre></div>
 <p>The values stored by persistentAggregate represent the aggregation of all batches ever emitted by the stream.</p>
@@ -151,19 +151,19 @@
 <p>The persistentAggregate method transforms a Stream into a TridentState object. In this case the TridentState object represents all the word counts. We will use this TridentState object to implement the distributed query portion of the computation.</p>
 
 <p>The next part of the topology implements a low-latency distributed query on the word counts. The query takes as input a whitespace-separated list of words and returns the sum of the counts for those words. These queries are executed just like normal RPC calls, except they are parallelized in the background. Here&#39;s an example of how you might invoke one of these queries:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">DRPCClient</span> <span class="n">client</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">DRPCClient</span><span class="o">(</span><span class="s">&quot;drpc.server.location&quot;</span><span class="o">,</span> <span class="mi">3772</span><span class="o">);</span>
-<span class="n">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="n">client</span><span class="o">.</span><span class="na">execute</span><span class="o">(</span><span class="s">&quot;words&quot;</span><span class="o">,</span> <span class="s">&quot;cat dog the man&quot;</span><span class="o">);</span>
-<span class="c1">// prints the JSON-encoded result, e.g.: &quot;[[5078]]&quot;</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">DRPCClient</span> <span class="n">client</span> <span class="o">=</span> <span class="k">new</span> <span class="n">DRPCClient</span><span class="o">(</span><span class="s">"drpc.server.location"</span><span class="o">,</span> <span class="mi">3772</span><span class="o">);</span>
+<span class="n">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="n">client</span><span class="o">.</span><span class="na">execute</span><span class="o">(</span><span class="s">"words"</span><span class="o">,</span> <span class="s">"cat dog the man"</span><span class="o">));</span>
+<span class="c1">// prints the JSON-encoded result, e.g.: "[[5078]]"</span>
 </code></pre></div>
 <p>As you can see, it looks just like a regular remote procedure call (RPC), except it&#39;s executing in parallel across a Storm cluster. The latency for small queries like this is typically around 10ms. More intense DRPC queries can take longer of course, although the latency largely depends on how many resources you have allocated for the computation.</p>
 
 <p>The implementation of the distributed query portion of the topology looks like this:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">topology</span><span class="o">.</span><span class="na">newDRPCStream</span><span class="o">(</span><span class="s">&quot;words&quot;</span><span class="o">)</span>
-       <span class="o">.</span><span class="na">each</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;args&quot;</span><span class="o">),</span> <span class="k">new</span> <span class="nf">Split</span><span class="o">(),</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;word&quot;</span><span class="o">))</span>
-       <span class="o">.</span><span class="na">groupBy</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;word&quot;</span><span class="o">))</span>
-       <span class="o">.</span><span class="na">stateQuery</span><span class="o">(</span><span class="n">wordCounts</span><span class="o">,</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;word&quot;</span><span class="o">),</span> <span class="k">new</span> <span class="nf">MapGet</span><span class="o">(),</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;count&quot;</span><span class="o">))</span>
-       <span class="o">.</span><span class="na">each</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;count&quot;</span><span class="o">),</span> <span class="k">new</span> <span class="nf">FilterNull</span><span class="o">())</span>
-       <span class="o">.</span><span class="na">aggregate</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;count&quot;</span><span class="o">),</span> <span class="k">new</span> <span class="nf">Sum</span><span class="o">(),</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;sum&quot;</span><span class="o">));</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">topology</span><span class="o">.</span><span class="na">newDRPCStream</span><span class="o">(</span><span class="s">"words"</span><span class="o">)</span>
+       <span class="o">.</span><span class="na">each</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"args"</span><span class="o">),</span> <span class="k">new</span> <span class="n">Split</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"word"</span><span class="o">))</span>
+       <span class="o">.</span><span class="na">groupBy</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"word"</span><span class="o">))</span>
+       <span class="o">.</span><span class="na">stateQuery</span><span class="o">(</span><span class="n">wordCounts</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"word"</span><span class="o">),</span> <span class="k">new</span> <span class="n">MapGet</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"count"</span><span class="o">))</span>
+       <span class="o">.</span><span class="na">each</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"count"</span><span class="o">),</span> <span class="k">new</span> <span class="n">FilterNull</span><span class="o">())</span>
+       <span class="o">.</span><span class="na">aggregate</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"count"</span><span class="o">),</span> <span class="k">new</span> <span class="n">Sum</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"sum"</span><span class="o">));</span>
 </code></pre></div>
 <p>The same TridentTopology object is used to create the DRPC stream, and the function is named &quot;words&quot;. The function name corresponds to the function name given in the first argument of execute when using a DRPCClient.</p>
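What this DRPC stream computes can be sketched in plain Java (illustrative only: the real pipeline runs partitioned across the cluster, and the map here just stands in for the wordCounts TridentState):

```java
import java.util.Map;

// Mirrors the DRPC stream's steps: Split -> MapGet -> FilterNull -> Sum.
class WordCountQuery {
    private final Map<String, Long> wordCounts; // stand-in for the TridentState

    WordCountQuery(Map<String, Long> wordCounts) {
        this.wordCounts = wordCounts;
    }

    long execute(String args) {
        long sum = 0;
        for (String word : args.split(" ")) {   // Split the "args" field
            Long count = wordCounts.get(word);  // MapGet: null if absent
            if (count != null) {                // FilterNull
                sum += count;                   // Sum
            }
        }
        return sum;
    }
}
```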
 
@@ -192,17 +192,17 @@
 <span class="n">TridentState</span> <span class="n">tweetersToFollowers</span> <span class="o">=</span>
        <span class="n">topology</span><span class="o">.</span><span class="na">newStaticState</span><span class="o">(</span><span class="n">getTweeterToFollowersState</span><span class="o">());</span>
 
-<span class="n">topology</span><span class="o">.</span><span class="na">newDRPCStream</span><span class="o">(</span><span class="s">&quot;reach&quot;</span><span class="o">)</span>
-       <span class="o">.</span><span class="na">stateQuery</span><span class="o">(</span><span class="n">urlToTweeters</span><span class="o">,</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;args&quot;</span><span class="o">),</span> <span class="k">new</span> <span class="nf">MapGet</span><span class="o">(),</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;tweeters&quot;</span><span class="o">))</span>
-       <span class="o">.</span><span class="na">each</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;tweeters&quot;</span><span class="o">),</span> <span class="k">new</span> <span class="nf">ExpandList</span><span class="o">(),</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;tweeter&quot;</span><span class="o">))</span>
+<span class="n">topology</span><span class="o">.</span><span class="na">newDRPCStream</span><span class="o">(</span><span class="s">"reach"</span><span class="o">)</span>
+       <span class="o">.</span><span class="na">stateQuery</span><span class="o">(</span><span class="n">urlToTweeters</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"args"</span><span class="o">),</span> <span class="k">new</span> <span class="n">MapGet</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"tweeters"</span><span class="o">))</span>
+       <span class="o">.</span><span class="na">each</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"tweeters"</span><span class="o">),</span> <span class="k">new</span> <span class="n">ExpandList</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"tweeter"</span><span class="o">))</span>
        <span class="o">.</span><span class="na">shuffle</span><span class="o">()</span>
-       <span class="o">.</span><span class="na">stateQuery</span><span class="o">(</span><span class="n">tweetersToFollowers</span><span class="o">,</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;tweeter&quot;</span><span class="o">),</span> <span class="k">new</span> <span class="nf">MapGet</span><span class="o">(),</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;followers&quot;</span><span class="o">))</span>
+       <span class="o">.</span><span class="na">stateQuery</span><span class="o">(</span><span class="n">tweetersToFollowers</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"tweeter"</span><span class="o">),</span> <span class="k">new</span> <span class="n">MapGet</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"followers"</span><span class="o">))</span>
        <span class="o">.</span><span class="na">parallelismHint</span><span class="o">(</span><span class="mi">200</span><span class="o">)</span>
-       <span class="o">.</span><span class="na">each</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;followers&quot;</span><span class="o">),</span> <span class="k">new</span> <span class="nf">ExpandList</span><span class="o">(),</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;follower&quot;</span><span class="o">))</span>
-       <span class="o">.</span><span class="na">groupBy</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;follower&quot;</span><span class="o">))</span>
-       <span class="o">.</span><span class="na">aggregate</span><span class="o">(</span><span class="k">new</span> <span class="nf">One</span><span class="o">(),</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;one&quot;</span><span class="o">))</span>
+       <span class="o">.</span><span class="na">each</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"followers"</span><span class="o">),</span> <span class="k">new</span> <span class="n">ExpandList</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"follower"</span><span class="o">))</span>
+       <span class="o">.</span><span class="na">groupBy</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"follower"</span><span class="o">))</span>
+       <span class="o">.</span><span class="na">aggregate</span><span class="o">(</span><span class="k">new</span> <span class="n">One</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"one"</span><span class="o">))</span>
        <span class="o">.</span><span class="na">parallelismHint</span><span class="o">(</span><span class="mi">20</span><span class="o">)</span>
-       <span class="o">.</span><span class="na">aggregate</span><span class="o">(</span><span class="k">new</span> <span class="nf">Count</span><span class="o">(),</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;reach&quot;</span><span class="o">));</span>
+       <span class="o">.</span><span class="na">aggregate</span><span class="o">(</span><span class="k">new</span> <span class="n">Count</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"reach"</span><span class="o">));</span>
 </code></pre></div>
 <p>The topology creates TridentState objects representing each external database using the newStaticState method. These can then be queried in the topology. Like all sources of state, queries to these databases will be automatically batched for maximum efficiency.</p>
 
@@ -212,15 +212,15 @@
 
 <p>Next, the set of followers is uniqued and counted. This is done in two steps. First a &quot;group by&quot; is done on the batch by &quot;follower&quot;, running the &quot;One&quot; aggregator on each group. The &quot;One&quot; aggregator simply emits a single tuple containing the number one for each group. Then, the ones are summed together to get the unique count of the followers set. Here&#39;s the definition of the &quot;One&quot; aggregator:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">class</span> <span class="nc">One</span> <span class="kd">implements</span> <span class="n">CombinerAggregator</span><span class="o">&lt;</span><span class="n">Integer</span><span class="o">&gt;</span> <span class="o">{</span>
-   <span class="kd">public</span> <span class="n">Integer</span> <span class="nf">init</span><span class="o">(</span><span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">)</span> <span class="o">{</span>
+   <span class="kd">public</span> <span class="n">Integer</span> <span class="n">init</span><span class="o">(</span><span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">)</span> <span class="o">{</span>
        <span class="k">return</span> <span class="mi">1</span><span class="o">;</span>
    <span class="o">}</span>
 
-   <span class="kd">public</span> <span class="n">Integer</span> <span class="nf">combine</span><span class="o">(</span><span class="n">Integer</span> <span class="n">val1</span><span class="o">,</span> <span class="n">Integer</span> <span class="n">val2</span><span class="o">)</span> <span class="o">{</span>
+   <span class="kd">public</span> <span class="n">Integer</span> <span class="n">combine</span><span class="o">(</span><span class="n">Integer</span> <span class="n">val1</span><span class="o">,</span> <span class="n">Integer</span> <span class="n">val2</span><span class="o">)</span> <span class="o">{</span>
        <span class="k">return</span> <span class="mi">1</span><span class="o">;</span>
    <span class="o">}</span>
 
-   <span class="kd">public</span> <span class="n">Integer</span> <span class="nf">zero</span><span class="o">()</span> <span class="o">{</span>
+   <span class="kd">public</span> <span class="n">Integer</span> <span class="n">zero</span><span class="o">()</span> <span class="o">{</span>
        <span class="k">return</span> <span class="mi">1</span><span class="o">;</span>
    <span class="o">}</span>        
 <span class="o">}</span>
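The "emit a one per group, then sum" trick can be sketched in plain Java without any Storm dependency (names such as `uniqueCount` and the sample follower list are illustrative, not part of the Trident API):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ReachSketch {
    // Mimics groupBy("follower") + One() followed by Sum():
    // each distinct follower contributes exactly one "1",
    // and summing the ones yields the unique follower count.
    static int uniqueCount(List<String> followers) {
        Map<String, Integer> onePerGroup = new HashMap<>();
        for (String f : followers) {
            onePerGroup.put(f, 1);   // the "One" aggregator: 1 per group
        }
        int reach = 0;
        for (int one : onePerGroup.values()) {
            reach += one;            // the "Sum" aggregator
        }
        return reach;
    }

    public static void main(String[] args) {
        // "sally" appears twice but is counted once.
        System.out.println(uniqueCount(Arrays.asList("sally", "bob", "sally")));
    }
}
```

Running this prints `2`: duplicates collapse in the group-by step, so the sum of ones equals the number of distinct followers.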
@@ -234,11 +234,11 @@
 <p>The Trident data model is the TridentTuple which is a named list of values. During a topology, tuples are incrementally built up through a sequence of operations. Operations generally take in a set of input fields and emit a set of &quot;function fields&quot;. The input fields are used to select a subset of the tuple as input to the operation, while the &quot;function fields&quot; name the fields the operation emits.</p>
 
 <p>Consider this example. Suppose you have a stream called &quot;stream&quot; that contains the fields &quot;x&quot;, &quot;y&quot;, and &quot;z&quot;. To run a filter MyFilter that takes in &quot;y&quot; as input, you would say:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">stream</span><span class="o">.</span><span class="na">each</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;y&quot;</span><span class="o">),</span> <span class="k">new</span> <span class="nf">MyFilter</span><span class="o">())</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">stream</span><span class="o">.</span><span class="na">each</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"y"</span><span class="o">),</span> <span class="k">new</span> <span class="n">MyFilter</span><span class="o">())</span>
 </code></pre></div>
 <p>Suppose the implementation of MyFilter is this:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">class</span> <span class="nc">MyFilter</span> <span class="kd">extends</span> <span class="n">BaseFilter</span> <span class="o">{</span>
-   <span class="kd">public</span> <span class="kt">boolean</span> <span class="nf">isKeep</span><span class="o">(</span><span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">)</span> <span class="o">{</span>
+   <span class="kd">public</span> <span class="kt">boolean</span> <span class="n">isKeep</span><span class="o">(</span><span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">)</span> <span class="o">{</span>
        <span class="k">return</span> <span class="n">tuple</span><span class="o">.</span><span class="na">getInteger</span><span class="o">(</span><span class="mi">0</span><span class="o">)</span> <span class="o">&lt;</span> <span class="mi">10</span><span class="o">;</span>
    <span class="o">}</span>
 <span class="o">}</span>
@@ -247,26 +247,26 @@
 
 <p>Let&#39;s now look at how &quot;function fields&quot; work. Suppose you had this function:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">class</span> <span class="nc">AddAndMultiply</span> <span class="kd">extends</span> <span class="n">BaseFunction</span> <span class="o">{</span>
-   <span class="kd">public</span> <span class="kt">void</span> <span class="nf">execute</span><span class="o">(</span><span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">,</span> <span class="n">TridentCollector</span> <span class="n">collector</span><span class="o">)</span> <span class="o">{</span>
+   <span class="kd">public</span> <span class="kt">void</span> <span class="n">execute</span><span class="o">(</span><span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">,</span> <span class="n">TridentCollector</span> <span class="n">collector</span><span class="o">)</span> <span class="o">{</span>
        <span class="kt">int</span> <span class="n">i1</span> <span class="o">=</span> <span class="n">tuple</span><span class="o">.</span><span class="na">getInteger</span><span class="o">(</span><span class="mi">0</span><span class="o">);</span>
        <span class="kt">int</span> <span class="n">i2</span> <span class="o">=</span> <span class="n">tuple</span><span class="o">.</span><span class="na">getInteger</span><span class="o">(</span><span class="mi">1</span><span class="o">);</span>
-       <span class="n">collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="k">new</span> <span class="nf">Values</span><span class="o">(</span><span class="n">i1</span> <span class="o">+</span> <span class="n">i2</span><span class="o">,</span> <span class="n">i1</span> <span class="o">*</span> <span class="n">i2</span><span class="o">));</span>
+       <span class="n">collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="k">new</span> <span class="n">Values</span><span class="o">(</span><span class="n">i1</span> <span class="o">+</span> <span class="n">i2</span><span class="o">,</span> <span class="n">i1</span> <span class="o">*</span> <span class="n">i2</span><span class="o">));</span>
    <span class="o">}</span>
 <span class="o">}</span>
 </code></pre></div>
 <p>This function takes two numbers as input and emits two new values: the addition of the numbers and the multiplication of the numbers. Suppose you had a stream with the fields &quot;x&quot;, &quot;y&quot;, and &quot;z&quot;. You would use this function like this:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">stream</span><span class="o">.</span><span class="na">each</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;x&quot;</span><span class="o">,</span> <span class="s">&quot;y&quot;</span><span class="o">),</span> <span class="k">new</span> <span class="nf">AddAndMultiply</span><span class="o">(),</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;added&quot;</span><span class="o">,</span> <span class="s">&quot;multiplied&quot;</span><span class="o">));</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">stream</span><span class="o">.</span><span class="na">each</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"x"</span><span class="o">,</span> <span class="s">"y"</span><span class="o">),</span> <span class="k">new</span> <span class="n">AddAndMultiply</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"added"</span><span class="o">,</span> <span class="s">"multiplied"</span><span class="o">));</span>
 </code></pre></div>
 <p>The output of functions is additive: the fields are added to the input tuple. So the output of this each call would contain tuples with the five fields &quot;x&quot;, &quot;y&quot;, &quot;z&quot;, &quot;added&quot;, and &quot;multiplied&quot;. &quot;added&quot; corresponds to the first value emitted by AddAndMultiply, while &quot;multiplied&quot; corresponds to the second value.</p>
 
 <p>With aggregators, on the other hand, the function fields replace the input tuples. So if you had a stream containing the fields &quot;val1&quot; and &quot;val2&quot;, and you did this:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">stream</span><span class="o">.</span><span class="na">aggregate</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;val2&quot;</span><span class="o">),</span> <span class="k">new</span> <span class="nf">Sum</span><span class="o">(),</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;sum&quot;</span><span class="o">))</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">stream</span><span class="o">.</span><span class="na">aggregate</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"val2"</span><span class="o">),</span> <span class="k">new</span> <span class="n">Sum</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"sum"</span><span class="o">))</span>
 </code></pre></div>
 <p>The output stream would only contain a single tuple with a single field called &quot;sum&quot;, representing the sum of all &quot;val2&quot; fields in that batch.</p>
 
 <p>With grouped streams, the output will contain the grouping fields followed by the fields emitted by the aggregator. For example:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">stream</span><span class="o">.</span><span class="na">groupBy</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;val1&quot;</span><span class="o">))</span>
-     <span class="o">.</span><span class="na">aggregate</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;val2&quot;</span><span class="o">),</span> <span class="k">new</span> <span class="nf">Sum</span><span class="o">(),</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;sum&quot;</span><span class="o">))</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">stream</span><span class="o">.</span><span class="na">groupBy</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"val1"</span><span class="o">))</span>
+     <span class="o">.</span><span class="na">aggregate</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"val2"</span><span class="o">),</span> <span class="k">new</span> <span class="n">Sum</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"sum"</span><span class="o">))</span>
 </code></pre></div>
 <p>In this example, the output will contain the fields &quot;val1&quot; and &quot;sum&quot;.</p>
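The grouped-aggregation semantics above can be simulated in plain Java (no Storm dependency; the field names "val1"/"val2" follow the example, everything else is illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class GroupedSumSketch {
    public static void main(String[] args) {
        // A batch of tuples with fields ["val1", "val2"].
        List<Object[]> batch = Arrays.asList(
            new Object[]{"a", 3},
            new Object[]{"b", 4},
            new Object[]{"a", 5});

        // Equivalent of groupBy(new Fields("val1"))
        //              .aggregate(new Fields("val2"), new Sum(), new Fields("sum"))
        Map<String, Integer> sums = new TreeMap<>();
        for (Object[] t : batch) {
            sums.merge((String) t[0], (Integer) t[1], Integer::sum);
        }

        // Output tuples carry the grouping field plus the aggregator's
        // function field, i.e. ["val1", "sum"] -- the input "val2" is gone.
        sums.forEach((k, v) -> System.out.println(k + " " + v));
    }
}
```

This prints `a 8` and `b 4`: one output tuple per group, holding the grouping key and the aggregated value, mirroring how the aggregator's function fields replace the input fields.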
 
diff --git a/_site/documentation/Troubleshooting.html b/_site/documentation/Troubleshooting.html
index ab967f2..e1d392c 100644
--- a/_site/documentation/Troubleshooting.html
+++ b/_site/documentation/Troubleshooting.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -157,7 +157,7 @@
 <li>Try deleting the local dirs for the supervisors and restarting the daemons. Supervisors create a unique id for themselves and store it locally. When that id is copied to other nodes, Storm gets confused. </li>
 </ul>
 
-<h3 id="&quot;multiple-defaults.yaml-found&quot;-error">&quot;Multiple defaults.yaml found&quot; error</h3>
+<h3 id="quot-multiple-defaults-yaml-found-quot-error">&quot;Multiple defaults.yaml found&quot; error</h3>
 
 <p>Symptoms:</p>
 
@@ -171,7 +171,7 @@
 <li>You&#39;re most likely including the Storm jars inside your topology jar. When packaging your topology jar, don&#39;t include the Storm jars as Storm will put those on the classpath for you.</li>
 </ul>
 
-<h3 id="&quot;nosuchmethoderror&quot;-when-running-storm-jar">&quot;NoSuchMethodError&quot; when running storm jar</h3>
+<h3 id="quot-nosuchmethoderror-quot-when-running-storm-jar">&quot;NoSuchMethodError&quot; when running storm jar</h3>
 
 <p>Symptoms:</p>
 
@@ -192,7 +192,7 @@
 <ul>
 <li>At runtime, you get a stack trace like the following:</li>
 </ul>
-<div class="highlight"><pre><code class="language-text" data-lang="text">java.lang.RuntimeException: java.util.ConcurrentModificationException
+<div class="highlight"><pre><code class="language-" data-lang="">java.lang.RuntimeException: java.util.ConcurrentModificationException
     at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:84)
     at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:55)
     at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:56)
@@ -233,7 +233,7 @@
 <ul>
 <li>You get a NullPointerException that looks something like:</li>
 </ul>
-<div class="highlight"><pre><code class="language-text" data-lang="text">java.lang.RuntimeException: java.lang.NullPointerException
+<div class="highlight"><pre><code class="language-" data-lang="">java.lang.RuntimeException: java.lang.NullPointerException
     at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:84)
     at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:55)
     at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:56)
@@ -252,7 +252,7 @@
     ... 6 more
 </code></pre></div>
 <p>or </p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">java.lang.RuntimeException: java.lang.NullPointerException
+<div class="highlight"><pre><code class="language-" data-lang="">java.lang.RuntimeException: java.lang.NullPointerException
         at
 backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:128)
 ~[storm-core-0.9.3.jar:0.9.3]
diff --git a/_site/documentation/Understanding-the-parallelism-of-a-Storm-topology.html b/_site/documentation/Understanding-the-parallelism-of-a-Storm-topology.html
index 9ca3ee2..f2d2b16 100644
--- a/_site/documentation/Understanding-the-parallelism-of-a-Storm-topology.html
+++ b/_site/documentation/Understanding-the-parallelism-of-a-Storm-topology.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -90,7 +90,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="what-makes-a-running-topology:-worker-processes,-executors-and-tasks">What makes a running topology: worker processes, executors and tasks</h2>
+<h2 id="what-makes-a-running-topology-worker-processes-executors-and-tasks">What makes a running topology: worker processes, executors and tasks</h2>
 
 <p>Storm distinguishes between the following three main entities that are used to actually run a topology in a Storm cluster:</p>
 
@@ -128,7 +128,7 @@
 </ul></li>
 </ul>
 
-<h3 id="number-of-executors-(threads)">Number of executors (threads)</h3>
+<h3 id="number-of-executors-threads">Number of executors (threads)</h3>
 
 <ul>
 <li>Description: How many executors to spawn <em>per component</em>.</li>
@@ -155,9 +155,9 @@
 </ul>
 
 <p>Here is an example code snippet to show these settings in practice:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">topologyBuilder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">&quot;green-bolt&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">GreenBolt</span><span class="o">(),</span> <span class="mi">2</span><span class="o">)</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">topologyBuilder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">"green-bolt"</span><span class="o">,</span> <span class="k">new</span> <span class="n">GreenBolt</span><span class="o">(),</span> <span class="mi">2</span><span class="o">)</span>
                <span class="o">.</span><span class="na">setNumTasks</span><span class="o">(</span><span class="mi">4</span><span class="o">)</span>
-               <span class="o">.</span><span class="na">shuffleGrouping</span><span class="o">(</span><span class="s">&quot;blue-spout&quot;</span><span class="o">);</span>
+               <span class="o">.</span><span class="na">shuffleGrouping</span><span class="o">(</span><span class="s">"blue-spout"</span><span class="o">);</span>
 </code></pre></div>
 <p>In the above code we configured Storm to run the bolt <code>GreenBolt</code> with an initial number of two executors and four associated tasks. Storm will run two tasks per executor (thread). If you do not explicitly configure the number of tasks, Storm will run by default one task per executor.</p>
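The arithmetic behind that snippet can be checked with a trivial stand-alone calculation (the numbers come from the example; the class name is illustrative):

```java
public class ParallelismSketch {
    public static void main(String[] args) {
        int executors = 2;  // parallelism hint: setBolt("green-bolt", new GreenBolt(), 2)
        int tasks = 4;      // .setNumTasks(4)

        // Tasks are divided evenly across executors.
        System.out.println("tasks per executor: " + (tasks / executors));

        // Without setNumTasks, Storm defaults to one task per executor.
        System.out.println("default tasks: " + executors);
    }
}
```

This prints `tasks per executor: 2` and `default tasks: 2`, matching the description: four tasks over two executors means two tasks per thread, and omitting `setNumTasks` would have given two tasks total.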
 
@@ -168,20 +168,20 @@
 <p><img src="images/example-of-a-running-topology.png" alt="Example of a running topology in Storm"></p>
 
 <p>The <code>GreenBolt</code> was configured as per the code snippet above whereas <code>BlueSpout</code> and <code>YellowBolt</code> only set the parallelism hint (number of executors). Here is the relevant code:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">Config</span> <span class="n">conf</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">Config</span><span class="o">();</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">Config</span> <span class="n">conf</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Config</span><span class="o">();</span>
 <span class="n">conf</span><span class="o">.</span><span class="na">setNumWorkers</span><span class="o">(</span><span class="mi">2</span><span class="o">);</span> <span class="c1">// use two worker processes</span>
 
-<span class="n">topologyBuilder</span><span class="o">.</span><span class="na">setSpout</span><span class="o">(</span><span class="s">&quot;blue-spout&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">BlueSpout</span><span class="o">(),</span> <span class="mi">2</span><span class="o">);</span> <span class="c1">// set parallelism hint to 2</span>
+<span class="n">topologyBuilder</span><span class="o">.</span><span class="na">setSpout</span><span class="o">(</span><span class="s">"blue-spout"</span><span class="o">,</span> <span class="k">new</span> <span class="n">BlueSpout</span><span class="o">(),</span> <span class="mi">2</span><span class="o">);</span> <span class="c1">// set parallelism hint to 2</span>
 
-<span class="n">topologyBuilder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">&quot;green-bolt&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">GreenBolt</span><span class="o">(),</span> <span class="mi">2</span><span class="o">)</span>
+<span class="n">topologyBuilder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">"green-bolt"</span><span class="o">,</span> <span class="k">new</span> <span class="n">GreenBolt</span><span class="o">(),</span> <span class="mi">2</span><span class="o">)</span>
                <span class="o">.</span><span class="na">setNumTasks</span><span class="o">(</span><span class="mi">4</span><span class="o">)</span>
-               <span class="o">.</span><span class="na">shuffleGrouping</span><span class="o">(</span><span class="s">&quot;blue-spout&quot;</span><span class="o">);</span>
+               <span class="o">.</span><span class="na">shuffleGrouping</span><span class="o">(</span><span class="s">"blue-spout"</span><span class="o">);</span>
 
-<span class="n">topologyBuilder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">&quot;yellow-bolt&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">YellowBolt</span><span class="o">(),</span> <span class="mi">6</span><span class="o">)</span>
-               <span class="o">.</span><span class="na">shuffleGrouping</span><span class="o">(</span><span class="s">&quot;green-bolt&quot;</span><span class="o">);</span>
+<span class="n">topologyBuilder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">"yellow-bolt"</span><span class="o">,</span> <span class="k">new</span> <span class="n">YellowBolt</span><span class="o">(),</span> <span class="mi">6</span><span class="o">)</span>
+               <span class="o">.</span><span class="na">shuffleGrouping</span><span class="o">(</span><span class="s">"green-bolt"</span><span class="o">);</span>
 
 <span class="n">StormSubmitter</span><span class="o">.</span><span class="na">submitTopology</span><span class="o">(</span>
-        <span class="s">&quot;mytopology&quot;</span><span class="o">,</span>
+        <span class="s">"mytopology"</span><span class="o">,</span>
         <span class="n">conf</span><span class="o">,</span>
         <span class="n">topologyBuilder</span><span class="o">.</span><span class="na">createTopology</span><span class="o">()</span>
     <span class="o">);</span>
@@ -204,9 +204,9 @@
 </ol>
 
 <p>Here is an example of using the CLI tool:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">## Reconfigure the topology &quot;mytopology&quot; to use 5 worker processes,
-## the spout &quot;blue-spout&quot; to use 3 executors and
-## the bolt &quot;yellow-bolt&quot; to use 10 executors.
+<div class="highlight"><pre><code class="language-" data-lang="">## Reconfigure the topology "mytopology" to use 5 worker processes,
+## the spout "blue-spout" to use 3 executors and
+## the bolt "yellow-bolt" to use 10 executors.
 
 $ storm rebalance mytopology -n 5 -e blue-spout=3 -e yellow-bolt=10
 </code></pre></div>
diff --git a/_site/documentation/Using-non-JVM-languages-with-Storm.html b/_site/documentation/Using-non-JVM-languages-with-Storm.html
index d863a39..2483509 100644
--- a/_site/documentation/Using-non-JVM-languages-with-Storm.html
+++ b/_site/documentation/Using-non-JVM-languages-with-Storm.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -127,7 +127,7 @@
 <p>The right place to start is src/storm.thrift. Since Storm topologies are just Thrift structures, and Nimbus is a Thrift daemon, you can create and submit topologies in any language.</p>
 
 <p>When you create the Thrift structs for spouts and bolts, the code for the spout or bolt is specified in the ComponentObject struct:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">union ComponentObject {
+<div class="highlight"><pre><code class="language-" data-lang="">union ComponentObject {
   1: binary serialized_java;
   2: ShellComponent shell;
   3: JavaObject java_object;
@@ -136,13 +136,13 @@
 <p>For a non-JVM DSL, you would want to make use of &quot;2&quot; and &quot;3&quot;. ShellComponent lets you specify a script to run that component (e.g., your python code). And JavaObject lets you specify native java spouts and bolts for the component (and Storm will use reflection to create that spout or bolt).</p>
 
 <p>There&#39;s a &quot;storm shell&quot; command that will help with submitting a topology. Its usage is like this:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">storm shell resources/ python topology.py arg1 arg2
+<div class="highlight"><pre><code class="language-" data-lang="">storm shell resources/ python topology.py arg1 arg2
 </code></pre></div>
 <p>storm shell will then package resources/ into a jar, upload the jar to Nimbus, and call your topology.py script like this:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">python topology.py arg1 arg2 {nimbus-host} {nimbus-port} {uploaded-jar-location}
+<div class="highlight"><pre><code class="language-" data-lang="">python topology.py arg1 arg2 {nimbus-host} {nimbus-port} {uploaded-jar-location}
 </code></pre></div>
 <p>Then you can connect to Nimbus using the Thrift API and submit the topology, passing {uploaded-jar-location} into the submitTopology method. For reference, here&#39;s the submitTopology definition:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">void submitTopology(1: string name, 2: string uploadedJarLocation, 3: string jsonConf, 4: StormTopology topology)
+<div class="highlight"><pre><code class="language-" data-lang="">void submitTopology(1: string name, 2: string uploadedJarLocation, 3: string jsonConf, 4: StormTopology topology)
     throws (1: AlreadyAliveException e, 2: InvalidTopologyException ite);
 </code></pre></div>
 
diff --git a/_site/documentation/flux.html b/_site/documentation/flux.html
index 609888c..01126ff 100644
--- a/_site/documentation/flux.html
+++ b/_site/documentation/flux.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -114,12 +114,13 @@
 developer-intensive.</p>
 
 <p>Have you ever found yourself repeating this pattern?:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">static</span> <span class="kt">void</span> <span class="nf">main</span><span class="o">(</span><span class="n">String</span><span class="o">[]</span> <span class="n">args</span><span class="o">)</span> <span class="kd">throws</span> <span class="n">Exception</span> <span class="o">{</span>
-    <span class="c1">// logic to determine if we&#39;re running locally or not...</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java">
+<span class="kd">public</span> <span class="kd">static</span> <span class="kt">void</span> <span class="nf">main</span><span class="p">(</span><span class="n">String</span><span class="o">[]</span> <span class="n">args</span><span class="o">)</span> <span class="kd">throws</span> <span class="n">Exception</span> <span class="o">{</span>
+    <span class="c1">// logic to determine if we're running locally or not...</span>
     <span class="c1">// create necessary config options...</span>
     <span class="kt">boolean</span> <span class="n">runLocal</span> <span class="o">=</span> <span class="n">shouldRunLocal</span><span class="o">();</span>
     <span class="k">if</span><span class="o">(</span><span class="n">runLocal</span><span class="o">){</span>
-        <span class="n">LocalCluster</span> <span class="n">cluster</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">LocalCluster</span><span class="o">();</span>
+        <span class="n">LocalCluster</span> <span class="n">cluster</span> <span class="o">=</span> <span class="k">new</span> <span class="n">LocalCluster</span><span class="o">();</span>
         <span class="n">cluster</span><span class="o">.</span><span class="na">submitTopology</span><span class="o">(</span><span class="n">name</span><span class="o">,</span> <span class="n">conf</span><span class="o">,</span> <span class="n">topology</span><span class="o">);</span>
     <span class="o">}</span> <span class="k">else</span> <span class="o">{</span>
         <span class="n">StormSubmitter</span><span class="o">.</span><span class="na">submitTopology</span><span class="o">(</span><span class="n">name</span><span class="o">,</span> <span class="n">conf</span><span class="o">,</span> <span class="n">topology</span><span class="o">);</span>
@@ -167,19 +168,19 @@
 <li>Node.js 0.10.x or later</li>
 </ul>
 
-<h4 id="building-with-unit-tests-enabled:">Building with unit tests enabled:</h4>
-<div class="highlight"><pre><code class="language-text" data-lang="text">mvn clean install
+<h4 id="building-with-unit-tests-enabled">Building with unit tests enabled:</h4>
+<div class="highlight"><pre><code class="language-" data-lang="">mvn clean install
 </code></pre></div>
-<h4 id="building-with-unit-tests-disabled:">Building with unit tests disabled:</h4>
+<h4 id="building-with-unit-tests-disabled">Building with unit tests disabled:</h4>
 
 <p>If you would like to build Flux without installing Python or Node.js you can simply skip the unit tests:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">mvn clean install -DskipTests=true
+<div class="highlight"><pre><code class="language-" data-lang="">mvn clean install -DskipTests=true
 </code></pre></div>
 <p>Note that if you plan on using Flux to deploy topologies to a remote cluster, you will still need to have Python
 installed since it is required by Apache Storm.</p>
 
-<h4 id="building-with-integration-tests-enabled:">Building with integration tests enabled:</h4>
-<div class="highlight"><pre><code class="language-text" data-lang="text">mvn clean install -DskipIntegration=false
+<h4 id="building-with-integration-tests-enabled">Building with integration tests enabled:</h4>
+<div class="highlight"><pre><code class="language-" data-lang="">mvn clean install -DskipIntegration=false
 </code></pre></div>
 <h3 id="packaging-with-maven">Packaging with Maven</h3>
 
@@ -232,9 +233,9 @@
                     <span class="nt">&lt;configuration&gt;</span>
                         <span class="nt">&lt;transformers&gt;</span>
                             <span class="nt">&lt;transformer</span>
-                                    <span class="na">implementation=</span><span class="s">&quot;org.apache.maven.plugins.shade.resource.ServicesResourceTransformer&quot;</span><span class="nt">/&gt;</span>
+                                    <span class="na">implementation=</span><span class="s">"org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"</span><span class="nt">/&gt;</span>
                             <span class="nt">&lt;transformer</span>
-                                    <span class="na">implementation=</span><span class="s">&quot;org.apache.maven.plugins.shade.resource.ManifestResourceTransformer&quot;</span><span class="nt">&gt;</span>
+                                    <span class="na">implementation=</span><span class="s">"org.apache.maven.plugins.shade.resource.ManifestResourceTransformer"</span><span class="nt">&gt;</span>
                                 <span class="nt">&lt;mainClass&gt;</span>org.apache.storm.flux.Flux<span class="nt">&lt;/mainClass&gt;</span>
                             <span class="nt">&lt;/transformer&gt;</span>
                         <span class="nt">&lt;/transformers&gt;</span>
@@ -251,9 +252,10 @@
 or remotely using the <code>storm jar</code> command. For example, if your fat jar is named <code>myTopology-0.1.0-SNAPSHOT.jar</code> you
 could run it locally with the command:</p>
 <div class="highlight"><pre><code class="language-bash" data-lang="bash">storm jar myTopology-0.1.0-SNAPSHOT.jar org.apache.storm.flux.Flux --local my_config.yaml
+
 </code></pre></div>
 <h3 id="command-line-options">Command line options</h3>
-<div class="highlight"><pre><code class="language-text" data-lang="text">usage: storm jar &lt;my_topology_uber_jar.jar&gt; org.apache.storm.flux.Flux
+<div class="highlight"><pre><code class="language-" data-lang="">usage: storm jar &lt;my_topology_uber_jar.jar&gt; org.apache.storm.flux.Flux
              [options] &lt;topology-config.yaml&gt;
  -d,--dry-run                 Do not run or deploy the topology. Just
                               build, validate, and print information about
@@ -290,7 +292,7 @@
 <div class="highlight"><pre><code class="language-bash" data-lang="bash">storm jar myTopology-0.1.0-SNAPSHOT.jar org.apache.storm.flux.Flux --remote my_config.yaml -c nimbus.host<span class="o">=</span>localhost
 </code></pre></div>
 <h3 id="sample-output">Sample output</h3>
-<div class="highlight"><pre><code class="language-text" data-lang="text">███████╗██╗     ██╗   ██╗██╗  ██╗
+<div class="highlight"><pre><code class="language-" data-lang="">███████╗██╗     ██╗   ██╗██╗  ██╗
 ██╔════╝██║     ██║   ██║╚██╗██╔╝
 █████╗  ██║     ██║   ██║ ╚███╔╝
 ██╔══╝  ██║     ██║   ██║ ██╔██╗
@@ -313,7 +315,7 @@
 splitsentence --FIELDS--&gt; count
 count --SHUFFLE--&gt; log
 --------------------------------------
-Submitting topology: &#39;shell-topology&#39; to remote cluster...
+Submitting topology: 'shell-topology' to remote cluster...
 </code></pre></div>
 <h2 id="yaml-configuration">YAML Configuration</h2>
 
@@ -338,41 +340,43 @@
 </ol>
 
 <p>For example, here is a simple definition of a wordcount topology using the YAML DSL:</p>
-<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="s">&quot;yaml-topology&quot;</span>
-<span class="l-Scalar-Plain">config</span><span class="p-Indicator">:</span>
-  <span class="l-Scalar-Plain">topology.workers</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">1</span>
+<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="s">name</span><span class="pi">:</span> <span class="s2">"</span><span class="s">yaml-topology"</span>
+<span class="s">config</span><span class="pi">:</span>
+  <span class="s">topology.workers</span><span class="pi">:</span> <span class="s">1</span>
 
 <span class="c1"># spout definitions</span>
-<span class="l-Scalar-Plain">spouts</span><span class="p-Indicator">:</span>
-  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">id</span><span class="p-Indicator">:</span> <span class="s">&quot;spout-1&quot;</span>
-    <span class="l-Scalar-Plain">className</span><span class="p-Indicator">:</span> <span class="s">&quot;backtype.storm.testing.TestWordSpout&quot;</span>
-    <span class="l-Scalar-Plain">parallelism</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">1</span>
+<span class="s">spouts</span><span class="pi">:</span>
+  <span class="pi">-</span> <span class="s">id</span><span class="pi">:</span> <span class="s2">"</span><span class="s">spout-1"</span>
+    <span class="s">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">backtype.storm.testing.TestWordSpout"</span>
+    <span class="s">parallelism</span><span class="pi">:</span> <span class="s">1</span>
 
 <span class="c1"># bolt definitions</span>
-<span class="l-Scalar-Plain">bolts</span><span class="p-Indicator">:</span>
-  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">id</span><span class="p-Indicator">:</span> <span class="s">&quot;bolt-1&quot;</span>
-    <span class="l-Scalar-Plain">className</span><span class="p-Indicator">:</span> <span class="s">&quot;backtype.storm.testing.TestWordCounter&quot;</span>
-    <span class="l-Scalar-Plain">parallelism</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">1</span>
-  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">id</span><span class="p-Indicator">:</span> <span class="s">&quot;bolt-2&quot;</span>
-    <span class="l-Scalar-Plain">className</span><span class="p-Indicator">:</span> <span class="s">&quot;org.apache.storm.flux.wrappers.bolts.LogInfoBolt&quot;</span>
-    <span class="l-Scalar-Plain">parallelism</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">1</span>
+<span class="s">bolts</span><span class="pi">:</span>
+  <span class="pi">-</span> <span class="s">id</span><span class="pi">:</span> <span class="s2">"</span><span class="s">bolt-1"</span>
+    <span class="s">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">backtype.storm.testing.TestWordCounter"</span>
+    <span class="s">parallelism</span><span class="pi">:</span> <span class="s">1</span>
+  <span class="pi">-</span> <span class="s">id</span><span class="pi">:</span> <span class="s2">"</span><span class="s">bolt-2"</span>
+    <span class="s">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">org.apache.storm.flux.wrappers.bolts.LogInfoBolt"</span>
+    <span class="s">parallelism</span><span class="pi">:</span> <span class="s">1</span>
 
 <span class="c1">#stream definitions</span>
-<span class="l-Scalar-Plain">streams</span><span class="p-Indicator">:</span>
-  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="s">&quot;spout-1</span><span class="nv"> </span><span class="s">--&gt;</span><span class="nv"> </span><span class="s">bolt-1&quot;</span> <span class="c1"># name isn&#39;t used (placeholder for logging, UI, etc.)</span>
-    <span class="l-Scalar-Plain">from</span><span class="p-Indicator">:</span> <span class="s">&quot;spout-1&quot;</span>
-    <span class="l-Scalar-Plain">to</span><span class="p-Indicator">:</span> <span class="s">&quot;bolt-1&quot;</span>
-    <span class="l-Scalar-Plain">grouping</span><span class="p-Indicator">:</span>
-      <span class="l-Scalar-Plain">type</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">FIELDS</span>
-      <span class="l-Scalar-Plain">args</span><span class="p-Indicator">:</span> <span class="p-Indicator">[</span><span class="s">&quot;word&quot;</span><span class="p-Indicator">]</span>
+<span class="s">streams</span><span class="pi">:</span>
+  <span class="pi">-</span> <span class="s">name</span><span class="pi">:</span> <span class="s2">"</span><span class="s">spout-1</span><span class="nv"> </span><span class="s">--&gt;</span><span class="nv"> </span><span class="s">bolt-1"</span> <span class="c1"># name isn't used (placeholder for logging, UI, etc.)</span>
+    <span class="s">from</span><span class="pi">:</span> <span class="s2">"</span><span class="s">spout-1"</span>
+    <span class="s">to</span><span class="pi">:</span> <span class="s2">"</span><span class="s">bolt-1"</span>
+    <span class="s">grouping</span><span class="pi">:</span>
+      <span class="s">type</span><span class="pi">:</span> <span class="s">FIELDS</span>
+      <span class="s">args</span><span class="pi">:</span> <span class="pi">[</span><span class="s2">"</span><span class="s">word"</span><span class="pi">]</span>
 
-  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="s">&quot;bolt-1</span><span class="nv"> </span><span class="s">--&gt;</span><span class="nv"> </span><span class="s">bolt2&quot;</span>
-    <span class="l-Scalar-Plain">from</span><span class="p-Indicator">:</span> <span class="s">&quot;bolt-1&quot;</span>
-    <span class="l-Scalar-Plain">to</span><span class="p-Indicator">:</span> <span class="s">&quot;bolt-2&quot;</span>
-    <span class="l-Scalar-Plain">grouping</span><span class="p-Indicator">:</span>
-      <span class="l-Scalar-Plain">type</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">SHUFFLE</span>
+  <span class="pi">-</span> <span class="s">name</span><span class="pi">:</span> <span class="s2">"</span><span class="s">bolt-1</span><span class="nv"> </span><span class="s">--&gt;</span><span class="nv"> </span><span class="s">bolt2"</span>
+    <span class="s">from</span><span class="pi">:</span> <span class="s2">"</span><span class="s">bolt-1"</span>
+    <span class="s">to</span><span class="pi">:</span> <span class="s2">"</span><span class="s">bolt-2"</span>
+    <span class="s">grouping</span><span class="pi">:</span>
+      <span class="s">type</span><span class="pi">:</span> <span class="s">SHUFFLE</span>
+
+
 </code></pre></div>
-<h2 id="property-substitution/filtering">Property Substitution/Filtering</h2>
+<h2 id="property-substitution-filtering">Property Substitution/Filtering</h2>
 
 <p>It&#39;s common for developers to want to easily switch between configurations, for example switching deployment between
 a development environment and a production environment. This can be accomplished by using separate YAML configuration
@@ -387,21 +391,21 @@
 <div class="highlight"><pre><code class="language-bash" data-lang="bash">storm jar myTopology-0.1.0-SNAPSHOT.jar org.apache.storm.flux.Flux --local my_config.yaml --filter dev.properties
 </code></pre></div>
 <p>With the following <code>dev.properties</code> file:</p>
-<div class="highlight"><pre><code class="language-properties" data-lang="properties"><span class="na">kafka.zookeeper.hosts</span><span class="o">:</span> <span class="s">localhost:2181</span>
+<div class="highlight"><pre><code class="language-properties" data-lang="properties"><span class="py">kafka.zookeeper.hosts</span><span class="p">:</span> <span class="s">localhost:2181</span>
 </code></pre></div>
 <p>You would then be able to reference those properties by key in your <code>.yaml</code> file using <code>${}</code> syntax:</p>
-<div class="highlight"><pre><code class="language-yaml" data-lang="yaml">  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">id</span><span class="p-Indicator">:</span> <span class="s">&quot;zkHosts&quot;</span>
-    <span class="l-Scalar-Plain">className</span><span class="p-Indicator">:</span> <span class="s">&quot;storm.kafka.ZkHosts&quot;</span>
-    <span class="l-Scalar-Plain">constructorArgs</span><span class="p-Indicator">:</span>
-      <span class="p-Indicator">-</span> <span class="s">&quot;${kafka.zookeeper.hosts}&quot;</span>
+<div class="highlight"><pre><code class="language-yaml" data-lang="yaml">  <span class="pi">-</span> <span class="s">id</span><span class="pi">:</span> <span class="s2">"</span><span class="s">zkHosts"</span>
+    <span class="s">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">storm.kafka.ZkHosts"</span>
+    <span class="s">constructorArgs</span><span class="pi">:</span>
+      <span class="pi">-</span> <span class="s2">"</span><span class="s">${kafka.zookeeper.hosts}"</span>
 </code></pre></div>
 <p>In this case, Flux would replace <code>${kafka.zookeeper.hosts}</code> with <code>localhost:2181</code> before parsing the YAML contents.</p>
 
-<h3 id="environment-variable-substitution/filtering">Environment Variable Substitution/Filtering</h3>
+<h3 id="environment-variable-substitution-filtering">Environment Variable Substitution/Filtering</h3>
 
 <p>Flux also allows environment variable substitution. For example, if an environment variable named <code>ZK_HOSTS</code> is defined,
 you can reference it in a Flux YAML file with the following syntax:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">${ENV-ZK_HOSTS}
+<div class="highlight"><pre><code class="language-" data-lang="">${ENV-ZK_HOSTS}
 </code></pre></div>
 <h2 id="components">Components</h2>
 
@@ -411,21 +415,21 @@
 <p>Every component is identified, at a minimum, by a unique identifier (String) and a class name (String). For example,
 the following will make an instance of the <code>storm.kafka.StringScheme</code> class available as a reference under the key
 <code>&quot;stringScheme&quot;</code>. This assumes the <code>storm.kafka.StringScheme</code> class has a default constructor.</p>
-<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="l-Scalar-Plain">components</span><span class="p-Indicator">:</span>
-  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">id</span><span class="p-Indicator">:</span> <span class="s">&quot;stringScheme&quot;</span>
-    <span class="l-Scalar-Plain">className</span><span class="p-Indicator">:</span> <span class="s">&quot;storm.kafka.StringScheme&quot;</span>
+<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="s">components</span><span class="pi">:</span>
+  <span class="pi">-</span> <span class="s">id</span><span class="pi">:</span> <span class="s2">"</span><span class="s">stringScheme"</span>
+    <span class="s">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">storm.kafka.StringScheme"</span>
 </code></pre></div>
-<h3 id="contructor-arguments,-references,-properties-and-configuration-methods">Contructor Arguments, References, Properties and Configuration Methods</h3>
+<h3 id="contructor-arguments-references-properties-and-configuration-methods">Contructor Arguments, References, Properties and Configuration Methods</h3>
 
 <h4 id="constructor-arguments">Constructor Arguments</h4>
 
 <p>Arguments to a class constructor can be configured by adding a <code>constructorArgs</code> element to a component definition.
 <code>constructorArgs</code> is a list of objects that will be passed to the class&#39; constructor. The following example creates an
 object by calling the constructor that takes a single string as an argument:</p>
-<div class="highlight"><pre><code class="language-yaml" data-lang="yaml">  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">id</span><span class="p-Indicator">:</span> <span class="s">&quot;zkHosts&quot;</span>
-    <span class="l-Scalar-Plain">className</span><span class="p-Indicator">:</span> <span class="s">&quot;storm.kafka.ZkHosts&quot;</span>
-    <span class="l-Scalar-Plain">constructorArgs</span><span class="p-Indicator">:</span>
-      <span class="p-Indicator">-</span> <span class="s">&quot;localhost:2181&quot;</span>
+<div class="highlight"><pre><code class="language-yaml" data-lang="yaml">  <span class="pi">-</span> <span class="s">id</span><span class="pi">:</span> <span class="s2">"</span><span class="s">zkHosts"</span>
+    <span class="s">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">storm.kafka.ZkHosts"</span>
+    <span class="s">constructorArgs</span><span class="pi">:</span>
+      <span class="pi">-</span> <span class="s2">"</span><span class="s">localhost:2181"</span>
 </code></pre></div>
 <h4 id="references">References</h4>
 
@@ -434,14 +438,14 @@
 
 <p>In the following example, a component with the id <code>&quot;stringScheme&quot;</code> is created and later referenced as an argument
 to another component&#39;s constructor:</p>
-<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="l-Scalar-Plain">components</span><span class="p-Indicator">:</span>
-  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">id</span><span class="p-Indicator">:</span> <span class="s">&quot;stringScheme&quot;</span>
-    <span class="l-Scalar-Plain">className</span><span class="p-Indicator">:</span> <span class="s">&quot;storm.kafka.StringScheme&quot;</span>
+<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="s">components</span><span class="pi">:</span>
+  <span class="pi">-</span> <span class="s">id</span><span class="pi">:</span> <span class="s2">"</span><span class="s">stringScheme"</span>
+    <span class="s">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">storm.kafka.StringScheme"</span>
 
-  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">id</span><span class="p-Indicator">:</span> <span class="s">&quot;stringMultiScheme&quot;</span>
-    <span class="l-Scalar-Plain">className</span><span class="p-Indicator">:</span> <span class="s">&quot;backtype.storm.spout.SchemeAsMultiScheme&quot;</span>
-    <span class="l-Scalar-Plain">constructorArgs</span><span class="p-Indicator">:</span>
-      <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">ref</span><span class="p-Indicator">:</span> <span class="s">&quot;stringScheme&quot;</span> <span class="c1"># component with id &quot;stringScheme&quot; must be declared above.</span>
+  <span class="pi">-</span> <span class="s">id</span><span class="pi">:</span> <span class="s2">"</span><span class="s">stringMultiScheme"</span>
+    <span class="s">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">backtype.storm.spout.SchemeAsMultiScheme"</span>
+    <span class="s">constructorArgs</span><span class="pi">:</span>
+      <span class="pi">-</span> <span class="s">ref</span><span class="pi">:</span> <span class="s2">"</span><span class="s">stringScheme"</span> <span class="c1"># component with id "stringScheme" must be declared above.</span>
 </code></pre></div>
 <p><strong>N.B.:</strong> References can only be used after (below) the object they point to has been declared.</p>
 
@@ -449,22 +453,22 @@
 
 <p>In addition to calling constructors with different arguments, Flux also allows you to configure components using
 JavaBean-like setter methods and fields declared as <code>public</code>:</p>
-<div class="highlight"><pre><code class="language-yaml" data-lang="yaml">  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">id</span><span class="p-Indicator">:</span> <span class="s">&quot;spoutConfig&quot;</span>
-    <span class="l-Scalar-Plain">className</span><span class="p-Indicator">:</span> <span class="s">&quot;storm.kafka.SpoutConfig&quot;</span>
-    <span class="l-Scalar-Plain">constructorArgs</span><span class="p-Indicator">:</span>
+<div class="highlight"><pre><code class="language-yaml" data-lang="yaml">  <span class="pi">-</span> <span class="s">id</span><span class="pi">:</span> <span class="s2">"</span><span class="s">spoutConfig"</span>
+    <span class="s">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">storm.kafka.SpoutConfig"</span>
+    <span class="s">constructorArgs</span><span class="pi">:</span>
       <span class="c1"># brokerHosts</span>
-      <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">ref</span><span class="p-Indicator">:</span> <span class="s">&quot;zkHosts&quot;</span>
+      <span class="pi">-</span> <span class="s">ref</span><span class="pi">:</span> <span class="s2">"</span><span class="s">zkHosts"</span>
       <span class="c1"># topic</span>
-      <span class="p-Indicator">-</span> <span class="s">&quot;myKafkaTopic&quot;</span>
+      <span class="pi">-</span> <span class="s2">"</span><span class="s">myKafkaTopic"</span>
       <span class="c1"># zkRoot</span>
-      <span class="p-Indicator">-</span> <span class="s">&quot;/kafkaSpout&quot;</span>
+      <span class="pi">-</span> <span class="s2">"</span><span class="s">/kafkaSpout"</span>
       <span class="c1"># id</span>
-      <span class="p-Indicator">-</span> <span class="s">&quot;myId&quot;</span>
-    <span class="l-Scalar-Plain">properties</span><span class="p-Indicator">:</span>
-      <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="s">&quot;forceFromStart&quot;</span>
-        <span class="l-Scalar-Plain">value</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">true</span>
-      <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="s">&quot;scheme&quot;</span>
-        <span class="l-Scalar-Plain">ref</span><span class="p-Indicator">:</span> <span class="s">&quot;stringMultiScheme&quot;</span>
+      <span class="pi">-</span> <span class="s2">"</span><span class="s">myId"</span>
+    <span class="s">properties</span><span class="pi">:</span>
+      <span class="pi">-</span> <span class="s">name</span><span class="pi">:</span> <span class="s2">"</span><span class="s">forceFromStart"</span>
+        <span class="s">value</span><span class="pi">:</span> <span class="s">true</span>
+      <span class="pi">-</span> <span class="s">name</span><span class="pi">:</span> <span class="s2">"</span><span class="s">scheme"</span>
+        <span class="s">ref</span><span class="pi">:</span> <span class="s2">"</span><span class="s">stringMultiScheme"</span>
 </code></pre></div>
 <p>In the example above, the <code>properties</code> declaration will cause Flux to look for a public method in the <code>SpoutConfig</code> with
 the signature <code>setForceFromStart(boolean b)</code> and attempt to invoke it. If a setter method is not found, Flux will then
@@ -480,31 +484,31 @@
 that use the builder pattern for configuration/composition.</p>
 
 <p>The following YAML example creates a bolt and configures it by calling several methods:</p>
-<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="l-Scalar-Plain">bolts</span><span class="p-Indicator">:</span>
-  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">id</span><span class="p-Indicator">:</span> <span class="s">&quot;bolt-1&quot;</span>
-    <span class="l-Scalar-Plain">className</span><span class="p-Indicator">:</span> <span class="s">&quot;org.apache.storm.flux.test.TestBolt&quot;</span>
-    <span class="l-Scalar-Plain">parallelism</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">1</span>
-    <span class="l-Scalar-Plain">configMethods</span><span class="p-Indicator">:</span>
-      <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="s">&quot;withFoo&quot;</span>
-        <span class="l-Scalar-Plain">args</span><span class="p-Indicator">:</span>
-          <span class="p-Indicator">-</span> <span class="s">&quot;foo&quot;</span>
-      <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="s">&quot;withBar&quot;</span>
-        <span class="l-Scalar-Plain">args</span><span class="p-Indicator">:</span>
-          <span class="p-Indicator">-</span> <span class="s">&quot;bar&quot;</span>
-      <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="s">&quot;withFooBar&quot;</span>
-        <span class="l-Scalar-Plain">args</span><span class="p-Indicator">:</span>
-          <span class="p-Indicator">-</span> <span class="s">&quot;foo&quot;</span>
-          <span class="p-Indicator">-</span> <span class="s">&quot;bar&quot;</span>
+<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="s">bolts</span><span class="pi">:</span>
+  <span class="pi">-</span> <span class="s">id</span><span class="pi">:</span> <span class="s2">"</span><span class="s">bolt-1"</span>
+    <span class="s">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">org.apache.storm.flux.test.TestBolt"</span>
+    <span class="s">parallelism</span><span class="pi">:</span> <span class="s">1</span>
+    <span class="s">configMethods</span><span class="pi">:</span>
+      <span class="pi">-</span> <span class="s">name</span><span class="pi">:</span> <span class="s2">"</span><span class="s">withFoo"</span>
+        <span class="s">args</span><span class="pi">:</span>
+          <span class="pi">-</span> <span class="s2">"</span><span class="s">foo"</span>
+      <span class="pi">-</span> <span class="s">name</span><span class="pi">:</span> <span class="s2">"</span><span class="s">withBar"</span>
+        <span class="s">args</span><span class="pi">:</span>
+          <span class="pi">-</span> <span class="s2">"</span><span class="s">bar"</span>
+      <span class="pi">-</span> <span class="s">name</span><span class="pi">:</span> <span class="s2">"</span><span class="s">withFooBar"</span>
+        <span class="s">args</span><span class="pi">:</span>
+          <span class="pi">-</span> <span class="s2">"</span><span class="s">foo"</span>
+          <span class="pi">-</span> <span class="s2">"</span><span class="s">bar"</span>
 </code></pre></div>
 <p>The signatures of the corresponding methods are as follows:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java">    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">withFoo</span><span class="o">(</span><span class="n">String</span> <span class="n">foo</span><span class="o">);</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">withBar</span><span class="o">(</span><span class="n">String</span> <span class="n">bar</span><span class="o">);</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">withFooBar</span><span class="o">(</span><span class="n">String</span> <span class="n">foo</span><span class="o">,</span> <span class="n">String</span> <span class="n">bar</span><span class="o">);</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java">    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">withFoo</span><span class="p">(</span><span class="n">String</span> <span class="n">foo</span><span class="o">);</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">withBar</span><span class="o">(</span><span class="n">String</span> <span class="n">bar</span><span class="o">);</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">withFooBar</span><span class="o">(</span><span class="n">String</span> <span class="n">foo</span><span class="o">,</span> <span class="n">String</span> <span class="n">bar</span><span class="o">);</span>
 </code></pre></div>
 <p>Arguments passed to configuration methods work much the same way as constructor arguments, and support references as
 well.</p>
 
-<h3 id="using-java-enums-in-contructor-arguments,-references,-properties-and-configuration-methods">Using Java <code>enum</code>s in Contructor Arguments, References, Properties and Configuration Methods</h3>
+<h3 id="using-java-enums-in-contructor-arguments-references-properties-and-configuration-methods">Using Java <code>enum</code>s in Constructor Arguments, References, Properties and Configuration Methods</h3>
 
 <p>You can easily use Java <code>enum</code> values as arguments in a Flux YAML file, simply by referencing the name of the <code>enum</code>.</p>
 
@@ -514,27 +518,28 @@
 <span class="o">}</span>
 </code></pre></div>
 <p>And the <code>org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy</code> class has the following constructor:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="nf">FileSizeRotationPolicy</span><span class="o">(</span><span class="kt">float</span> <span class="n">count</span><span class="o">,</span> <span class="n">Units</span> <span class="n">units</span><span class="o">)</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="nf">FileSizeRotationPolicy</span><span class="p">(</span><span class="kt">float</span> <span class="n">count</span><span class="o">,</span> <span class="n">Units</span> <span class="n">units</span><span class="o">)</span>
+
 </code></pre></div>
 <p>The following Flux <code>component</code> definition could be used to call the constructor:</p>
-<div class="highlight"><pre><code class="language-yaml" data-lang="yaml">  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">id</span><span class="p-Indicator">:</span> <span class="s">&quot;rotationPolicy&quot;</span>
-    <span class="l-Scalar-Plain">className</span><span class="p-Indicator">:</span> <span class="s">&quot;org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy&quot;</span>
-    <span class="l-Scalar-Plain">constructorArgs</span><span class="p-Indicator">:</span>
-      <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">5.0</span>
-      <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">MB</span>
+<div class="highlight"><pre><code class="language-yaml" data-lang="yaml">  <span class="pi">-</span> <span class="s">id</span><span class="pi">:</span> <span class="s2">"</span><span class="s">rotationPolicy"</span>
+    <span class="s">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">org.apache.storm.hdfs.bolt.rotation.FileSizeRotationPolicy"</span>
+    <span class="s">constructorArgs</span><span class="pi">:</span>
+      <span class="pi">-</span> <span class="s">5.0</span>
+      <span class="pi">-</span> <span class="s">MB</span>
 </code></pre></div>
 <p>The above definition is functionally equivalent to the following Java code:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="c1">// rotate files when they reach 5MB</span>
-<span class="n">FileRotationPolicy</span> <span class="n">rotationPolicy</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">FileSizeRotationPolicy</span><span class="o">(</span><span class="mf">5.0f</span><span class="o">,</span> <span class="n">Units</span><span class="o">.</span><span class="na">MB</span><span class="o">);</span>
+<span class="n">FileRotationPolicy</span> <span class="n">rotationPolicy</span> <span class="o">=</span> <span class="k">new</span> <span class="n">FileSizeRotationPolicy</span><span class="o">(</span><span class="mf">5.0f</span><span class="o">,</span> <span class="n">Units</span><span class="o">.</span><span class="na">MB</span><span class="o">);</span>
 </code></pre></div>
 <h2 id="topology-config">Topology Config</h2>
 
 <p>The <code>config</code> section is simply a map of Storm topology configuration parameters that will be passed to the
 <code>backtype.storm.StormSubmitter</code> as an instance of the <code>backtype.storm.Config</code> class:</p>
-<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="l-Scalar-Plain">config</span><span class="p-Indicator">:</span>
-  <span class="l-Scalar-Plain">topology.workers</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">4</span>
-  <span class="l-Scalar-Plain">topology.max.spout.pending</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">1000</span>
-  <span class="l-Scalar-Plain">topology.message.timeout.secs</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">30</span>
+<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="s">config</span><span class="pi">:</span>
+  <span class="s">topology.workers</span><span class="pi">:</span> <span class="s">4</span>
+  <span class="s">topology.max.spout.pending</span><span class="pi">:</span> <span class="s">1000</span>
+  <span class="s">topology.message.timeout.secs</span><span class="pi">:</span> <span class="s">30</span>
 </code></pre></div>
 <h1 id="existing-topologies">Existing Topologies</h1>
 
@@ -544,22 +549,22 @@
 
 <p>The easiest way to use an existing topology class is to define
 a <code>getTopology()</code> instance method with one of the following signatures:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="n">StormTopology</span> <span class="nf">getTopology</span><span class="o">(</span><span class="n">Map</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Object</span><span class="o">&gt;</span> <span class="n">config</span><span class="o">)</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="n">StormTopology</span> <span class="nf">getTopology</span><span class="p">(</span><span class="n">Map</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Object</span><span class="o">&gt;</span> <span class="n">config</span><span class="o">)</span>
 </code></pre></div>
 <p>or:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="n">StormTopology</span> <span class="nf">getTopology</span><span class="o">(</span><span class="n">Config</span> <span class="n">config</span><span class="o">)</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="n">StormTopology</span> <span class="nf">getTopology</span><span class="p">(</span><span class="n">Config</span> <span class="n">config</span><span class="o">)</span>
 </code></pre></div>
 <p>You could then use the following YAML to configure your topology:</p>
-<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="s">&quot;existing-topology&quot;</span>
-<span class="l-Scalar-Plain">topologySource</span><span class="p-Indicator">:</span>
-  <span class="l-Scalar-Plain">className</span><span class="p-Indicator">:</span> <span class="s">&quot;org.apache.storm.flux.test.SimpleTopology&quot;</span>
+<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="s">name</span><span class="pi">:</span> <span class="s2">"</span><span class="s">existing-topology"</span>
+<span class="s">topologySource</span><span class="pi">:</span>
+  <span class="s">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">org.apache.storm.flux.test.SimpleTopology"</span>
 </code></pre></div>
 <p>If the class you would like to use as a topology source has a different method name (i.e. not <code>getTopology</code>), you can
 specify it with the <code>methodName</code> property:</p>
-<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="s">&quot;existing-topology&quot;</span>
-<span class="l-Scalar-Plain">topologySource</span><span class="p-Indicator">:</span>
-  <span class="l-Scalar-Plain">className</span><span class="p-Indicator">:</span> <span class="s">&quot;org.apache.storm.flux.test.SimpleTopology&quot;</span>
-  <span class="l-Scalar-Plain">methodName</span><span class="p-Indicator">:</span> <span class="s">&quot;getTopologyWithDifferentMethodName&quot;</span>
+<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="s">name</span><span class="pi">:</span> <span class="s2">"</span><span class="s">existing-topology"</span>
+<span class="s">topologySource</span><span class="pi">:</span>
+  <span class="s">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">org.apache.storm.flux.test.SimpleTopology"</span>
+  <span class="s">methodName</span><span class="pi">:</span> <span class="s2">"</span><span class="s">getTopologyWithDifferentMethodName"</span>
 </code></pre></div>
 <p><strong>N.B.:</strong> The specified method must accept a single argument of type <code>java.util.Map&lt;String, Object&gt;</code> or
 <code>backtype.storm.Config</code>, and return a <code>backtype.storm.generated.StormTopology</code> object.</p>
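<p>Putting this together, a complete minimal Flux file for an existing topology might look like the following (an illustrative sketch reusing the sample class above; substitute your own topology source class and method names):</p>

```yaml
# illustrative minimal Flux definition for an existing topology;
# the class and method names are the sample ones used above
name: "existing-topology"

config:
  topology.workers: 1

topologySource:
  className: "org.apache.storm.flux.test.SimpleTopology"
  methodName: "getTopologyWithDifferentMethodName"
```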
@@ -576,91 +581,92 @@
 well.</p>
 
 <p>Shell spout example:</p>
-<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="l-Scalar-Plain">spouts</span><span class="p-Indicator">:</span>
-  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">id</span><span class="p-Indicator">:</span> <span class="s">&quot;sentence-spout&quot;</span>
-    <span class="l-Scalar-Plain">className</span><span class="p-Indicator">:</span> <span class="s">&quot;org.apache.storm.flux.spouts.GenericShellSpout&quot;</span>
+<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="s">spouts</span><span class="pi">:</span>
+  <span class="pi">-</span> <span class="s">id</span><span class="pi">:</span> <span class="s2">"</span><span class="s">sentence-spout"</span>
+    <span class="s">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">org.apache.storm.flux.spouts.GenericShellSpout"</span>
     <span class="c1"># shell spout constructor takes 2 arguments: String[], String[]</span>
-    <span class="l-Scalar-Plain">constructorArgs</span><span class="p-Indicator">:</span>
+    <span class="s">constructorArgs</span><span class="pi">:</span>
       <span class="c1"># command line</span>
-      <span class="p-Indicator">-</span> <span class="p-Indicator">[</span><span class="s">&quot;node&quot;</span><span class="p-Indicator">,</span> <span class="s">&quot;randomsentence.js&quot;</span><span class="p-Indicator">]</span>
+      <span class="pi">-</span> <span class="pi">[</span><span class="s2">"</span><span class="s">node"</span><span class="pi">,</span> <span class="s2">"</span><span class="s">randomsentence.js"</span><span class="pi">]</span>
       <span class="c1"># output fields</span>
-      <span class="p-Indicator">-</span> <span class="p-Indicator">[</span><span class="s">&quot;word&quot;</span><span class="p-Indicator">]</span>
-    <span class="l-Scalar-Plain">parallelism</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">1</span>
+      <span class="pi">-</span> <span class="pi">[</span><span class="s2">"</span><span class="s">word"</span><span class="pi">]</span>
+    <span class="s">parallelism</span><span class="pi">:</span> <span class="s">1</span>
 </code></pre></div>
 <p>Kafka spout example:</p>
-<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="l-Scalar-Plain">components</span><span class="p-Indicator">:</span>
-  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">id</span><span class="p-Indicator">:</span> <span class="s">&quot;stringScheme&quot;</span>
-    <span class="l-Scalar-Plain">className</span><span class="p-Indicator">:</span> <span class="s">&quot;storm.kafka.StringScheme&quot;</span>
+<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="s">components</span><span class="pi">:</span>
+  <span class="pi">-</span> <span class="s">id</span><span class="pi">:</span> <span class="s2">"</span><span class="s">stringScheme"</span>
+    <span class="s">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">storm.kafka.StringScheme"</span>
 
-  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">id</span><span class="p-Indicator">:</span> <span class="s">&quot;stringMultiScheme&quot;</span>
-    <span class="l-Scalar-Plain">className</span><span class="p-Indicator">:</span> <span class="s">&quot;backtype.storm.spout.SchemeAsMultiScheme&quot;</span>
-    <span class="l-Scalar-Plain">constructorArgs</span><span class="p-Indicator">:</span>
-      <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">ref</span><span class="p-Indicator">:</span> <span class="s">&quot;stringScheme&quot;</span>
+  <span class="pi">-</span> <span class="s">id</span><span class="pi">:</span> <span class="s2">"</span><span class="s">stringMultiScheme"</span>
+    <span class="s">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">backtype.storm.spout.SchemeAsMultiScheme"</span>
+    <span class="s">constructorArgs</span><span class="pi">:</span>
+      <span class="pi">-</span> <span class="s">ref</span><span class="pi">:</span> <span class="s2">"</span><span class="s">stringScheme"</span>
 
-  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">id</span><span class="p-Indicator">:</span> <span class="s">&quot;zkHosts&quot;</span>
-    <span class="l-Scalar-Plain">className</span><span class="p-Indicator">:</span> <span class="s">&quot;storm.kafka.ZkHosts&quot;</span>
-    <span class="l-Scalar-Plain">constructorArgs</span><span class="p-Indicator">:</span>
-      <span class="p-Indicator">-</span> <span class="s">&quot;localhost:2181&quot;</span>
+  <span class="pi">-</span> <span class="s">id</span><span class="pi">:</span> <span class="s2">"</span><span class="s">zkHosts"</span>
+    <span class="s">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">storm.kafka.ZkHosts"</span>
+    <span class="s">constructorArgs</span><span class="pi">:</span>
+      <span class="pi">-</span> <span class="s2">"</span><span class="s">localhost:2181"</span>
 
 <span class="c1"># Alternative kafka config</span>
-<span class="c1">#  - id: &quot;kafkaConfig&quot;</span>
-<span class="c1">#    className: &quot;storm.kafka.KafkaConfig&quot;</span>
+<span class="c1">#  - id: "kafkaConfig"</span>
+<span class="c1">#    className: "storm.kafka.KafkaConfig"</span>
 <span class="c1">#    constructorArgs:</span>
 <span class="c1">#      # brokerHosts</span>
-<span class="c1">#      - ref: &quot;zkHosts&quot;</span>
+<span class="c1">#      - ref: "zkHosts"</span>
 <span class="c1">#      # topic</span>
-<span class="c1">#      - &quot;myKafkaTopic&quot;</span>
+<span class="c1">#      - "myKafkaTopic"</span>
 <span class="c1">#      # clientId (optional)</span>
-<span class="c1">#      - &quot;myKafkaClientId&quot;</span>
+<span class="c1">#      - "myKafkaClientId"</span>
 
-  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">id</span><span class="p-Indicator">:</span> <span class="s">&quot;spoutConfig&quot;</span>
-    <span class="l-Scalar-Plain">className</span><span class="p-Indicator">:</span> <span class="s">&quot;storm.kafka.SpoutConfig&quot;</span>
-    <span class="l-Scalar-Plain">constructorArgs</span><span class="p-Indicator">:</span>
+  <span class="pi">-</span> <span class="s">id</span><span class="pi">:</span> <span class="s2">"</span><span class="s">spoutConfig"</span>
+    <span class="s">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">storm.kafka.SpoutConfig"</span>
+    <span class="s">constructorArgs</span><span class="pi">:</span>
       <span class="c1"># brokerHosts</span>
-      <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">ref</span><span class="p-Indicator">:</span> <span class="s">&quot;zkHosts&quot;</span>
+      <span class="pi">-</span> <span class="s">ref</span><span class="pi">:</span> <span class="s2">"</span><span class="s">zkHosts"</span>
       <span class="c1"># topic</span>
-      <span class="p-Indicator">-</span> <span class="s">&quot;myKafkaTopic&quot;</span>
+      <span class="pi">-</span> <span class="s2">"</span><span class="s">myKafkaTopic"</span>
       <span class="c1"># zkRoot</span>
-      <span class="p-Indicator">-</span> <span class="s">&quot;/kafkaSpout&quot;</span>
+      <span class="pi">-</span> <span class="s2">"</span><span class="s">/kafkaSpout"</span>
       <span class="c1"># id</span>
-      <span class="p-Indicator">-</span> <span class="s">&quot;myId&quot;</span>
-    <span class="l-Scalar-Plain">properties</span><span class="p-Indicator">:</span>
-      <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="s">&quot;forceFromStart&quot;</span>
-        <span class="l-Scalar-Plain">value</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">true</span>
-      <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="s">&quot;scheme&quot;</span>
-        <span class="l-Scalar-Plain">ref</span><span class="p-Indicator">:</span> <span class="s">&quot;stringMultiScheme&quot;</span>
+      <span class="pi">-</span> <span class="s2">"</span><span class="s">myId"</span>
+    <span class="s">properties</span><span class="pi">:</span>
+      <span class="pi">-</span> <span class="s">name</span><span class="pi">:</span> <span class="s2">"</span><span class="s">forceFromStart"</span>
+        <span class="s">value</span><span class="pi">:</span> <span class="s">true</span>
+      <span class="pi">-</span> <span class="s">name</span><span class="pi">:</span> <span class="s2">"</span><span class="s">scheme"</span>
+        <span class="s">ref</span><span class="pi">:</span> <span class="s2">"</span><span class="s">stringMultiScheme"</span>
 
-<span class="l-Scalar-Plain">config</span><span class="p-Indicator">:</span>
-  <span class="l-Scalar-Plain">topology.workers</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">1</span>
+<span class="s">config</span><span class="pi">:</span>
+  <span class="s">topology.workers</span><span class="pi">:</span> <span class="s">1</span>
 
 <span class="c1"># spout definitions</span>
-<span class="l-Scalar-Plain">spouts</span><span class="p-Indicator">:</span>
-  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">id</span><span class="p-Indicator">:</span> <span class="s">&quot;kafka-spout&quot;</span>
-    <span class="l-Scalar-Plain">className</span><span class="p-Indicator">:</span> <span class="s">&quot;storm.kafka.KafkaSpout&quot;</span>
-    <span class="l-Scalar-Plain">constructorArgs</span><span class="p-Indicator">:</span>
-      <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">ref</span><span class="p-Indicator">:</span> <span class="s">&quot;spoutConfig&quot;</span>
+<span class="s">spouts</span><span class="pi">:</span>
+  <span class="pi">-</span> <span class="s">id</span><span class="pi">:</span> <span class="s2">"</span><span class="s">kafka-spout"</span>
+    <span class="s">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">storm.kafka.KafkaSpout"</span>
+    <span class="s">constructorArgs</span><span class="pi">:</span>
+      <span class="pi">-</span> <span class="s">ref</span><span class="pi">:</span> <span class="s2">"</span><span class="s">spoutConfig"</span>
+
 </code></pre></div>
 <p>Bolt Examples:</p>
 <div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="c1"># bolt definitions</span>
-<span class="l-Scalar-Plain">bolts</span><span class="p-Indicator">:</span>
-  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">id</span><span class="p-Indicator">:</span> <span class="s">&quot;splitsentence&quot;</span>
-    <span class="l-Scalar-Plain">className</span><span class="p-Indicator">:</span> <span class="s">&quot;org.apache.storm.flux.bolts.GenericShellBolt&quot;</span>
-    <span class="l-Scalar-Plain">constructorArgs</span><span class="p-Indicator">:</span>
+<span class="s">bolts</span><span class="pi">:</span>
+  <span class="pi">-</span> <span class="s">id</span><span class="pi">:</span> <span class="s2">"</span><span class="s">splitsentence"</span>
+    <span class="s">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">org.apache.storm.flux.bolts.GenericShellBolt"</span>
+    <span class="s">constructorArgs</span><span class="pi">:</span>
       <span class="c1"># command line</span>
-      <span class="p-Indicator">-</span> <span class="p-Indicator">[</span><span class="s">&quot;python&quot;</span><span class="p-Indicator">,</span> <span class="s">&quot;splitsentence.py&quot;</span><span class="p-Indicator">]</span>
+      <span class="pi">-</span> <span class="pi">[</span><span class="s2">"</span><span class="s">python"</span><span class="pi">,</span> <span class="s2">"</span><span class="s">splitsentence.py"</span><span class="pi">]</span>
       <span class="c1"># output fields</span>
-      <span class="p-Indicator">-</span> <span class="p-Indicator">[</span><span class="s">&quot;word&quot;</span><span class="p-Indicator">]</span>
-    <span class="l-Scalar-Plain">parallelism</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">1</span>
+      <span class="pi">-</span> <span class="pi">[</span><span class="s2">"</span><span class="s">word"</span><span class="pi">]</span>
+    <span class="s">parallelism</span><span class="pi">:</span> <span class="s">1</span>
     <span class="c1"># ...</span>
 
-  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">id</span><span class="p-Indicator">:</span> <span class="s">&quot;log&quot;</span>
-    <span class="l-Scalar-Plain">className</span><span class="p-Indicator">:</span> <span class="s">&quot;org.apache.storm.flux.wrappers.bolts.LogInfoBolt&quot;</span>
-    <span class="l-Scalar-Plain">parallelism</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">1</span>
+  <span class="pi">-</span> <span class="s">id</span><span class="pi">:</span> <span class="s2">"</span><span class="s">log"</span>
+    <span class="s">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">org.apache.storm.flux.wrappers.bolts.LogInfoBolt"</span>
+    <span class="s">parallelism</span><span class="pi">:</span> <span class="s">1</span>
     <span class="c1"># ...</span>
 
-  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">id</span><span class="p-Indicator">:</span> <span class="s">&quot;count&quot;</span>
-    <span class="l-Scalar-Plain">className</span><span class="p-Indicator">:</span> <span class="s">&quot;backtype.storm.testing.TestWordCounter&quot;</span>
-    <span class="l-Scalar-Plain">parallelism</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">1</span>
+  <span class="pi">-</span> <span class="s">id</span><span class="pi">:</span> <span class="s2">"</span><span class="s">count"</span>
+    <span class="s">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">backtype.storm.testing.TestWordCounter"</span>
+    <span class="s">parallelism</span><span class="pi">:</span> <span class="s">1</span>
     <span class="c1"># ...</span>
 </code></pre></div>
 <h2 id="streams-and-stream-groupings">Streams and Stream Groupings</h2>
@@ -689,31 +695,31 @@
 <p><strong><code>customClass</code></strong> For the <code>CUSTOM</code> grouping, a definition of a custom grouping class instance.</p>
 
 <p>The <code>streams</code> definition example below sets up a topology with the following wiring:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">    kafka-spout --&gt; splitsentence --&gt; count --&gt; log
+<div class="highlight"><pre><code class="language-text" data-lang="text">    kafka-spout --&gt; splitsentence --&gt; count --&gt; log
 </code></pre></div><div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="c1">#stream definitions</span>
 <span class="c1"># stream definitions define connections between spouts and bolts.</span>
 <span class="c1"># note that such connections can be cyclical</span>
 <span class="c1"># custom stream groupings are also supported</span>
 
-<span class="l-Scalar-Plain">streams</span><span class="p-Indicator">:</span>
-  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="s">&quot;kafka</span><span class="nv"> </span><span class="s">--&gt;</span><span class="nv"> </span><span class="s">split&quot;</span> <span class="c1"># name isn&#39;t used (placeholder for logging, UI, etc.)</span>
-    <span class="l-Scalar-Plain">from</span><span class="p-Indicator">:</span> <span class="s">&quot;kafka-spout&quot;</span>
-    <span class="l-Scalar-Plain">to</span><span class="p-Indicator">:</span> <span class="s">&quot;splitsentence&quot;</span>
-    <span class="l-Scalar-Plain">grouping</span><span class="p-Indicator">:</span>
-      <span class="l-Scalar-Plain">type</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">SHUFFLE</span>
+<span class="s">streams</span><span class="pi">:</span>
+  <span class="pi">-</span> <span class="s">name</span><span class="pi">:</span> <span class="s2">"</span><span class="s">kafka</span><span class="nv"> </span><span class="s">--&gt;</span><span class="nv"> </span><span class="s">split"</span> <span class="c1"># name isn't used (placeholder for logging, UI, etc.)</span>
+    <span class="s">from</span><span class="pi">:</span> <span class="s2">"</span><span class="s">kafka-spout"</span>
+    <span class="s">to</span><span class="pi">:</span> <span class="s2">"</span><span class="s">splitsentence"</span>
+    <span class="s">grouping</span><span class="pi">:</span>
+      <span class="s">type</span><span class="pi">:</span> <span class="s">SHUFFLE</span>
 
-  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="s">&quot;split</span><span class="nv"> </span><span class="s">--&gt;</span><span class="nv"> </span><span class="s">count&quot;</span>
-    <span class="l-Scalar-Plain">from</span><span class="p-Indicator">:</span> <span class="s">&quot;splitsentence&quot;</span>
-    <span class="l-Scalar-Plain">to</span><span class="p-Indicator">:</span> <span class="s">&quot;count&quot;</span>
-    <span class="l-Scalar-Plain">grouping</span><span class="p-Indicator">:</span>
-      <span class="l-Scalar-Plain">type</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">FIELDS</span>
-      <span class="l-Scalar-Plain">args</span><span class="p-Indicator">:</span> <span class="p-Indicator">[</span><span class="s">&quot;word&quot;</span><span class="p-Indicator">]</span>
+  <span class="pi">-</span> <span class="s">name</span><span class="pi">:</span> <span class="s2">"</span><span class="s">split</span><span class="nv"> </span><span class="s">--&gt;</span><span class="nv"> </span><span class="s">count"</span>
+    <span class="s">from</span><span class="pi">:</span> <span class="s2">"</span><span class="s">splitsentence"</span>
+    <span class="s">to</span><span class="pi">:</span> <span class="s2">"</span><span class="s">count"</span>
+    <span class="s">grouping</span><span class="pi">:</span>
+      <span class="s">type</span><span class="pi">:</span> <span class="s">FIELDS</span>
+      <span class="s">args</span><span class="pi">:</span> <span class="pi">[</span><span class="s2">"</span><span class="s">word"</span><span class="pi">]</span>
 
-  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="s">&quot;count</span><span class="nv"> </span><span class="s">--&gt;</span><span class="nv"> </span><span class="s">log&quot;</span>
-    <span class="l-Scalar-Plain">from</span><span class="p-Indicator">:</span> <span class="s">&quot;count&quot;</span>
-    <span class="l-Scalar-Plain">to</span><span class="p-Indicator">:</span> <span class="s">&quot;log&quot;</span>
-    <span class="l-Scalar-Plain">grouping</span><span class="p-Indicator">:</span>
-      <span class="l-Scalar-Plain">type</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">SHUFFLE</span>
+  <span class="pi">-</span> <span class="s">name</span><span class="pi">:</span> <span class="s2">"</span><span class="s">count</span><span class="nv"> </span><span class="s">--&gt;</span><span class="nv"> </span><span class="s">log"</span>
+    <span class="s">from</span><span class="pi">:</span> <span class="s2">"</span><span class="s">count"</span>
+    <span class="s">to</span><span class="pi">:</span> <span class="s2">"</span><span class="s">log"</span>
+    <span class="s">grouping</span><span class="pi">:</span>
+      <span class="s">type</span><span class="pi">:</span> <span class="s">SHUFFLE</span>
 </code></pre></div>
 <h3 id="custom-stream-groupings">Custom Stream Groupings</h3>
 
@@ -723,15 +729,15 @@
 
 <p>The example below creates a Stream with an instance of the <code>backtype.storm.testing.NGrouping</code> custom stream grouping
 class.</p>
-<div class="highlight"><pre><code class="language-yaml" data-lang="yaml">  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="s">&quot;bolt-1</span><span class="nv"> </span><span class="s">--&gt;</span><span class="nv"> </span><span class="s">bolt2&quot;</span>
-    <span class="l-Scalar-Plain">from</span><span class="p-Indicator">:</span> <span class="s">&quot;bolt-1&quot;</span>
-    <span class="l-Scalar-Plain">to</span><span class="p-Indicator">:</span> <span class="s">&quot;bolt-2&quot;</span>
-    <span class="l-Scalar-Plain">grouping</span><span class="p-Indicator">:</span>
-      <span class="l-Scalar-Plain">type</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">CUSTOM</span>
-      <span class="l-Scalar-Plain">customClass</span><span class="p-Indicator">:</span>
-        <span class="l-Scalar-Plain">className</span><span class="p-Indicator">:</span> <span class="s">&quot;backtype.storm.testing.NGrouping&quot;</span>
-        <span class="l-Scalar-Plain">constructorArgs</span><span class="p-Indicator">:</span>
-          <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">1</span>
+<div class="highlight"><pre><code class="language-yaml" data-lang="yaml">  <span class="pi">-</span> <span class="s">name</span><span class="pi">:</span> <span class="s2">"</span><span class="s">bolt-1</span><span class="nv"> </span><span class="s">--&gt;</span><span class="nv"> </span><span class="s">bolt-2"</span>
+    <span class="s">from</span><span class="pi">:</span> <span class="s2">"</span><span class="s">bolt-1"</span>
+    <span class="s">to</span><span class="pi">:</span> <span class="s2">"</span><span class="s">bolt-2"</span>
+    <span class="s">grouping</span><span class="pi">:</span>
+      <span class="s">type</span><span class="pi">:</span> <span class="s">CUSTOM</span>
+      <span class="s">customClass</span><span class="pi">:</span>
+        <span class="s">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">backtype.storm.testing.NGrouping"</span>
+        <span class="s">constructorArgs</span><span class="pi">:</span>
+          <span class="pi">-</span> <span class="s">1</span>
 </code></pre></div>
 <h2 id="includes-and-overrides">Includes and Overrides</h2>
 
@@ -739,10 +745,10 @@
 same file. Includes may be either files or classpath resources.</p>
 
 <p>Includes are specified as a list of maps:</p>
-<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="l-Scalar-Plain">includes</span><span class="p-Indicator">:</span>
-  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">resource</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">false</span>
-    <span class="l-Scalar-Plain">file</span><span class="p-Indicator">:</span> <span class="s">&quot;src/test/resources/configs/shell_test.yaml&quot;</span>
-    <span class="l-Scalar-Plain">override</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">false</span>
+<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="s">includes</span><span class="pi">:</span>
+  <span class="pi">-</span> <span class="s">resource</span><span class="pi">:</span> <span class="s">false</span>
+    <span class="s">file</span><span class="pi">:</span> <span class="s2">"</span><span class="s">src/test/resources/configs/shell_test.yaml"</span>
+    <span class="s">override</span><span class="pi">:</span> <span class="s">false</span>
 </code></pre></div>
 <p>If the <code>resource</code> property is set to <code>true</code>, the include will be loaded as a classpath resource from the value of the
 <code>file</code> attribute, otherwise it will be treated as a regular file.</p>
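<p>For example, the same include could instead be loaded from the classpath (a sketch; the resource path is hypothetical and must be available on the topology's classpath):</p>

```yaml
# hypothetical classpath-resource include
includes:
  - resource: true
    file: "configs/shell_test.yaml"
    # with override set to true, definitions in the included file
    # replace matching definitions in the including file
    override: true
```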
@@ -759,80 +765,80 @@
 
 <p>Topology YAML config:</p>
 <div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="nn">---</span>
-<span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="s">&quot;shell-topology&quot;</span>
-<span class="l-Scalar-Plain">config</span><span class="p-Indicator">:</span>
-  <span class="l-Scalar-Plain">topology.workers</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">1</span>
+<span class="s">name</span><span class="pi">:</span> <span class="s2">"</span><span class="s">shell-topology"</span>
+<span class="s">config</span><span class="pi">:</span>
+  <span class="s">topology.workers</span><span class="pi">:</span> <span class="s">1</span>
 
 <span class="c1"># spout definitions</span>
-<span class="l-Scalar-Plain">spouts</span><span class="p-Indicator">:</span>
-  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">id</span><span class="p-Indicator">:</span> <span class="s">&quot;sentence-spout&quot;</span>
-    <span class="l-Scalar-Plain">className</span><span class="p-Indicator">:</span> <span class="s">&quot;org.apache.storm.flux.spouts.GenericShellSpout&quot;</span>
+<span class="s">spouts</span><span class="pi">:</span>
+  <span class="pi">-</span> <span class="s">id</span><span class="pi">:</span> <span class="s2">"</span><span class="s">sentence-spout"</span>
+    <span class="s">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">org.apache.storm.flux.spouts.GenericShellSpout"</span>
     <span class="c1"># shell spout constructor takes 2 arguments: String[], String[]</span>
-    <span class="l-Scalar-Plain">constructorArgs</span><span class="p-Indicator">:</span>
+    <span class="s">constructorArgs</span><span class="pi">:</span>
       <span class="c1"># command line</span>
-      <span class="p-Indicator">-</span> <span class="p-Indicator">[</span><span class="s">&quot;node&quot;</span><span class="p-Indicator">,</span> <span class="s">&quot;randomsentence.js&quot;</span><span class="p-Indicator">]</span>
+      <span class="pi">-</span> <span class="pi">[</span><span class="s2">"</span><span class="s">node"</span><span class="pi">,</span> <span class="s2">"</span><span class="s">randomsentence.js"</span><span class="pi">]</span>
       <span class="c1"># output fields</span>
-      <span class="p-Indicator">-</span> <span class="p-Indicator">[</span><span class="s">&quot;word&quot;</span><span class="p-Indicator">]</span>
-    <span class="l-Scalar-Plain">parallelism</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">1</span>
+      <span class="pi">-</span> <span class="pi">[</span><span class="s2">"</span><span class="s">word"</span><span class="pi">]</span>
+    <span class="s">parallelism</span><span class="pi">:</span> <span class="s">1</span>
 
 <span class="c1"># bolt definitions</span>
-<span class="l-Scalar-Plain">bolts</span><span class="p-Indicator">:</span>
-  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">id</span><span class="p-Indicator">:</span> <span class="s">&quot;splitsentence&quot;</span>
-    <span class="l-Scalar-Plain">className</span><span class="p-Indicator">:</span> <span class="s">&quot;org.apache.storm.flux.bolts.GenericShellBolt&quot;</span>
-    <span class="l-Scalar-Plain">constructorArgs</span><span class="p-Indicator">:</span>
+<span class="s">bolts</span><span class="pi">:</span>
+  <span class="pi">-</span> <span class="s">id</span><span class="pi">:</span> <span class="s2">"</span><span class="s">splitsentence"</span>
+    <span class="s">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">org.apache.storm.flux.bolts.GenericShellBolt"</span>
+    <span class="s">constructorArgs</span><span class="pi">:</span>
       <span class="c1"># command line</span>
-      <span class="p-Indicator">-</span> <span class="p-Indicator">[</span><span class="s">&quot;python&quot;</span><span class="p-Indicator">,</span> <span class="s">&quot;splitsentence.py&quot;</span><span class="p-Indicator">]</span>
+      <span class="pi">-</span> <span class="pi">[</span><span class="s2">"</span><span class="s">python"</span><span class="pi">,</span> <span class="s2">"</span><span class="s">splitsentence.py"</span><span class="pi">]</span>
       <span class="c1"># output fields</span>
-      <span class="p-Indicator">-</span> <span class="p-Indicator">[</span><span class="s">&quot;word&quot;</span><span class="p-Indicator">]</span>
-    <span class="l-Scalar-Plain">parallelism</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">1</span>
+      <span class="pi">-</span> <span class="pi">[</span><span class="s2">"</span><span class="s">word"</span><span class="pi">]</span>
+    <span class="s">parallelism</span><span class="pi">:</span> <span class="s">1</span>
 
-  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">id</span><span class="p-Indicator">:</span> <span class="s">&quot;log&quot;</span>
-    <span class="l-Scalar-Plain">className</span><span class="p-Indicator">:</span> <span class="s">&quot;org.apache.storm.flux.wrappers.bolts.LogInfoBolt&quot;</span>
-    <span class="l-Scalar-Plain">parallelism</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">1</span>
+  <span class="pi">-</span> <span class="s">id</span><span class="pi">:</span> <span class="s2">"</span><span class="s">log"</span>
+    <span class="s">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">org.apache.storm.flux.wrappers.bolts.LogInfoBolt"</span>
+    <span class="s">parallelism</span><span class="pi">:</span> <span class="s">1</span>
 
-  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">id</span><span class="p-Indicator">:</span> <span class="s">&quot;count&quot;</span>
-    <span class="l-Scalar-Plain">className</span><span class="p-Indicator">:</span> <span class="s">&quot;backtype.storm.testing.TestWordCounter&quot;</span>
-    <span class="l-Scalar-Plain">parallelism</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">1</span>
+  <span class="pi">-</span> <span class="s">id</span><span class="pi">:</span> <span class="s2">"</span><span class="s">count"</span>
+    <span class="s">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">backtype.storm.testing.TestWordCounter"</span>
+    <span class="s">parallelism</span><span class="pi">:</span> <span class="s">1</span>
 
 <span class="c1">#stream definitions</span>
 <span class="c1"># stream definitions define connections between spouts and bolts.</span>
 <span class="c1"># note that such connections can be cyclical</span>
 <span class="c1"># custom stream groupings are also supported</span>
 
-<span class="l-Scalar-Plain">streams</span><span class="p-Indicator">:</span>
-  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="s">&quot;spout</span><span class="nv"> </span><span class="s">--&gt;</span><span class="nv"> </span><span class="s">split&quot;</span> <span class="c1"># name isn&#39;t used (placeholder for logging, UI, etc.)</span>
-    <span class="l-Scalar-Plain">from</span><span class="p-Indicator">:</span> <span class="s">&quot;sentence-spout&quot;</span>
-    <span class="l-Scalar-Plain">to</span><span class="p-Indicator">:</span> <span class="s">&quot;splitsentence&quot;</span>
-    <span class="l-Scalar-Plain">grouping</span><span class="p-Indicator">:</span>
-      <span class="l-Scalar-Plain">type</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">SHUFFLE</span>
+<span class="s">streams</span><span class="pi">:</span>
+  <span class="pi">-</span> <span class="s">name</span><span class="pi">:</span> <span class="s2">"</span><span class="s">spout</span><span class="nv"> </span><span class="s">--&gt;</span><span class="nv"> </span><span class="s">split"</span> <span class="c1"># name isn't used (placeholder for logging, UI, etc.)</span>
+    <span class="s">from</span><span class="pi">:</span> <span class="s2">"</span><span class="s">sentence-spout"</span>
+    <span class="s">to</span><span class="pi">:</span> <span class="s2">"</span><span class="s">splitsentence"</span>
+    <span class="s">grouping</span><span class="pi">:</span>
+      <span class="s">type</span><span class="pi">:</span> <span class="s">SHUFFLE</span>
 
-  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="s">&quot;split</span><span class="nv"> </span><span class="s">--&gt;</span><span class="nv"> </span><span class="s">count&quot;</span>
-    <span class="l-Scalar-Plain">from</span><span class="p-Indicator">:</span> <span class="s">&quot;splitsentence&quot;</span>
-    <span class="l-Scalar-Plain">to</span><span class="p-Indicator">:</span> <span class="s">&quot;count&quot;</span>
-    <span class="l-Scalar-Plain">grouping</span><span class="p-Indicator">:</span>
-      <span class="l-Scalar-Plain">type</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">FIELDS</span>
-      <span class="l-Scalar-Plain">args</span><span class="p-Indicator">:</span> <span class="p-Indicator">[</span><span class="s">&quot;word&quot;</span><span class="p-Indicator">]</span>
+  <span class="pi">-</span> <span class="s">name</span><span class="pi">:</span> <span class="s2">"</span><span class="s">split</span><span class="nv"> </span><span class="s">--&gt;</span><span class="nv"> </span><span class="s">count"</span>
+    <span class="s">from</span><span class="pi">:</span> <span class="s2">"</span><span class="s">splitsentence"</span>
+    <span class="s">to</span><span class="pi">:</span> <span class="s2">"</span><span class="s">count"</span>
+    <span class="s">grouping</span><span class="pi">:</span>
+      <span class="s">type</span><span class="pi">:</span> <span class="s">FIELDS</span>
+      <span class="s">args</span><span class="pi">:</span> <span class="pi">[</span><span class="s2">"</span><span class="s">word"</span><span class="pi">]</span>
 
-  <span class="p-Indicator">-</span> <span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="s">&quot;count</span><span class="nv"> </span><span class="s">--&gt;</span><span class="nv"> </span><span class="s">log&quot;</span>
-    <span class="l-Scalar-Plain">from</span><span class="p-Indicator">:</span> <span class="s">&quot;count&quot;</span>
-    <span class="l-Scalar-Plain">to</span><span class="p-Indicator">:</span> <span class="s">&quot;log&quot;</span>
-    <span class="l-Scalar-Plain">grouping</span><span class="p-Indicator">:</span>
-      <span class="l-Scalar-Plain">type</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">SHUFFLE</span>
+  <span class="pi">-</span> <span class="s">name</span><span class="pi">:</span> <span class="s2">"</span><span class="s">count</span><span class="nv"> </span><span class="s">--&gt;</span><span class="nv"> </span><span class="s">log"</span>
+    <span class="s">from</span><span class="pi">:</span> <span class="s2">"</span><span class="s">count"</span>
+    <span class="s">to</span><span class="pi">:</span> <span class="s2">"</span><span class="s">log"</span>
+    <span class="s">grouping</span><span class="pi">:</span>
+      <span class="s">type</span><span class="pi">:</span> <span class="s">SHUFFLE</span>
 </code></pre></div>
-<h2 id="micro-batching-(trident)-api-support">Micro-Batching (Trident) API Support</h2>
+<h2 id="micro-batching-trident-api-support">Micro-Batching (Trident) API Support</h2>
 
 <p>Currently, the Flux YAML DSL only supports the Core Storm API, but support for Storm&#39;s micro-batching API is planned.</p>
 
 <p>To use Flux with a Trident topology, define a topology getter method and reference it in your YAML config:</p>
-<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="l-Scalar-Plain">name</span><span class="p-Indicator">:</span> <span class="s">&quot;my-trident-topology&quot;</span>
+<div class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="s">name</span><span class="pi">:</span> <span class="s2">"</span><span class="s">my-trident-topology"</span>
 
-<span class="l-Scalar-Plain">config</span><span class="p-Indicator">:</span>
-  <span class="l-Scalar-Plain">topology.workers</span><span class="p-Indicator">:</span> <span class="l-Scalar-Plain">1</span>
+<span class="s">config</span><span class="pi">:</span>
+  <span class="s">topology.workers</span><span class="pi">:</span> <span class="s">1</span>
 
-<span class="l-Scalar-Plain">topologySource</span><span class="p-Indicator">:</span>
-  <span class="l-Scalar-Plain">className</span><span class="p-Indicator">:</span> <span class="s">&quot;org.apache.storm.flux.test.TridentTopologySource&quot;</span>
-  <span class="c1"># Flux will look for &quot;getTopology&quot;, this will override that.</span>
-  <span class="l-Scalar-Plain">methodName</span><span class="p-Indicator">:</span> <span class="s">&quot;getTopologyWithDifferentMethodName&quot;</span>
+<span class="s">topologySource</span><span class="pi">:</span>
+  <span class="s">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">org.apache.storm.flux.test.TridentTopologySource"</span>
+  <span class="c1"># Flux will look for "getTopology", this will override that.</span>
+  <span class="s">methodName</span><span class="pi">:</span> <span class="s2">"</span><span class="s">getTopologyWithDifferentMethodName"</span>
 </code></pre></div>
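For reference, a topology source class compatible with the YAML above might look like the sketch below. This is hypothetical: the exact method signature Flux expects and the Trident builder calls may differ by version, and the fixed-batch spout is purely illustrative.

```java
package org.apache.storm.flux.test;

import java.util.Map;

import backtype.storm.generated.StormTopology;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;
import storm.trident.TridentTopology;
import storm.trident.testing.FixedBatchSpout;

// Illustrative sketch only. Flux locates this class via the YAML
// `className` and invokes the method named by `methodName` reflectively,
// so the method name itself is arbitrary.
public class TridentTopologySource {

    public StormTopology getTopologyWithDifferentMethodName(Map<String, Object> config) {
        TridentTopology topology = new TridentTopology();
        FixedBatchSpout spout = new FixedBatchSpout(
                new Fields("sentence"), 3,
                new Values("the cow jumped over the moon"),
                new Values("an apple a day keeps the doctor away"));
        topology.newStream("sentence-stream", spout);
        // Flux submits whatever StormTopology this method returns.
        return topology.build();
    }
}
```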
 
 
diff --git a/_site/documentation/getting-help.html b/_site/documentation/getting-help.html
index c15cb44..4cc2276 100644
--- a/_site/documentation/getting-help.html
+++ b/_site/documentation/getting-help.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -106,13 +106,13 @@
 
 <p>You can view the archives of the mailing list <a href="http://mail-archives.apache.org/mod_mbox/storm-dev/">here</a>.</p>
 
-<h4 id="which-list-should-i-send/subscribe-to?">Which list should I send/subscribe to?</h4>
+<h4 id="which-list-should-i-send-subscribe-to">Which list should I send/subscribe to?</h4>
 
 <p>If you are using a pre-built binary distribution of Storm, then chances are you should send questions, comments, Storm-related announcements, etc. to <a href="mailto:user@storm.apache.org">user@storm.apache.org</a>. </p>
 
 <p>If you are building Storm from source, developing new features, or otherwise hacking on Storm source code, then <a href="mailto:dev@storm.apache.org">dev@storm.apache.org</a> is more appropriate. </p>
 
-<h4 id="what-will-happen-with-storm-user@googlegroups.com?">What will happen with <a href="mailto:storm-user@googlegroups.com">storm-user@googlegroups.com</a>?</h4>
+<h4 id="what-will-happen-with-storm-user-googlegroups-com">What will happen with <a href="mailto:storm-user@googlegroups.com">storm-user@googlegroups.com</a>?</h4>
 
 <p>All existing messages will remain archived there, and can be accessed/searched <a href="https://groups.google.com/forum/#!forum/storm-user">here</a>.</p>
 
diff --git a/_site/documentation/storm-eventhubs.html b/_site/documentation/storm-eventhubs.html
index 7aa9493..49cb0aa 100644
--- a/_site/documentation/storm-eventhubs.html
+++ b/_site/documentation/storm-eventhubs.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -93,19 +93,19 @@
 <p>Storm spout and bolt implementation for Microsoft Azure Eventhubs</p>
 
 <h3 id="build">build</h3>
-<div class="highlight"><pre><code class="language-text" data-lang="text">mvn clean package
+<div class="highlight"><pre><code class="language-" data-lang="">mvn clean package
 </code></pre></div>
 <h3 id="run-sample-topology">run sample topology</h3>
 
 <p>To run the sample topology, you need to modify the config.properties file with
 the eventhubs configurations. Here is an example:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">eventhubspout.username = [username: policy name in EventHubs Portal]
+<div class="highlight"><pre><code class="language-" data-lang="">eventhubspout.username = [username: policy name in EventHubs Portal]
 eventhubspout.password = [password: shared access key in EventHubs Portal]
 eventhubspout.namespace = [namespace]
 eventhubspout.entitypath = [entitypath]
 eventhubspout.partitions.count = [partitioncount]
 
-# if not provided, will use storm&#39;s zookeeper settings
+# if not provided, will use storm's zookeeper settings
 # zookeeper.connectionstring=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
 
 eventhubspout.checkpoint.interval = 10
@@ -123,7 +123,7 @@
 If you want to send messages to all partitions, use &quot;-1&quot; as partitionId.</p>
 
 <h3 id="windows-azure-eventhubs">Windows Azure Eventhubs</h3>
-<div class="highlight"><pre><code class="language-text" data-lang="text">http://azure.microsoft.com/en-us/services/event-hubs/
+<div class="highlight"><pre><code class="language-" data-lang="">http://azure.microsoft.com/en-us/services/event-hubs/
 </code></pre></div>
 
 
diff --git a/_site/documentation/storm-hbase.html b/_site/documentation/storm-hbase.html
index 678760d..5c638f8 100644
--- a/_site/documentation/storm-hbase.html
+++ b/_site/documentation/storm-hbase.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -97,9 +97,9 @@
 <p>The main API for interacting with HBase is the <code>org.apache.storm.hbase.bolt.mapper.HBaseMapper</code>
 interface:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">interface</span> <span class="nc">HBaseMapper</span> <span class="kd">extends</span> <span class="n">Serializable</span> <span class="o">{</span>
-    <span class="kt">byte</span><span class="o">[]</span> <span class="nf">rowKey</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">tuple</span><span class="o">);</span>
+    <span class="kt">byte</span><span class="o">[]</span> <span class="n">rowKey</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">tuple</span><span class="o">);</span>
 
-    <span class="n">ColumnList</span> <span class="nf">columns</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">tuple</span><span class="o">);</span>
+    <span class="n">ColumnList</span> <span class="n">columns</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">tuple</span><span class="o">);</span>
 <span class="o">}</span>
 </code></pre></div>
 <p>The <code>rowKey()</code> method is straightforward: given a Storm tuple, return a byte array representing the
@@ -109,23 +109,23 @@
 to add both standard HBase columns as well as HBase counter columns.</p>
 
 <p>To add a standard column, use one of the <code>addColumn()</code> methods:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">ColumnList</span> <span class="n">cols</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">ColumnList</span><span class="o">();</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">ColumnList</span> <span class="n">cols</span> <span class="o">=</span> <span class="k">new</span> <span class="n">ColumnList</span><span class="o">();</span>
 <span class="n">cols</span><span class="o">.</span><span class="na">addColumn</span><span class="o">(</span><span class="k">this</span><span class="o">.</span><span class="na">columnFamily</span><span class="o">,</span> <span class="n">field</span><span class="o">.</span><span class="na">getBytes</span><span class="o">(),</span> <span class="n">toBytes</span><span class="o">(</span><span class="n">tuple</span><span class="o">.</span><span class="na">getValueByField</span><span class="o">(</span><span class="n">field</span><span class="o">)));</span>
 </code></pre></div>
 <p>To add a counter column, use one of the <code>addCounter()</code> methods:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">ColumnList</span> <span class="n">cols</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">ColumnList</span><span class="o">();</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">ColumnList</span> <span class="n">cols</span> <span class="o">=</span> <span class="k">new</span> <span class="n">ColumnList</span><span class="o">();</span>
 <span class="n">cols</span><span class="o">.</span><span class="na">addCounter</span><span class="o">(</span><span class="k">this</span><span class="o">.</span><span class="na">columnFamily</span><span class="o">,</span> <span class="n">field</span><span class="o">.</span><span class="na">getBytes</span><span class="o">(),</span> <span class="n">toLong</span><span class="o">(</span><span class="n">tuple</span><span class="o">.</span><span class="na">getValueByField</span><span class="o">(</span><span class="n">field</span><span class="o">)));</span>
 </code></pre></div>
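Putting the two pieces together, a complete <code>HBaseMapper</code> implementation might look like the following. This is a sketch only: the <code>ColumnList</code> package path and the <code>WordCountMapper</code> class name are assumed here, not taken from the interface shown above.

```java
import backtype.storm.tuple.Tuple;
import org.apache.storm.hbase.bolt.mapper.HBaseMapper;
import org.apache.storm.hbase.common.ColumnList;

// Hypothetical mapper: rows are keyed by the tuple's "word" field and a
// counter column in family "cf" is incremented once per tuple.
public class WordCountMapper implements HBaseMapper {

    private static final byte[] CF = "cf".getBytes();

    @Override
    public byte[] rowKey(Tuple tuple) {
        return tuple.getStringByField("word").getBytes();
    }

    @Override
    public ColumnList columns(Tuple tuple) {
        ColumnList cols = new ColumnList();
        // HBase applies an atomic increment for counter columns on write.
        cols.addCounter(CF, "count".getBytes(), 1L);
        return cols;
    }
}
```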
 <p>When the remote HBase is security enabled, a kerberos keytab and the corresponding principal name need to be
 provided for the storm-hbase connector. Specifically, the Config object passed into the topology should contain
 {(“storm.keytab.file”, “$keytab”), (&quot;storm.kerberos.principal&quot;, “$principal”)}. Example:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">Config</span> <span class="n">config</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">Config</span><span class="o">();</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">Config</span> <span class="n">config</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Config</span><span class="o">();</span>
 <span class="o">...</span>
-<span class="n">config</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">&quot;storm.keytab.file&quot;</span><span class="o">,</span> <span class="s">&quot;$keytab&quot;</span><span class="o">);</span>
-<span class="n">config</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">&quot;storm.kerberos.principal&quot;</span><span class="o">,</span> <span class="s">&quot;$principle&quot;</span><span class="o">);</span>
-<span class="n">StormSubmitter</span><span class="o">.</span><span class="na">submitTopology</span><span class="o">(</span><span class="s">&quot;$topologyName&quot;</span><span class="o">,</span> <span class="n">config</span><span class="o">,</span> <span class="n">builder</span><span class="o">.</span><span class="na">createTopology</span><span class="o">());</span>
+<span class="n">config</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">"storm.keytab.file"</span><span class="o">,</span> <span class="s">"$keytab"</span><span class="o">);</span>
+<span class="n">config</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">"storm.kerberos.principal"</span><span class="o">,</span> <span class="s">"$principal"</span><span class="o">);</span>
+<span class="n">StormSubmitter</span><span class="o">.</span><span class="na">submitTopology</span><span class="o">(</span><span class="s">"$topologyName"</span><span class="o">,</span> <span class="n">config</span><span class="o">,</span> <span class="n">builder</span><span class="o">.</span><span class="na">createTopology</span><span class="o">());</span>
 </code></pre></div>
-<h2 id="working-with-secure-hbase-using-delegation-tokens.">Working with Secure HBASE using delegation tokens.</h2>
+<h2 id="working-with-secure-hbase-using-delegation-tokens">Working with Secure HBASE using delegation tokens.</h2>
 
 <p>If your topology is going to interact with secure HBase, your bolts/states need to be authenticated by HBase. 
 The approach described above requires that all potential worker hosts have &quot;storm.keytab.file&quot; on them. If you have 
@@ -176,16 +176,16 @@
 <li>Adds an HBase counter column for the tuple field <code>count</code>.</li>
 <li>Writes values to the <code>cf</code> column family.</li>
 </ol>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">SimpleHBaseMapper</span> <span class="n">mapper</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">SimpleHBaseMapper</span><span class="o">()</span> 
-        <span class="o">.</span><span class="na">withRowKeyField</span><span class="o">(</span><span class="s">&quot;word&quot;</span><span class="o">)</span>
-        <span class="o">.</span><span class="na">withColumnFields</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;word&quot;</span><span class="o">))</span>
-        <span class="o">.</span><span class="na">withCounterFields</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;count&quot;</span><span class="o">))</span>
-        <span class="o">.</span><span class="na">withColumnFamily</span><span class="o">(</span><span class="s">&quot;cf&quot;</span><span class="o">);</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">SimpleHBaseMapper</span> <span class="n">mapper</span> <span class="o">=</span> <span class="k">new</span> <span class="n">SimpleHBaseMapper</span><span class="o">()</span> 
+        <span class="o">.</span><span class="na">withRowKeyField</span><span class="o">(</span><span class="s">"word"</span><span class="o">)</span>
+        <span class="o">.</span><span class="na">withColumnFields</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"word"</span><span class="o">))</span>
+        <span class="o">.</span><span class="na">withCounterFields</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"count"</span><span class="o">))</span>
+        <span class="o">.</span><span class="na">withColumnFamily</span><span class="o">(</span><span class="s">"cf"</span><span class="o">);</span>
 </code></pre></div>
 <h3 id="hbasebolt">HBaseBolt</h3>
 
 <p>To use the <code>HBaseBolt</code>, construct it with the name of the table to write to, and an <code>HBaseMapper</code> implementation:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">HBaseBolt</span> <span class="n">hbase</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">HBaseBolt</span><span class="o">(</span><span class="s">&quot;WordCount&quot;</span><span class="o">,</span> <span class="n">mapper</span><span class="o">);</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">HBaseBolt</span> <span class="n">hbase</span> <span class="o">=</span> <span class="k">new</span> <span class="n">HBaseBolt</span><span class="o">(</span><span class="s">"WordCount"</span><span class="o">,</span> <span class="n">mapper</span><span class="o">);</span>
 </code></pre></div>
 <p>The <code>HBaseBolt</code> will delegate to the <code>mapper</code> instance to figure out how to persist tuple data to HBase.</p>
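Wiring the bolt into a topology then follows the usual <code>TopologyBuilder</code> pattern. The fragment below is a sketch: <code>WordSpout</code> and <code>WordCounter</code> are assumed stand-ins for your own components, and <code>mapper</code> is the <code>SimpleHBaseMapper</code> built earlier.

```java
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.tuple.Fields;
import org.apache.storm.hbase.bolt.HBaseBolt;

// Sketch: group on "word" so all counts for a given word reach the same
// HBaseBolt task, then let the mapper decide how to persist them.
TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("word-spout", new WordSpout(), 1);
builder.setBolt("count", new WordCounter(), 1).shuffleGrouping("word-spout");

HBaseBolt hbase = new HBaseBolt("WordCount", mapper);
builder.setBolt("hbase", hbase, 1).fieldsGrouping("count", new Fields("word"));
```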
 
@@ -193,8 +193,8 @@
 
 <p>This class allows you to transform the HBase lookup result into storm Values that will be emitted by the <code>HBaseLookupBolt</code>.</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">interface</span> <span class="nc">HBaseValueMapper</span> <span class="kd">extends</span> <span class="n">Serializable</span> <span class="o">{</span>
-    <span class="kd">public</span> <span class="n">List</span><span class="o">&lt;</span><span class="n">Values</span><span class="o">&gt;</span> <span class="nf">toTuples</span><span class="o">(</span><span class="n">Result</span> <span class="n">result</span><span class="o">)</span> <span class="kd">throws</span> <span class="n">Exception</span><span class="o">;</span>
-    <span class="kt">void</span> <span class="nf">declareOutputFields</span><span class="o">(</span><span class="n">OutputFieldsDeclarer</span> <span class="n">declarer</span><span class="o">);</span>
+    <span class="kd">public</span> <span class="n">List</span><span class="o">&lt;</span><span class="n">Values</span><span class="o">&gt;</span> <span class="n">toTuples</span><span class="o">(</span><span class="n">Result</span> <span class="n">result</span><span class="o">)</span> <span class="kd">throws</span> <span class="n">Exception</span><span class="o">;</span>
+    <span class="kt">void</span> <span class="n">declareOutputFields</span><span class="o">(</span><span class="n">OutputFieldsDeclarer</span> <span class="n">declarer</span><span class="o">);</span>
 <span class="o">}</span>
 </code></pre></div>
 <p>The <code>toTuples</code> method takes in an HBase <code>Result</code> instance and returns a List of <code>Values</code> instances. 
@@ -209,8 +209,8 @@
 <p>This class allows you to specify the projection criteria for your HBase Get function. This is an optional parameter
 for the lookup bolt; if you do not specify this instance, all the columns will be returned by <code>HBaseLookupBolt</code>.</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">class</span> <span class="nc">HBaseProjectionCriteria</span> <span class="kd">implements</span> <span class="n">Serializable</span> <span class="o">{</span>
-    <span class="kd">public</span> <span class="n">HBaseProjectionCriteria</span> <span class="nf">addColumnFamily</span><span class="o">(</span><span class="n">String</span> <span class="n">columnFamily</span><span class="o">);</span>
-    <span class="kd">public</span> <span class="n">HBaseProjectionCriteria</span> <span class="nf">addColumn</span><span class="o">(</span><span class="n">ColumnMetaData</span> <span class="n">column</span><span class="o">);</span>
+    <span class="kd">public</span> <span class="n">HBaseProjectionCriteria</span> <span class="n">addColumnFamily</span><span class="o">(</span><span class="n">String</span> <span class="n">columnFamily</span><span class="o">);</span>
+    <span class="kd">public</span> <span class="n">HBaseProjectionCriteria</span> <span class="n">addColumn</span><span class="o">(</span><span class="n">ColumnMetaData</span> <span class="n">column</span><span class="o">);</span>
 </code></pre></div>
 <p><code>addColumnFamily</code> takes in columnFamily. Setting this parameter means all columns for this family will be included
  in the projection.</p>
@@ -223,9 +223,9 @@
 <li>includes count column from column family cf.</li>
 <li>includes all columns from column family cf2.</li>
 </ol>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">HBaseProjectionCriteria</span> <span class="n">projectionCriteria</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">HBaseProjectionCriteria</span><span class="o">()</span>
-    <span class="o">.</span><span class="na">addColumn</span><span class="o">(</span><span class="k">new</span> <span class="n">HBaseProjectionCriteria</span><span class="o">.</span><span class="na">ColumnMetaData</span><span class="o">(</span><span class="s">&quot;cf&quot;</span><span class="o">,</span> <span class="s">&quot;count&quot;</span><span class="o">))</span>
-    <span class="o">.</span><span class="na">addColumnFamily</span><span class="o">(</span><span class="s">&quot;cf2&quot;</span><span class="o">);</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">HBaseProjectionCriteria</span> <span class="n">projectionCriteria</span> <span class="o">=</span> <span class="k">new</span> <span class="n">HBaseProjectionCriteria</span><span class="o">()</span>
+    <span class="o">.</span><span class="na">addColumn</span><span class="o">(</span><span class="k">new</span> <span class="n">HBaseProjectionCriteria</span><span class="o">.</span><span class="na">ColumnMetaData</span><span class="o">(</span><span class="s">"cf"</span><span class="o">,</span> <span class="s">"count"</span><span class="o">))</span>
+    <span class="o">.</span><span class="na">addColumnFamily</span><span class="o">(</span><span class="s">"cf2"</span><span class="o">);</span>
 </code></pre></div>
 <h3 id="hbaselookupbolt">HBaseLookupBolt</h3>
 
@@ -238,7 +238,7 @@
 
 <p>You can look at an example topology LookupWordCount.java under <code>src/test/java</code>.</p>
 
-<h2 id="example:-persistent-word-count">Example: Persistent Word Count</h2>
+<h2 id="example-persistent-word-count">Example: Persistent Word Count</h2>
 
 <p>A runnable example can be found in the <code>src/test/java</code> directory.</p>
 
@@ -248,7 +248,7 @@
 classpath pointing to your HBase cluster.</p>
 
 <p>Use the <code>hbase shell</code> command to create the schema:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">&gt; create &#39;WordCount&#39;, &#39;cf&#39;
+<div class="highlight"><pre><code class="language-" data-lang="">&gt; create 'WordCount', 'cf'
 </code></pre></div>
 <h3 id="execution">Execution</h3>
 
@@ -256,47 +256,47 @@
 
 <p>After (or while) the word count topology is running, run the <code>org.apache.storm.hbase.topology.WordCountClient</code> class
 to view the counter values stored in HBase. You should see something like the following:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">Word: &#39;apple&#39;, Count: 6867
-Word: &#39;orange&#39;, Count: 6645
-Word: &#39;pineapple&#39;, Count: 6954
-Word: &#39;banana&#39;, Count: 6787
-Word: &#39;watermelon&#39;, Count: 6806
+<div class="highlight"><pre><code class="language-" data-lang="">Word: 'apple', Count: 6867
+Word: 'orange', Count: 6645
+Word: 'pineapple', Count: 6954
+Word: 'banana', Count: 6787
+Word: 'watermelon', Count: 6806
 </code></pre></div>
 <p>For reference, the sample topology is listed below:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">class</span> <span class="nc">PersistentWordCount</span> <span class="o">{</span>
-    <span class="kd">private</span> <span class="kd">static</span> <span class="kd">final</span> <span class="n">String</span> <span class="n">WORD_SPOUT</span> <span class="o">=</span> <span class="s">&quot;WORD_SPOUT&quot;</span><span class="o">;</span>
-    <span class="kd">private</span> <span class="kd">static</span> <span class="kd">final</span> <span class="n">String</span> <span class="n">COUNT_BOLT</span> <span class="o">=</span> <span class="s">&quot;COUNT_BOLT&quot;</span><span class="o">;</span>
-    <span class="kd">private</span> <span class="kd">static</span> <span class="kd">final</span> <span class="n">String</span> <span class="n">HBASE_BOLT</span> <span class="o">=</span> <span class="s">&quot;HBASE_BOLT&quot;</span><span class="o">;</span>
+    <span class="kd">private</span> <span class="kd">static</span> <span class="kd">final</span> <span class="n">String</span> <span class="n">WORD_SPOUT</span> <span class="o">=</span> <span class="s">"WORD_SPOUT"</span><span class="o">;</span>
+    <span class="kd">private</span> <span class="kd">static</span> <span class="kd">final</span> <span class="n">String</span> <span class="n">COUNT_BOLT</span> <span class="o">=</span> <span class="s">"COUNT_BOLT"</span><span class="o">;</span>
+    <span class="kd">private</span> <span class="kd">static</span> <span class="kd">final</span> <span class="n">String</span> <span class="n">HBASE_BOLT</span> <span class="o">=</span> <span class="s">"HBASE_BOLT"</span><span class="o">;</span>
 
 
-    <span class="kd">public</span> <span class="kd">static</span> <span class="kt">void</span> <span class="nf">main</span><span class="o">(</span><span class="n">String</span><span class="o">[]</span> <span class="n">args</span><span class="o">)</span> <span class="kd">throws</span> <span class="n">Exception</span> <span class="o">{</span>
-        <span class="n">Config</span> <span class="n">config</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">Config</span><span class="o">();</span>
+    <span class="kd">public</span> <span class="kd">static</span> <span class="kt">void</span> <span class="n">main</span><span class="o">(</span><span class="n">String</span><span class="o">[]</span> <span class="n">args</span><span class="o">)</span> <span class="kd">throws</span> <span class="n">Exception</span> <span class="o">{</span>
+        <span class="n">Config</span> <span class="n">config</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Config</span><span class="o">();</span>
 
-        <span class="n">WordSpout</span> <span class="n">spout</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">WordSpout</span><span class="o">();</span>
-        <span class="n">WordCounter</span> <span class="n">bolt</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">WordCounter</span><span class="o">();</span>
+        <span class="n">WordSpout</span> <span class="n">spout</span> <span class="o">=</span> <span class="k">new</span> <span class="n">WordSpout</span><span class="o">();</span>
+        <span class="n">WordCounter</span> <span class="n">bolt</span> <span class="o">=</span> <span class="k">new</span> <span class="n">WordCounter</span><span class="o">();</span>
 
-        <span class="n">SimpleHBaseMapper</span> <span class="n">mapper</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">SimpleHBaseMapper</span><span class="o">()</span>
-                <span class="o">.</span><span class="na">withRowKeyField</span><span class="o">(</span><span class="s">&quot;word&quot;</span><span class="o">)</span>
-                <span class="o">.</span><span class="na">withColumnFields</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;word&quot;</span><span class="o">))</span>
-                <span class="o">.</span><span class="na">withCounterFields</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;count&quot;</span><span class="o">))</span>
-                <span class="o">.</span><span class="na">withColumnFamily</span><span class="o">(</span><span class="s">&quot;cf&quot;</span><span class="o">);</span>
+        <span class="n">SimpleHBaseMapper</span> <span class="n">mapper</span> <span class="o">=</span> <span class="k">new</span> <span class="n">SimpleHBaseMapper</span><span class="o">()</span>
+                <span class="o">.</span><span class="na">withRowKeyField</span><span class="o">(</span><span class="s">"word"</span><span class="o">)</span>
+                <span class="o">.</span><span class="na">withColumnFields</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"word"</span><span class="o">))</span>
+                <span class="o">.</span><span class="na">withCounterFields</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"count"</span><span class="o">))</span>
+                <span class="o">.</span><span class="na">withColumnFamily</span><span class="o">(</span><span class="s">"cf"</span><span class="o">);</span>
 
-        <span class="n">HBaseBolt</span> <span class="n">hbase</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">HBaseBolt</span><span class="o">(</span><span class="s">&quot;WordCount&quot;</span><span class="o">,</span> <span class="n">mapper</span><span class="o">);</span>
+        <span class="n">HBaseBolt</span> <span class="n">hbase</span> <span class="o">=</span> <span class="k">new</span> <span class="n">HBaseBolt</span><span class="o">(</span><span class="s">"WordCount"</span><span class="o">,</span> <span class="n">mapper</span><span class="o">);</span>
 
 
         <span class="c1">// wordSpout ==&gt; countBolt ==&gt; HBaseBolt</span>
-        <span class="n">TopologyBuilder</span> <span class="n">builder</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">TopologyBuilder</span><span class="o">();</span>
+        <span class="n">TopologyBuilder</span> <span class="n">builder</span> <span class="o">=</span> <span class="k">new</span> <span class="n">TopologyBuilder</span><span class="o">();</span>
 
         <span class="n">builder</span><span class="o">.</span><span class="na">setSpout</span><span class="o">(</span><span class="n">WORD_SPOUT</span><span class="o">,</span> <span class="n">spout</span><span class="o">,</span> <span class="mi">1</span><span class="o">);</span>
         <span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="n">COUNT_BOLT</span><span class="o">,</span> <span class="n">bolt</span><span class="o">,</span> <span class="mi">1</span><span class="o">).</span><span class="na">shuffleGrouping</span><span class="o">(</span><span class="n">WORD_SPOUT</span><span class="o">);</span>
-        <span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="n">HBASE_BOLT</span><span class="o">,</span> <span class="n">hbase</span><span class="o">,</span> <span class="mi">1</span><span class="o">).</span><span class="na">fieldsGrouping</span><span class="o">(</span><span class="n">COUNT_BOLT</span><span class="o">,</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;word&quot;</span><span class="o">));</span>
+        <span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="n">HBASE_BOLT</span><span class="o">,</span> <span class="n">hbase</span><span class="o">,</span> <span class="mi">1</span><span class="o">).</span><span class="na">fieldsGrouping</span><span class="o">(</span><span class="n">COUNT_BOLT</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"word"</span><span class="o">));</span>
 
 
         <span class="k">if</span> <span class="o">(</span><span class="n">args</span><span class="o">.</span><span class="na">length</span> <span class="o">==</span> <span class="mi">0</span><span class="o">)</span> <span class="o">{</span>
-            <span class="n">LocalCluster</span> <span class="n">cluster</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">LocalCluster</span><span class="o">();</span>
-            <span class="n">cluster</span><span class="o">.</span><span class="na">submitTopology</span><span class="o">(</span><span class="s">&quot;test&quot;</span><span class="o">,</span> <span class="n">config</span><span class="o">,</span> <span class="n">builder</span><span class="o">.</span><span class="na">createTopology</span><span class="o">());</span>
+            <span class="n">LocalCluster</span> <span class="n">cluster</span> <span class="o">=</span> <span class="k">new</span> <span class="n">LocalCluster</span><span class="o">();</span>
+            <span class="n">cluster</span><span class="o">.</span><span class="na">submitTopology</span><span class="o">(</span><span class="s">"test"</span><span class="o">,</span> <span class="n">config</span><span class="o">,</span> <span class="n">builder</span><span class="o">.</span><span class="na">createTopology</span><span class="o">());</span>
             <span class="n">Thread</span><span class="o">.</span><span class="na">sleep</span><span class="o">(</span><span class="mi">10000</span><span class="o">);</span>
-            <span class="n">cluster</span><span class="o">.</span><span class="na">killTopology</span><span class="o">(</span><span class="s">&quot;test&quot;</span><span class="o">);</span>
+            <span class="n">cluster</span><span class="o">.</span><span class="na">killTopology</span><span class="o">(</span><span class="s">"test"</span><span class="o">);</span>
             <span class="n">cluster</span><span class="o">.</span><span class="na">shutdown</span><span class="o">();</span>
             <span class="n">System</span><span class="o">.</span><span class="na">exit</span><span class="o">(</span><span class="mi">0</span><span class="o">);</span>
         <span class="o">}</span> <span class="k">else</span> <span class="o">{</span>
diff --git a/_site/documentation/storm-hdfs.html b/_site/documentation/storm-hdfs.html
index 5895997..2372d3d 100644
--- a/_site/documentation/storm-hdfs.html
+++ b/_site/documentation/storm-hdfs.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -97,21 +97,21 @@
 <p>The following example will write pipe(&quot;|&quot;)-delimited files to the HDFS path hdfs://localhost:54310/foo. After every
 1,000 tuples it will sync the filesystem, making that data visible to other HDFS clients. It will rotate files when they
 reach 5 megabytes in size.</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="c1">// use &quot;|&quot; instead of &quot;,&quot; for field delimiter</span>
-<span class="n">RecordFormat</span> <span class="n">format</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">DelimitedRecordFormat</span><span class="o">()</span>
-        <span class="o">.</span><span class="na">withFieldDelimiter</span><span class="o">(</span><span class="s">&quot;|&quot;</span><span class="o">);</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="c1">// use "|" instead of "," for field delimiter</span>
+<span class="n">RecordFormat</span> <span class="n">format</span> <span class="o">=</span> <span class="k">new</span> <span class="n">DelimitedRecordFormat</span><span class="o">()</span>
+        <span class="o">.</span><span class="na">withFieldDelimiter</span><span class="o">(</span><span class="s">"|"</span><span class="o">);</span>
 
 <span class="c1">// sync the filesystem after every 1k tuples</span>
-<span class="n">SyncPolicy</span> <span class="n">syncPolicy</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">CountSyncPolicy</span><span class="o">(</span><span class="mi">1000</span><span class="o">);</span>
+<span class="n">SyncPolicy</span> <span class="n">syncPolicy</span> <span class="o">=</span> <span class="k">new</span> <span class="n">CountSyncPolicy</span><span class="o">(</span><span class="mi">1000</span><span class="o">);</span>
 
 <span class="c1">// rotate files when they reach 5MB</span>
-<span class="n">FileRotationPolicy</span> <span class="n">rotationPolicy</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">FileSizeRotationPolicy</span><span class="o">(</span><span class="mf">5.0f</span><span class="o">,</span> <span class="n">Units</span><span class="o">.</span><span class="na">MB</span><span class="o">);</span>
+<span class="n">FileRotationPolicy</span> <span class="n">rotationPolicy</span> <span class="o">=</span> <span class="k">new</span> <span class="n">FileSizeRotationPolicy</span><span class="o">(</span><span class="mf">5.0f</span><span class="o">,</span> <span class="n">Units</span><span class="o">.</span><span class="na">MB</span><span class="o">);</span>
 
-<span class="n">FileNameFormat</span> <span class="n">fileNameFormat</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">DefaultFileNameFormat</span><span class="o">()</span>
-        <span class="o">.</span><span class="na">withPath</span><span class="o">(</span><span class="s">&quot;/foo/&quot;</span><span class="o">);</span>
+<span class="n">FileNameFormat</span> <span class="n">fileNameFormat</span> <span class="o">=</span> <span class="k">new</span> <span class="n">DefaultFileNameFormat</span><span class="o">()</span>
+        <span class="o">.</span><span class="na">withPath</span><span class="o">(</span><span class="s">"/foo/"</span><span class="o">);</span>
 
-<span class="n">HdfsBolt</span> <span class="n">bolt</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">HdfsBolt</span><span class="o">()</span>
-        <span class="o">.</span><span class="na">withFsUrl</span><span class="o">(</span><span class="s">&quot;hdfs://localhost:54310&quot;</span><span class="o">)</span>
+<span class="n">HdfsBolt</span> <span class="n">bolt</span> <span class="o">=</span> <span class="k">new</span> <span class="n">HdfsBolt</span><span class="o">()</span>
+        <span class="o">.</span><span class="na">withFsUrl</span><span class="o">(</span><span class="s">"hdfs://localhost:54310"</span><span class="o">)</span>
         <span class="o">.</span><span class="na">withFileNameFormat</span><span class="o">(</span><span class="n">fileNameFormat</span><span class="o">)</span>
         <span class="o">.</span><span class="na">withRecordFormat</span><span class="o">(</span><span class="n">format</span><span class="o">)</span>
         <span class="o">.</span><span class="na">withRotationPolicy</span><span class="o">(</span><span class="n">rotationPolicy</span><span class="o">)</span>
@@ -126,7 +126,7 @@
 resolution.</p>
 
 <p>If you experience errors such as the following:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">java.lang.RuntimeException: Error preparing HdfsBolt: No FileSystem for scheme: hdfs
+<div class="highlight"><pre><code class="language-" data-lang="">java.lang.RuntimeException: Error preparing HdfsBolt: No FileSystem for scheme: hdfs
 </code></pre></div>
 <p>it&#39;s an indication that your topology jar file isn&#39;t packaged properly.</p>
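To see whether the shaded jar was merged correctly, you can inspect its `META-INF/services` entries directly. This is a hedged diagnostic sketch: the jar path and name below are examples, not from the document. The "No FileSystem for scheme: hdfs" error typically means the `org.apache.hadoop.fs.FileSystem` services file from hadoop-hdfs was overwritten during shading, which is exactly what the `ServicesResourceTransformer` discussed below prevents.

```shell
# List the service-loader descriptors bundled into the shaded jar
# (jar path is a hypothetical example).
jar tf target/mytopology-1.0.jar | grep 'META-INF/services'

# Print the merged FileSystem descriptor; it should mention
# DistributedFileSystem. If it only lists local filesystems,
# the hdfs:// scheme cannot be resolved at runtime.
unzip -p target/mytopology-1.0.jar \
    META-INF/services/org.apache.hadoop.fs.FileSystem
```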
 
@@ -148,9 +148,9 @@
             <span class="nt">&lt;configuration&gt;</span>
                 <span class="nt">&lt;transformers&gt;</span>
                     <span class="nt">&lt;transformer</span>
-                            <span class="na">implementation=</span><span class="s">&quot;org.apache.maven.plugins.shade.resource.ServicesResourceTransformer&quot;</span><span class="nt">/&gt;</span>
+                            <span class="na">implementation=</span><span class="s">"org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"</span><span class="nt">/&gt;</span>
                     <span class="nt">&lt;transformer</span>
-                            <span class="na">implementation=</span><span class="s">&quot;org.apache.maven.plugins.shade.resource.ManifestResourceTransformer&quot;</span><span class="nt">&gt;</span>
+                            <span class="na">implementation=</span><span class="s">"org.apache.maven.plugins.shade.resource.ManifestResourceTransformer"</span><span class="nt">&gt;</span>
                         <span class="nt">&lt;mainClass&gt;&lt;/mainClass&gt;</span>
                     <span class="nt">&lt;/transformer&gt;</span>
                 <span class="nt">&lt;/transformers&gt;</span>
@@ -158,6 +158,7 @@
         <span class="nt">&lt;/execution&gt;</span>
     <span class="nt">&lt;/executions&gt;</span>
 <span class="nt">&lt;/plugin&gt;</span>
+
 </code></pre></div>
 <h3 id="specifying-a-hadoop-version">Specifying a Hadoop Version</h3>
 
@@ -189,7 +190,7 @@
 and add the dependencies for your preferred version in your pom.</p>
 
 <p>Hadoop client version incompatibilities can manifest as errors like:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">com.google.protobuf.InvalidProtocolBufferException: Protocol message contained an invalid tag (zero)
+<div class="highlight"><pre><code class="language-" data-lang="">com.google.protobuf.InvalidProtocolBufferException: Protocol message contained an invalid tag (zero)
 </code></pre></div>
 <h2 id="customization">Customization</h2>
 
@@ -198,7 +199,7 @@
 <p>Record format can be controlled by providing an implementation of the <code>org.apache.storm.hdfs.format.RecordFormat</code>
 interface:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">interface</span> <span class="nc">RecordFormat</span> <span class="kd">extends</span> <span class="n">Serializable</span> <span class="o">{</span>
-    <span class="kt">byte</span><span class="o">[]</span> <span class="nf">format</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">tuple</span><span class="o">);</span>
+    <span class="kt">byte</span><span class="o">[]</span> <span class="n">format</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">tuple</span><span class="o">);</span>
 <span class="o">}</span>
 </code></pre></div>
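As a sketch of implementing the interface above, the following self-contained example joins tuple values with a tab and terminates each record with a newline, similar in spirit to `DelimitedRecordFormat`. The `Tuple` and `RecordFormat` stand-in interfaces here exist only so the sketch compiles on its own; in a real topology you would implement `org.apache.storm.hdfs.format.RecordFormat` against `org.apache.storm.tuple.Tuple`.

```java
import java.io.Serializable;
import java.nio.charset.StandardCharsets;
import java.util.List;

// Stand-ins for Storm's types so this sketch is self-contained
// (hypothetical; the real interfaces live in storm-core / storm-hdfs).
interface Tuple { List<Object> getValues(); }
interface RecordFormat extends Serializable { byte[] format(Tuple tuple); }

// Hypothetical tab-separated format: one record per tuple,
// values joined by '\t', record terminated by '\n'.
class TsvRecordFormat implements RecordFormat {
    @Override
    public byte[] format(Tuple tuple) {
        StringBuilder sb = new StringBuilder();
        List<Object> values = tuple.getValues();
        for (int i = 0; i < values.size(); i++) {
            if (i > 0) sb.append('\t');
            sb.append(values.get(i));
        }
        sb.append('\n');
        return sb.toString().getBytes(StandardCharsets.UTF_8);
    }
}

public class RecordFormatDemo {
    public static void main(String[] args) {
        Tuple t = () -> java.util.Arrays.asList("apple", 6867);
        System.out.print(new String(
                new TsvRecordFormat().format(t), StandardCharsets.UTF_8));
    }
}
```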
 <p>The provided <code>org.apache.storm.hdfs.format.DelimitedRecordFormat</code> is capable of producing formats such as CSV and
@@ -209,16 +210,16 @@
 <p>File naming can be controlled by providing an implementation of the <code>org.apache.storm.hdfs.format.FileNameFormat</code>
 interface:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">interface</span> <span class="nc">FileNameFormat</span> <span class="kd">extends</span> <span class="n">Serializable</span> <span class="o">{</span>
-    <span class="kt">void</span> <span class="nf">prepare</span><span class="o">(</span><span class="n">Map</span> <span class="n">conf</span><span class="o">,</span> <span class="n">TopologyContext</span> <span class="n">topologyContext</span><span class="o">);</span>
-    <span class="n">String</span> <span class="nf">getName</span><span class="o">(</span><span class="kt">long</span> <span class="n">rotation</span><span class="o">,</span> <span class="kt">long</span> <span class="n">timeStamp</span><span class="o">);</span>
-    <span class="n">String</span> <span class="nf">getPath</span><span class="o">();</span>
+    <span class="kt">void</span> <span class="n">prepare</span><span class="o">(</span><span class="n">Map</span> <span class="n">conf</span><span class="o">,</span> <span class="n">TopologyContext</span> <span class="n">topologyContext</span><span class="o">);</span>
+    <span class="n">String</span> <span class="n">getName</span><span class="o">(</span><span class="kt">long</span> <span class="n">rotation</span><span class="o">,</span> <span class="kt">long</span> <span class="n">timeStamp</span><span class="o">);</span>
+    <span class="n">String</span> <span class="n">getPath</span><span class="o">();</span>
 <span class="o">}</span>
 </code></pre></div>
 <p>The provided <code>org.apache.storm.hdfs.format.DefaultFileNameFormat</code> will create file names with the following format:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text"> {prefix}{componentId}-{taskId}-{rotationNum}-{timestamp}{extension}
-</code></pre></div>
+<div class="highlight"><pre><code class="language-" data-lang=""><span class="w"> </span><span class="p">{</span><span class="err">prefix</span><span class="p">}{</span><span class="err">componentId</span><span class="p">}</span><span class="err">-</span><span class="p">{</span><span class="err">taskId</span><span class="p">}</span><span class="err">-</span><span class="p">{</span><span class="err">rotationNum</span><span class="p">}</span><span class="err">-</span><span class="p">{</span><span class="err">timestamp</span><span class="p">}{</span><span class="err">extension</span><span class="p">}</span><span class="w">
+</span></code></pre></div>
 <p>For example:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text"> MyBolt-5-7-1390579837830.txt
+<div class="highlight"><pre><code class="language-" data-lang=""> MyBolt-5-7-1390579837830.txt
 </code></pre></div>
 <p>By default, prefix is empty and extension is &quot;.txt&quot;.</p>
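Both defaults can be overridden when constructing the format. This fragment assumes the builder-style `withPrefix` and `withExtension` setters on `DefaultFileNameFormat`; the prefix and extension values shown are illustrative.

```java
// Sketch: overriding the default prefix ("") and extension (".txt").
FileNameFormat fileNameFormat = new DefaultFileNameFormat()
        .withPath("/foo/")
        .withPrefix("wordcount-")   // prepended to every file name
        .withExtension(".seq");     // appended after the timestamp
```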
 
@@ -227,8 +228,8 @@
 <p>Sync policies allow you to control when buffered data is flushed to the underlying filesystem (thus making it available
 to clients reading the data) by implementing the <code>org.apache.storm.hdfs.sync.SyncPolicy</code> interface:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">interface</span> <span class="nc">SyncPolicy</span> <span class="kd">extends</span> <span class="n">Serializable</span> <span class="o">{</span>
-    <span class="kt">boolean</span> <span class="nf">mark</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">tuple</span><span class="o">,</span> <span class="kt">long</span> <span class="n">offset</span><span class="o">);</span>
-    <span class="kt">void</span> <span class="nf">reset</span><span class="o">();</span>
+    <span class="kt">boolean</span> <span class="n">mark</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">tuple</span><span class="o">,</span> <span class="kt">long</span> <span class="n">offset</span><span class="o">);</span>
+    <span class="kt">void</span> <span class="n">reset</span><span class="o">();</span>
 <span class="o">}</span>
 </code></pre></div>
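Beyond the provided count-based policy, a custom policy might key off elapsed time instead of tuple count. The following is a minimal self-contained sketch; the `SyncPolicy` stand-in interface (with `Object` in place of Storm's `Tuple`) exists only so the example runs on its own, and `TimedSyncPolicy` is a hypothetical name, not part of storm-hdfs.

```java
import java.io.Serializable;

// Stand-in for org.apache.storm.hdfs.sync.SyncPolicy so this compiles
// without Storm on the classpath (Object replaces Storm's Tuple).
interface SyncPolicy extends Serializable {
    boolean mark(Object tuple, long offset);
    void reset();
}

// Hypothetical policy: request a sync whenever at least intervalMs
// has elapsed since the last sync, regardless of tuple count.
class TimedSyncPolicy implements SyncPolicy {
    private final long intervalMs;
    private long lastSync = System.currentTimeMillis();

    TimedSyncPolicy(long intervalMs) { this.intervalMs = intervalMs; }

    @Override
    public boolean mark(Object tuple, long offset) {
        return System.currentTimeMillis() - lastSync >= intervalMs;
    }

    @Override
    public void reset() { lastSync = System.currentTimeMillis(); }
}

public class SyncPolicyDemo {
    public static void main(String[] args) throws InterruptedException {
        SyncPolicy policy = new TimedSyncPolicy(50);
        System.out.println(policy.mark(null, 0)); // interval not yet elapsed
        Thread.sleep(60);
        System.out.println(policy.mark(null, 0)); // >= 50ms elapsed
        policy.reset();
        System.out.println(policy.mark(null, 0)); // timer restarted by reset
    }
}
```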
 <p>The <code>HdfsBolt</code> will call the <code>mark()</code> method for every tuple it processes. Returning <code>true</code> will trigger the <code>HdfsBolt</code>
@@ -242,13 +243,13 @@
 <p>Similar to sync policies, file rotation policies allow you to control when data files are rotated by providing a
 <code>org.apache.storm.hdfs.rotation.FileRotation</code> interface:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">interface</span> <span class="nc">FileRotationPolicy</span> <span class="kd">extends</span> <span class="n">Serializable</span> <span class="o">{</span>
-    <span class="kt">boolean</span> <span class="nf">mark</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">tuple</span><span class="o">,</span> <span class="kt">long</span> <span class="n">offset</span><span class="o">);</span>
-    <span class="kt">void</span> <span class="nf">reset</span><span class="o">();</span>
+    <span class="kt">boolean</span> <span class="n">mark</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">tuple</span><span class="o">,</span> <span class="kt">long</span> <span class="n">offset</span><span class="o">);</span>
+    <span class="kt">void</span> <span class="n">reset</span><span class="o">();</span>
 <span class="o">}</span>
 </code></pre></div>
 <p>The <code>org.apache.storm.hdfs.rotation.FileSizeRotationPolicy</code> implementation allows you to trigger file rotation when
 data files reach a specific file size:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">FileRotationPolicy</span> <span class="n">rotationPolicy</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">FileSizeRotationPolicy</span><span class="o">(</span><span class="mf">5.0f</span><span class="o">,</span> <span class="n">Units</span><span class="o">.</span><span class="na">MB</span><span class="o">);</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">FileRotationPolicy</span> <span class="n">rotationPolicy</span> <span class="o">=</span> <span class="k">new</span> <span class="n">FileSizeRotationPolicy</span><span class="o">(</span><span class="mf">5.0f</span><span class="o">,</span> <span class="n">Units</span><span class="o">.</span><span class="na">MB</span><span class="o">);</span>
 </code></pre></div>
 <h3 id="file-rotation-actions">File Rotation Actions</h3>
 
@@ -256,7 +257,7 @@
 <code>RotationAction</code>s provide a hook for performing some action right after a file is rotated, for
 example moving the file to a different location or renaming it.</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">interface</span> <span class="nc">RotationAction</span> <span class="kd">extends</span> <span class="n">Serializable</span> <span class="o">{</span>
-    <span class="kt">void</span> <span class="nf">execute</span><span class="o">(</span><span class="n">FileSystem</span> <span class="n">fileSystem</span><span class="o">,</span> <span class="n">Path</span> <span class="n">filePath</span><span class="o">)</span> <span class="kd">throws</span> <span class="n">IOException</span><span class="o">;</span>
+    <span class="kt">void</span> <span class="n">execute</span><span class="o">(</span><span class="n">FileSystem</span> <span class="n">fileSystem</span><span class="o">,</span> <span class="n">Path</span> <span class="n">filePath</span><span class="o">)</span> <span class="kd">throws</span> <span class="n">IOException</span><span class="o">;</span>
 <span class="o">}</span>
 </code></pre></div>
 <p>Storm-HDFS includes a simple action that will move a file after rotation:</p>
@@ -265,15 +266,15 @@
 
     <span class="kd">private</span> <span class="n">String</span> <span class="n">destination</span><span class="o">;</span>
 
-    <span class="kd">public</span> <span class="n">MoveFileAction</span> <span class="nf">withDestination</span><span class="o">(</span><span class="n">String</span> <span class="n">destDir</span><span class="o">){</span>
+    <span class="kd">public</span> <span class="n">MoveFileAction</span> <span class="n">withDestination</span><span class="o">(</span><span class="n">String</span> <span class="n">destDir</span><span class="o">){</span>
         <span class="n">destination</span> <span class="o">=</span> <span class="n">destDir</span><span class="o">;</span>
         <span class="k">return</span> <span class="k">this</span><span class="o">;</span>
     <span class="o">}</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">execute</span><span class="o">(</span><span class="n">FileSystem</span> <span class="n">fileSystem</span><span class="o">,</span> <span class="n">Path</span> <span class="n">filePath</span><span class="o">)</span> <span class="kd">throws</span> <span class="n">IOException</span> <span class="o">{</span>
-        <span class="n">Path</span> <span class="n">destPath</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">Path</span><span class="o">(</span><span class="n">destination</span><span class="o">,</span> <span class="n">filePath</span><span class="o">.</span><span class="na">getName</span><span class="o">());</span>
-        <span class="n">LOG</span><span class="o">.</span><span class="na">info</span><span class="o">(</span><span class="s">&quot;Moving file {} to {}&quot;</span><span class="o">,</span> <span class="n">filePath</span><span class="o">,</span> <span class="n">destPath</span><span class="o">);</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">execute</span><span class="o">(</span><span class="n">FileSystem</span> <span class="n">fileSystem</span><span class="o">,</span> <span class="n">Path</span> <span class="n">filePath</span><span class="o">)</span> <span class="kd">throws</span> <span class="n">IOException</span> <span class="o">{</span>
+        <span class="n">Path</span> <span class="n">destPath</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Path</span><span class="o">(</span><span class="n">destination</span><span class="o">,</span> <span class="n">filePath</span><span class="o">.</span><span class="na">getName</span><span class="o">());</span>
+        <span class="n">LOG</span><span class="o">.</span><span class="na">info</span><span class="o">(</span><span class="s">"Moving file {} to {}"</span><span class="o">,</span> <span class="n">filePath</span><span class="o">,</span> <span class="n">destPath</span><span class="o">);</span>
         <span class="kt">boolean</span> <span class="n">success</span> <span class="o">=</span> <span class="n">fileSystem</span><span class="o">.</span><span class="na">rename</span><span class="o">(</span><span class="n">filePath</span><span class="o">,</span> <span class="n">destPath</span><span class="o">);</span>
         <span class="k">return</span><span class="o">;</span>
     <span class="o">}</span>
@@ -282,80 +283,80 @@
 <p>If you are using Trident and sequence files you can do something like this:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java">        <span class="n">HdfsState</span><span class="o">.</span><span class="na">Options</span> <span class="n">seqOpts</span> <span class="o">=</span> <span class="k">new</span> <span class="n">HdfsState</span><span class="o">.</span><span class="na">SequenceFileOptions</span><span class="o">()</span>
                 <span class="o">.</span><span class="na">withFileNameFormat</span><span class="o">(</span><span class="n">fileNameFormat</span><span class="o">)</span>
-                <span class="o">.</span><span class="na">withSequenceFormat</span><span class="o">(</span><span class="k">new</span> <span class="nf">DefaultSequenceFormat</span><span class="o">(</span><span class="s">&quot;key&quot;</span><span class="o">,</span> <span class="s">&quot;data&quot;</span><span class="o">))</span>
+                <span class="o">.</span><span class="na">withSequenceFormat</span><span class="o">(</span><span class="k">new</span> <span class="n">DefaultSequenceFormat</span><span class="o">(</span><span class="s">"key"</span><span class="o">,</span> <span class="s">"data"</span><span class="o">))</span>
                 <span class="o">.</span><span class="na">withRotationPolicy</span><span class="o">(</span><span class="n">rotationPolicy</span><span class="o">)</span>
-                <span class="o">.</span><span class="na">withFsUrl</span><span class="o">(</span><span class="s">&quot;hdfs://localhost:54310&quot;</span><span class="o">)</span>
-                <span class="o">.</span><span class="na">addRotationAction</span><span class="o">(</span><span class="k">new</span> <span class="nf">MoveFileAction</span><span class="o">().</span><span class="na">withDestination</span><span class="o">(</span><span class="s">&quot;/dest2/&quot;</span><span class="o">));</span>
+                <span class="o">.</span><span class="na">withFsUrl</span><span class="o">(</span><span class="s">"hdfs://localhost:54310"</span><span class="o">)</span>
+                <span class="o">.</span><span class="na">addRotationAction</span><span class="o">(</span><span class="k">new</span> <span class="n">MoveFileAction</span><span class="o">().</span><span class="na">withDestination</span><span class="o">(</span><span class="s">"/dest2/"</span><span class="o">));</span>
 </code></pre></div>
 <h2 id="support-for-hdfs-sequence-files">Support for HDFS Sequence Files</h2>
 
 <p>The <code>org.apache.storm.hdfs.bolt.SequenceFileBolt</code> class allows you to write Storm data to HDFS sequence files:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java">        <span class="c1">// sync the filesystem after every 1k tuples</span>
-        <span class="n">SyncPolicy</span> <span class="n">syncPolicy</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">CountSyncPolicy</span><span class="o">(</span><span class="mi">1000</span><span class="o">);</span>
+        <span class="n">SyncPolicy</span> <span class="n">syncPolicy</span> <span class="o">=</span> <span class="k">new</span> <span class="n">CountSyncPolicy</span><span class="o">(</span><span class="mi">1000</span><span class="o">);</span>
 
         <span class="c1">// rotate files when they reach 5MB</span>
-        <span class="n">FileRotationPolicy</span> <span class="n">rotationPolicy</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">FileSizeRotationPolicy</span><span class="o">(</span><span class="mf">5.0f</span><span class="o">,</span> <span class="n">Units</span><span class="o">.</span><span class="na">MB</span><span class="o">);</span>
+        <span class="n">FileRotationPolicy</span> <span class="n">rotationPolicy</span> <span class="o">=</span> <span class="k">new</span> <span class="n">FileSizeRotationPolicy</span><span class="o">(</span><span class="mf">5.0f</span><span class="o">,</span> <span class="n">Units</span><span class="o">.</span><span class="na">MB</span><span class="o">);</span>
 
-        <span class="n">FileNameFormat</span> <span class="n">fileNameFormat</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">DefaultFileNameFormat</span><span class="o">()</span>
-                <span class="o">.</span><span class="na">withExtension</span><span class="o">(</span><span class="s">&quot;.seq&quot;</span><span class="o">)</span>
-                <span class="o">.</span><span class="na">withPath</span><span class="o">(</span><span class="s">&quot;/data/&quot;</span><span class="o">);</span>
+        <span class="n">FileNameFormat</span> <span class="n">fileNameFormat</span> <span class="o">=</span> <span class="k">new</span> <span class="n">DefaultFileNameFormat</span><span class="o">()</span>
+                <span class="o">.</span><span class="na">withExtension</span><span class="o">(</span><span class="s">".seq"</span><span class="o">)</span>
+                <span class="o">.</span><span class="na">withPath</span><span class="o">(</span><span class="s">"/data/"</span><span class="o">);</span>
 
         <span class="c1">// create sequence format instance.</span>
-        <span class="n">DefaultSequenceFormat</span> <span class="n">format</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">DefaultSequenceFormat</span><span class="o">(</span><span class="s">&quot;timestamp&quot;</span><span class="o">,</span> <span class="s">&quot;sentence&quot;</span><span class="o">);</span>
+        <span class="n">DefaultSequenceFormat</span> <span class="n">format</span> <span class="o">=</span> <span class="k">new</span> <span class="n">DefaultSequenceFormat</span><span class="o">(</span><span class="s">"timestamp"</span><span class="o">,</span> <span class="s">"sentence"</span><span class="o">);</span>
 
-        <span class="n">SequenceFileBolt</span> <span class="n">bolt</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">SequenceFileBolt</span><span class="o">()</span>
-                <span class="o">.</span><span class="na">withFsUrl</span><span class="o">(</span><span class="s">&quot;hdfs://localhost:54310&quot;</span><span class="o">)</span>
+        <span class="n">SequenceFileBolt</span> <span class="n">bolt</span> <span class="o">=</span> <span class="k">new</span> <span class="n">SequenceFileBolt</span><span class="o">()</span>
+                <span class="o">.</span><span class="na">withFsUrl</span><span class="o">(</span><span class="s">"hdfs://localhost:54310"</span><span class="o">)</span>
                 <span class="o">.</span><span class="na">withFileNameFormat</span><span class="o">(</span><span class="n">fileNameFormat</span><span class="o">)</span>
                 <span class="o">.</span><span class="na">withSequenceFormat</span><span class="o">(</span><span class="n">format</span><span class="o">)</span>
                 <span class="o">.</span><span class="na">withRotationPolicy</span><span class="o">(</span><span class="n">rotationPolicy</span><span class="o">)</span>
                 <span class="o">.</span><span class="na">withSyncPolicy</span><span class="o">(</span><span class="n">syncPolicy</span><span class="o">)</span>
                 <span class="o">.</span><span class="na">withCompressionType</span><span class="o">(</span><span class="n">SequenceFile</span><span class="o">.</span><span class="na">CompressionType</span><span class="o">.</span><span class="na">RECORD</span><span class="o">)</span>
-                <span class="o">.</span><span class="na">withCompressionCodec</span><span class="o">(</span><span class="s">&quot;deflate&quot;</span><span class="o">);</span>
+                <span class="o">.</span><span class="na">withCompressionCodec</span><span class="o">(</span><span class="s">"deflate"</span><span class="o">);</span>
 </code></pre></div>
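The rotation policy above rotates files once 5 MB have been written. As a rough illustration of the idea behind a size-based rotation policy, here is a minimal, self-contained sketch (the class <code>SizeRotationSketch</code> is hypothetical and not part of storm-hdfs; the real <code>FileSizeRotationPolicy</code> differs in detail):

```java
// Hypothetical sketch of a size-based rotation policy: count the bytes
// written and report when the configured threshold has been crossed.
class SizeRotationSketch {
    private final long maxBytes;
    private long bytesWritten = 0;

    // count * unitBytes mirrors the (5.0f, Units.MB) style of argument above
    SizeRotationSketch(float count, long unitBytes) {
        this.maxBytes = (long) (count * unitBytes);
    }

    // Record that 'size' bytes were written; returns true when it is
    // time to rotate the current file.
    boolean mark(long size) {
        bytesWritten += size;
        return bytesWritten >= maxBytes;
    }

    // Start counting afresh after the file has been rotated.
    void reset() {
        bytesWritten = 0;
    }
}
```

The real policy is consulted by the bolt after every written tuple, and a rotation action (such as <code>MoveFileAction</code>) runs whenever it fires.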
 <p>The <code>SequenceFileBolt</code> requires that you provide an <code>org.apache.storm.hdfs.bolt.format.SequenceFormat</code> that maps tuples to
 key/value pairs:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">interface</span> <span class="nc">SequenceFormat</span> <span class="kd">extends</span> <span class="n">Serializable</span> <span class="o">{</span>
-    <span class="n">Class</span> <span class="nf">keyClass</span><span class="o">();</span>
-    <span class="n">Class</span> <span class="nf">valueClass</span><span class="o">();</span>
+    <span class="n">Class</span> <span class="n">keyClass</span><span class="o">();</span>
+    <span class="n">Class</span> <span class="n">valueClass</span><span class="o">();</span>
 
-    <span class="n">Writable</span> <span class="nf">key</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">tuple</span><span class="o">);</span>
-    <span class="n">Writable</span> <span class="nf">value</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">tuple</span><span class="o">);</span>
+    <span class="n">Writable</span> <span class="n">key</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">tuple</span><span class="o">);</span>
+    <span class="n">Writable</span> <span class="n">value</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">tuple</span><span class="o">);</span>
 <span class="o">}</span>
 </code></pre></div>
 <h2 id="trident-api">Trident API</h2>
 
 <p>storm-hdfs also includes a Trident <code>state</code> implementation for writing data to HDFS, with an API that closely mirrors
 that of the bolts.</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java">         <span class="n">Fields</span> <span class="n">hdfsFields</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;field1&quot;</span><span class="o">,</span> <span class="s">&quot;field2&quot;</span><span class="o">);</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java">         <span class="n">Fields</span> <span class="n">hdfsFields</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"field1"</span><span class="o">,</span> <span class="s">"field2"</span><span class="o">);</span>
 
-         <span class="n">FileNameFormat</span> <span class="n">fileNameFormat</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">DefaultFileNameFormat</span><span class="o">()</span>
-                 <span class="o">.</span><span class="na">withPath</span><span class="o">(</span><span class="s">&quot;/trident&quot;</span><span class="o">)</span>
-                 <span class="o">.</span><span class="na">withPrefix</span><span class="o">(</span><span class="s">&quot;trident&quot;</span><span class="o">)</span>
-                 <span class="o">.</span><span class="na">withExtension</span><span class="o">(</span><span class="s">&quot;.txt&quot;</span><span class="o">);</span>
+         <span class="n">FileNameFormat</span> <span class="n">fileNameFormat</span> <span class="o">=</span> <span class="k">new</span> <span class="n">DefaultFileNameFormat</span><span class="o">()</span>
+                 <span class="o">.</span><span class="na">withPath</span><span class="o">(</span><span class="s">"/trident"</span><span class="o">)</span>
+                 <span class="o">.</span><span class="na">withPrefix</span><span class="o">(</span><span class="s">"trident"</span><span class="o">)</span>
+                 <span class="o">.</span><span class="na">withExtension</span><span class="o">(</span><span class="s">".txt"</span><span class="o">);</span>
 
-         <span class="n">RecordFormat</span> <span class="n">recordFormat</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">DelimitedRecordFormat</span><span class="o">()</span>
+         <span class="n">RecordFormat</span> <span class="n">recordFormat</span> <span class="o">=</span> <span class="k">new</span> <span class="n">DelimitedRecordFormat</span><span class="o">()</span>
                  <span class="o">.</span><span class="na">withFields</span><span class="o">(</span><span class="n">hdfsFields</span><span class="o">);</span>
 
-         <span class="n">FileRotationPolicy</span> <span class="n">rotationPolicy</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">FileSizeRotationPolicy</span><span class="o">(</span><span class="mf">5.0f</span><span class="o">,</span> <span class="n">FileSizeRotationPolicy</span><span class="o">.</span><span class="na">Units</span><span class="o">.</span><span class="na">MB</span><span class="o">);</span>
+         <span class="n">FileRotationPolicy</span> <span class="n">rotationPolicy</span> <span class="o">=</span> <span class="k">new</span> <span class="n">FileSizeRotationPolicy</span><span class="o">(</span><span class="mf">5.0f</span><span class="o">,</span> <span class="n">FileSizeRotationPolicy</span><span class="o">.</span><span class="na">Units</span><span class="o">.</span><span class="na">MB</span><span class="o">);</span>
 
         <span class="n">HdfsState</span><span class="o">.</span><span class="na">Options</span> <span class="n">options</span> <span class="o">=</span> <span class="k">new</span> <span class="n">HdfsState</span><span class="o">.</span><span class="na">HdfsFileOptions</span><span class="o">()</span>
                 <span class="o">.</span><span class="na">withFileNameFormat</span><span class="o">(</span><span class="n">fileNameFormat</span><span class="o">)</span>
                 <span class="o">.</span><span class="na">withRecordFormat</span><span class="o">(</span><span class="n">recordFormat</span><span class="o">)</span>
                 <span class="o">.</span><span class="na">withRotationPolicy</span><span class="o">(</span><span class="n">rotationPolicy</span><span class="o">)</span>
-                <span class="o">.</span><span class="na">withFsUrl</span><span class="o">(</span><span class="s">&quot;hdfs://localhost:54310&quot;</span><span class="o">);</span>
+                <span class="o">.</span><span class="na">withFsUrl</span><span class="o">(</span><span class="s">"hdfs://localhost:54310"</span><span class="o">);</span>
 
-         <span class="n">StateFactory</span> <span class="n">factory</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">HdfsStateFactory</span><span class="o">().</span><span class="na">withOptions</span><span class="o">(</span><span class="n">options</span><span class="o">);</span>
+         <span class="n">StateFactory</span> <span class="n">factory</span> <span class="o">=</span> <span class="k">new</span> <span class="n">HdfsStateFactory</span><span class="o">().</span><span class="na">withOptions</span><span class="o">(</span><span class="n">options</span><span class="o">);</span>
 
          <span class="n">TridentState</span> <span class="n">state</span> <span class="o">=</span> <span class="n">stream</span>
-                 <span class="o">.</span><span class="na">partitionPersist</span><span class="o">(</span><span class="n">factory</span><span class="o">,</span> <span class="n">hdfsFields</span><span class="o">,</span> <span class="k">new</span> <span class="nf">HdfsUpdater</span><span class="o">(),</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">());</span>
+                 <span class="o">.</span><span class="na">partitionPersist</span><span class="o">(</span><span class="n">factory</span><span class="o">,</span> <span class="n">hdfsFields</span><span class="o">,</span> <span class="k">new</span> <span class="n">HdfsUpdater</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">());</span>
 </code></pre></div>
 <p>To use the sequence file <code>State</code> implementation, use the <code>HdfsState.SequenceFileOptions</code>:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java">        <span class="n">HdfsState</span><span class="o">.</span><span class="na">Options</span> <span class="n">seqOpts</span> <span class="o">=</span> <span class="k">new</span> <span class="n">HdfsState</span><span class="o">.</span><span class="na">SequenceFileOptions</span><span class="o">()</span>
                 <span class="o">.</span><span class="na">withFileNameFormat</span><span class="o">(</span><span class="n">fileNameFormat</span><span class="o">)</span>
-                <span class="o">.</span><span class="na">withSequenceFormat</span><span class="o">(</span><span class="k">new</span> <span class="nf">DefaultSequenceFormat</span><span class="o">(</span><span class="s">&quot;key&quot;</span><span class="o">,</span> <span class="s">&quot;data&quot;</span><span class="o">))</span>
+                <span class="o">.</span><span class="na">withSequenceFormat</span><span class="o">(</span><span class="k">new</span> <span class="n">DefaultSequenceFormat</span><span class="o">(</span><span class="s">"key"</span><span class="o">,</span> <span class="s">"data"</span><span class="o">))</span>
                 <span class="o">.</span><span class="na">withRotationPolicy</span><span class="o">(</span><span class="n">rotationPolicy</span><span class="o">)</span>
-                <span class="o">.</span><span class="na">withFsUrl</span><span class="o">(</span><span class="s">&quot;hdfs://localhost:54310&quot;</span><span class="o">)</span>
-                <span class="o">.</span><span class="na">addRotationAction</span><span class="o">(</span><span class="k">new</span> <span class="nf">MoveFileAction</span><span class="o">().</span><span class="na">toDestination</span><span class="o">(</span><span class="s">&quot;/dest2/&quot;</span><span class="o">));</span>
+                <span class="o">.</span><span class="na">withFsUrl</span><span class="o">(</span><span class="s">"hdfs://localhost:54310"</span><span class="o">)</span>
+                <span class="o">.</span><span class="na">addRotationAction</span><span class="o">(</span><span class="k">new</span> <span class="n">MoveFileAction</span><span class="o">().</span><span class="na">toDestination</span><span class="o">(</span><span class="s">"/dest2/"</span><span class="o">));</span>
 </code></pre></div>
 <h2 id="working-with-secure-hdfs">Working with Secure HDFS</h2>
 
diff --git a/_site/documentation/storm-hive.html b/_site/documentation/storm-hive.html
index 0a2dbac..8db4dff 100644
--- a/_site/documentation/storm-hive.html
+++ b/_site/documentation/storm-hive.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -97,18 +97,18 @@
 
 <p>With the help of the Hive Streaming API, HiveBolt and HiveState allow users to stream data from Storm directly into Hive.
   To use the Hive Streaming API, users need to create a bucketed table stored in the ORC format. For example:</p>
-<div class="highlight"><pre><code class="language-sql" data-lang="sql">  <span class="k">create</span> <span class="k">table</span> <span class="n">test_table</span> <span class="p">(</span> <span class="n">id</span> <span class="nb">INT</span><span class="p">,</span> <span class="n">name</span> <span class="n">STRING</span><span class="p">,</span> <span class="n">phone</span> <span class="n">STRING</span><span class="p">,</span> <span class="n">street</span> <span class="n">STRING</span><span class="p">)</span> <span class="n">partitioned</span> <span class="k">by</span> <span class="p">(</span><span class="n">city</span> <span class="n">STRING</span><span class="p">,</span> <span class="k">state</span> <span class="n">STRING</span><span class="p">)</span> <span class="n">stored</span> <span class="k">as</span> <span class="n">orc</span> <span class="n">tblproperties</span> <span class="p">(</span><span class="ss">&quot;orc.compress&quot;</span><span class="o">=</span><span class="ss">&quot;NONE&quot;</span><span class="p">);</span>
+<div class="highlight"><pre><code class="language-sql" data-lang="sql">  <span class="k">create</span> <span class="k">table</span> <span class="n">test_table</span> <span class="p">(</span> <span class="n">id</span> <span class="n">INT</span><span class="p">,</span> <span class="n">name</span> <span class="n">STRING</span><span class="p">,</span> <span class="n">phone</span> <span class="n">STRING</span><span class="p">,</span> <span class="n">street</span> <span class="n">STRING</span><span class="p">)</span> <span class="n">partitioned</span> <span class="k">by</span> <span class="p">(</span><span class="n">city</span> <span class="n">STRING</span><span class="p">,</span> <span class="k">state</span> <span class="n">STRING</span><span class="p">)</span> <span class="n">stored</span> <span class="k">as</span> <span class="n">orc</span> <span class="n">tblproperties</span> <span class="p">(</span><span class="nv">"orc.compress"</span><span class="o">=</span><span class="nv">"NONE"</span><span class="p">);</span>
 </code></pre></div>
-<h2 id="hivebolt-(org.apache.storm.hive.bolt.hivebolt)">HiveBolt (org.apache.storm.hive.bolt.HiveBolt)</h2>
+<h2 id="hivebolt-org-apache-storm-hive-bolt-hivebolt">HiveBolt (org.apache.storm.hive.bolt.HiveBolt)</h2>
 
 <p>HiveBolt streams tuples directly into Hive. Tuples are written using Hive transactions.
 The partitions to which HiveBolt streams can either be pre-created, or HiveBolt can
 optionally create them if they are missing. Fields from tuples are mapped to table columns.
 Users should make sure that tuple field names match the table column names.</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">DelimitedRecordHiveMapper</span> <span class="n">mapper</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">DelimitedRecordHiveMapper</span><span class="o">()</span>
-            <span class="o">.</span><span class="na">withColumnFields</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="n">colNames</span><span class="o">));</span>
-<span class="n">HiveOptions</span> <span class="n">hiveOptions</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">HiveOptions</span><span class="o">(</span><span class="n">metaStoreURI</span><span class="o">,</span><span class="n">dbName</span><span class="o">,</span><span class="n">tblName</span><span class="o">,</span><span class="n">mapper</span><span class="o">);</span>
-<span class="n">HiveBolt</span> <span class="n">hiveBolt</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">HiveBolt</span><span class="o">(</span><span class="n">hiveOptions</span><span class="o">);</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">DelimitedRecordHiveMapper</span> <span class="n">mapper</span> <span class="o">=</span> <span class="k">new</span> <span class="n">DelimitedRecordHiveMapper</span><span class="o">()</span>
+            <span class="o">.</span><span class="na">withColumnFields</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="n">colNames</span><span class="o">));</span>
+<span class="n">HiveOptions</span> <span class="n">hiveOptions</span> <span class="o">=</span> <span class="k">new</span> <span class="n">HiveOptions</span><span class="o">(</span><span class="n">metaStoreURI</span><span class="o">,</span><span class="n">dbName</span><span class="o">,</span><span class="n">tblName</span><span class="o">,</span><span class="n">mapper</span><span class="o">);</span>
+<span class="n">HiveBolt</span> <span class="n">hiveBolt</span> <span class="o">=</span> <span class="k">new</span> <span class="n">HiveBolt</span><span class="o">(</span><span class="n">hiveOptions</span><span class="o">);</span>
 </code></pre></div>
 <h3 id="recordhivemapper">RecordHiveMapper</h3>
 
@@ -119,13 +119,13 @@
 <li>DelimitedRecordHiveMapper (org.apache.storm.hive.bolt.mapper.DelimitedRecordHiveMapper)</li>
 <li>JsonRecordHiveMapper (org.apache.storm.hive.bolt.mapper.JsonRecordHiveMapper)</li>
 </ul>
-<div class="highlight"><pre><code class="language-java" data-lang="java">   <span class="n">DelimitedRecordHiveMapper</span> <span class="n">mapper</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">DelimitedRecordHiveMapper</span><span class="o">()</span>
-            <span class="o">.</span><span class="na">withColumnFields</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="n">colNames</span><span class="o">))</span>
-            <span class="o">.</span><span class="na">withPartitionFields</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="n">partNames</span><span class="o">));</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java">   <span class="n">DelimitedRecordHiveMapper</span> <span class="n">mapper</span> <span class="o">=</span> <span class="k">new</span> <span class="n">DelimitedRecordHiveMapper</span><span class="o">()</span>
+            <span class="o">.</span><span class="na">withColumnFields</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="n">colNames</span><span class="o">))</span>
+            <span class="o">.</span><span class="na">withPartitionFields</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="n">partNames</span><span class="o">));</span>
     <span class="n">or</span>
-   <span class="n">DelimitedRecordHiveMapper</span> <span class="n">mapper</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">DelimitedRecordHiveMapper</span><span class="o">()</span>
-            <span class="o">.</span><span class="na">withColumnFields</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="n">colNames</span><span class="o">))</span>
-            <span class="o">.</span><span class="na">withTimeAsPartitionField</span><span class="o">(</span><span class="s">&quot;YYYY/MM/DD&quot;</span><span class="o">);</span>
+   <span class="n">DelimitedRecordHiveMapper</span> <span class="n">mapper</span> <span class="o">=</span> <span class="k">new</span> <span class="n">DelimitedRecordHiveMapper</span><span class="o">()</span>
+            <span class="o">.</span><span class="na">withColumnFields</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="n">colNames</span><span class="o">))</span>
+            <span class="o">.</span><span class="na">withTimeAsPartitionField</span><span class="o">(</span><span class="s">"YYYY/MM/DD"</span><span class="o">);</span>
 </code></pre></div>
 <table><thead>
 <tr>
@@ -151,10 +151,10 @@
 </tr>
 </tbody></table>
 
-<h3 id="hiveoptions-(org.apache.storm.hive.common.hiveoptions)">HiveOptions (org.apache.storm.hive.common.HiveOptions)</h3>
+<h3 id="hiveoptions-org-apache-storm-hive-common-hiveoptions">HiveOptions (org.apache.storm.hive.common.HiveOptions)</h3>
 
 <p>HiveBolt takes HiveOptions as a constructor argument.</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java">  <span class="n">HiveOptions</span> <span class="n">hiveOptions</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">HiveOptions</span><span class="o">(</span><span class="n">metaStoreURI</span><span class="o">,</span><span class="n">dbName</span><span class="o">,</span><span class="n">tblName</span><span class="o">,</span><span class="n">mapper</span><span class="o">)</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java">  <span class="n">HiveOptions</span> <span class="n">hiveOptions</span> <span class="o">=</span> <span class="k">new</span> <span class="n">HiveOptions</span><span class="o">(</span><span class="n">metaStoreURI</span><span class="o">,</span><span class="n">dbName</span><span class="o">,</span><span class="n">tblName</span><span class="o">,</span><span class="n">mapper</span><span class="o">)</span>
                                 <span class="o">.</span><span class="na">withTxnsPerBatch</span><span class="o">(</span><span class="mi">10</span><span class="o">)</span>
                                 <span class="o">.</span><span class="na">withBatchSize</span><span class="o">(</span><span class="mi">1000</span><span class="o">)</span>
                                 <span class="o">.</span><span class="na">withIdleTimeout</span><span class="o">(</span><span class="mi">10</span><span class="o">)</span>
@@ -235,20 +235,20 @@
 </tr>
 </tbody></table>
 
-<h2 id="hivestate-(org.apache.storm.hive.trident.hivetrident)">HiveState (org.apache.storm.hive.trident.HiveTrident)</h2>
+<h2 id="hivestate-org-apache-storm-hive-trident-hivetrident">HiveState (org.apache.storm.hive.trident.HiveTrident)</h2>
 
 <p>Hive Trident state follows a similar pattern to HiveBolt; it also takes HiveOptions as a constructor argument.</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java">   <span class="n">DelimitedRecordHiveMapper</span> <span class="n">mapper</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">DelimitedRecordHiveMapper</span><span class="o">()</span>
-            <span class="o">.</span><span class="na">withColumnFields</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="n">colNames</span><span class="o">))</span>
-            <span class="o">.</span><span class="na">withTimeAsPartitionField</span><span class="o">(</span><span class="s">&quot;YYYY/MM/DD&quot;</span><span class="o">);</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java">   <span class="n">DelimitedRecordHiveMapper</span> <span class="n">mapper</span> <span class="o">=</span> <span class="k">new</span> <span class="n">DelimitedRecordHiveMapper</span><span class="o">()</span>
+            <span class="o">.</span><span class="na">withColumnFields</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="n">colNames</span><span class="o">))</span>
+            <span class="o">.</span><span class="na">withTimeAsPartitionField</span><span class="o">(</span><span class="s">"YYYY/MM/DD"</span><span class="o">);</span>
 
-   <span class="n">HiveOptions</span> <span class="n">hiveOptions</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">HiveOptions</span><span class="o">(</span><span class="n">metaStoreURI</span><span class="o">,</span><span class="n">dbName</span><span class="o">,</span><span class="n">tblName</span><span class="o">,</span><span class="n">mapper</span><span class="o">)</span>
+   <span class="n">HiveOptions</span> <span class="n">hiveOptions</span> <span class="o">=</span> <span class="k">new</span> <span class="n">HiveOptions</span><span class="o">(</span><span class="n">metaStoreURI</span><span class="o">,</span><span class="n">dbName</span><span class="o">,</span><span class="n">tblName</span><span class="o">,</span><span class="n">mapper</span><span class="o">)</span>
                                 <span class="o">.</span><span class="na">withTxnsPerBatch</span><span class="o">(</span><span class="mi">10</span><span class="o">)</span>
                                 <span class="o">.</span><span class="na">withBatchSize</span><span class="o">(</span><span class="mi">1000</span><span class="o">)</span>
                                 <span class="o">.</span><span class="na">withIdleTimeout</span><span class="o">(</span><span class="mi">10</span><span class="o">)</span>
 
-   <span class="n">StateFactory</span> <span class="n">factory</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">HiveStateFactory</span><span class="o">().</span><span class="na">withOptions</span><span class="o">(</span><span class="n">hiveOptions</span><span class="o">);</span>
-   <span class="n">TridentState</span> <span class="n">state</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">partitionPersist</span><span class="o">(</span><span class="n">factory</span><span class="o">,</span> <span class="n">hiveFields</span><span class="o">,</span> <span class="k">new</span> <span class="nf">HiveUpdater</span><span class="o">(),</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">());</span>
+   <span class="n">StateFactory</span> <span class="n">factory</span> <span class="o">=</span> <span class="k">new</span> <span class="n">HiveStateFactory</span><span class="o">().</span><span class="na">withOptions</span><span class="o">(</span><span class="n">hiveOptions</span><span class="o">);</span>
+   <span class="n">TridentState</span> <span class="n">state</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">partitionPersist</span><span class="o">(</span><span class="n">factory</span><span class="o">,</span> <span class="n">hiveFields</span><span class="o">,</span> <span class="k">new</span> <span class="n">HiveUpdater</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">());</span>
 </code></pre></div>
 
 
diff --git a/_site/documentation/storm-jdbc.html b/_site/documentation/storm-jdbc.html
index 8342cfb..f95c1ee 100644
--- a/_site/documentation/storm-jdbc.html
+++ b/_site/documentation/storm-jdbc.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -96,7 +96,7 @@
 
 <p><strong>Note</strong>: Throughout the examples below, we make use of com.google.common.collect.Lists and com.google.common.collect.Maps.</p>
 
-<h2 id="inserting-into-a-database.">Inserting into a database.</h2>
+<h2 id="inserting-into-a-database">Inserting into a database.</h2>
 
 <p>The bolt and trident state included in this package for inserting data into a database table are tied to a single table.</p>
 
@@ -104,21 +104,21 @@
 
 <p>An interface that should be implemented by different connection pooling mechanisms: <code>org.apache.storm.jdbc.common.ConnectionProvider</code></p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">interface</span> <span class="nc">ConnectionProvider</span> <span class="kd">extends</span> <span class="n">Serializable</span> <span class="o">{</span>
-    <span class="cm">/**</span>
-<span class="cm">     * method must be idempotent.</span>
-<span class="cm">     */</span>
-    <span class="kt">void</span> <span class="nf">prepare</span><span class="o">();</span>
+    <span class="cm">/**
+     * method must be idempotent.
+     */</span>
+    <span class="kt">void</span> <span class="n">prepare</span><span class="o">();</span>
 
-    <span class="cm">/**</span>
-<span class="cm">     *</span>
-<span class="cm">     * @return a DB connection over which the queries can be executed.</span>
-<span class="cm">     */</span>
-    <span class="n">Connection</span> <span class="nf">getConnection</span><span class="o">();</span>
+    <span class="cm">/**
+     *
+     * @return a DB connection over which the queries can be executed.
+     */</span>
+    <span class="n">Connection</span> <span class="n">getConnection</span><span class="o">();</span>
 
-    <span class="cm">/**</span>
-<span class="cm">     * called once when the system is shutting down, should be idempotent.</span>
-<span class="cm">     */</span>
-    <span class="kt">void</span> <span class="nf">cleanup</span><span class="o">();</span>
+    <span class="cm">/**
+     * called once when the system is shutting down, should be idempotent.
+     */</span>
+    <span class="kt">void</span> <span class="n">cleanup</span><span class="o">();</span>
 <span class="o">}</span>
 </code></pre></div>
 <p>Out of the box we support <code>org.apache.storm.jdbc.common.HikariCPConnectionProvider</code> which is an implementation that uses HikariCP.</p>
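The same contract can also be satisfied without a pooling library. The following is a minimal, hypothetical sketch (not part of storm-jdbc; the interface is re-declared here so the example is self-contained) that backs the `ConnectionProvider` contract with plain `java.sql.DriverManager`, using a guard flag to keep `prepare()` idempotent as the interface requires:

```java
import java.io.Serializable;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// The contract from the docs, re-declared so this sketch compiles on its own.
interface ConnectionProvider extends Serializable {
    void prepare();            // must be idempotent
    Connection getConnection();
    void cleanup();            // must be idempotent
}

// Hypothetical pool-less implementation: opens a fresh connection per call.
class DriverManagerConnectionProvider implements ConnectionProvider {
    private final String url;
    private final String user;
    private final String password;
    private transient boolean prepared;   // guards prepare() idempotency

    DriverManagerConnectionProvider(String url, String user, String password) {
        this.url = url;
        this.user = user;
        this.password = password;
    }

    public void prepare() {
        if (prepared) {
            return;                       // second call is a no-op
        }
        prepared = true;
    }

    public Connection getConnection() {
        try {
            return DriverManager.getConnection(url, user, password);
        } catch (SQLException e) {
            throw new RuntimeException("could not open connection to " + url, e);
        }
    }

    public void cleanup() {
        prepared = false;                 // safe to call repeatedly
    }
}
```

In production the pooled `HikariCPConnectionProvider` above is the better choice; opening a connection per call is only acceptable for tests or very low-throughput topologies.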
@@ -127,7 +127,7 @@
 
 <p>The main API for inserting data in a table using JDBC is the <code>org.apache.storm.jdbc.mapper.JdbcMapper</code> interface:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">interface</span> <span class="nc">JdbcMapper</span>  <span class="kd">extends</span> <span class="n">Serializable</span> <span class="o">{</span>
-    <span class="n">List</span><span class="o">&lt;</span><span class="n">Column</span><span class="o">&gt;</span> <span class="nf">getColumns</span><span class="o">(</span><span class="n">ITuple</span> <span class="n">tuple</span><span class="o">);</span>
+    <span class="n">List</span><span class="o">&lt;</span><span class="n">Column</span><span class="o">&gt;</span> <span class="n">getColumns</span><span class="o">(</span><span class="n">ITuple</span> <span class="n">tuple</span><span class="o">);</span>
 <span class="o">}</span>
 </code></pre></div>
 <p>The <code>getColumns()</code> method defines how a storm tuple maps to a list of columns representing a row in a database. 
@@ -147,21 +147,21 @@
 The default is set to the value of topology.message.timeout.secs, and a value of -1 indicates that no query timeout should be set.
 You should set the query timeout value to be &lt;= topology.message.timeout.secs.</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">Map</span> <span class="n">hikariConfigMap</span> <span class="o">=</span> <span class="n">Maps</span><span class="o">.</span><span class="na">newHashMap</span><span class="o">();</span>
-<span class="n">hikariConfigMap</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">&quot;dataSourceClassName&quot;</span><span class="o">,</span><span class="s">&quot;com.mysql.jdbc.jdbc2.optional.MysqlDataSource&quot;</span><span class="o">);</span>
-<span class="n">hikariConfigMap</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">&quot;dataSource.url&quot;</span><span class="o">,</span> <span class="s">&quot;jdbc:mysql://localhost/test&quot;</span><span class="o">);</span>
-<span class="n">hikariConfigMap</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">&quot;dataSource.user&quot;</span><span class="o">,</span><span class="s">&quot;root&quot;</span><span class="o">);</span>
-<span class="n">hikariConfigMap</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">&quot;dataSource.password&quot;</span><span class="o">,</span><span class="s">&quot;password&quot;</span><span class="o">);</span>
-<span class="n">ConnectionProvider</span> <span class="n">connectionProvider</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">HikariCPConnectionProvider</span><span class="o">(</span><span class="n">hikariConfigMap</span><span class="o">);</span>
+<span class="n">hikariConfigMap</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">"dataSourceClassName"</span><span class="o">,</span><span class="s">"com.mysql.jdbc.jdbc2.optional.MysqlDataSource"</span><span class="o">);</span>
+<span class="n">hikariConfigMap</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">"dataSource.url"</span><span class="o">,</span> <span class="s">"jdbc:mysql://localhost/test"</span><span class="o">);</span>
+<span class="n">hikariConfigMap</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">"dataSource.user"</span><span class="o">,</span><span class="s">"root"</span><span class="o">);</span>
+<span class="n">hikariConfigMap</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">"dataSource.password"</span><span class="o">,</span><span class="s">"password"</span><span class="o">);</span>
+<span class="n">ConnectionProvider</span> <span class="n">connectionProvider</span> <span class="o">=</span> <span class="k">new</span> <span class="n">HikariCPConnectionProvider</span><span class="o">(</span><span class="n">hikariConfigMap</span><span class="o">);</span>
 
-<span class="n">String</span> <span class="n">tableName</span> <span class="o">=</span> <span class="s">&quot;user_details&quot;</span><span class="o">;</span>
-<span class="n">JdbcMapper</span> <span class="n">simpleJdbcMapper</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">SimpleJdbcMapper</span><span class="o">(</span><span class="n">tableName</span><span class="o">,</span> <span class="n">connectionProvider</span><span class="o">);</span>
+<span class="n">String</span> <span class="n">tableName</span> <span class="o">=</span> <span class="s">"user_details"</span><span class="o">;</span>
+<span class="n">JdbcMapper</span> <span class="n">simpleJdbcMapper</span> <span class="o">=</span> <span class="k">new</span> <span class="n">SimpleJdbcMapper</span><span class="o">(</span><span class="n">tableName</span><span class="o">,</span> <span class="n">connectionProvider</span><span class="o">);</span>
 
-<span class="n">JdbcInsertBolt</span> <span class="n">userPersistanceBolt</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">JdbcInsertBolt</span><span class="o">(</span><span class="n">connectionProvider</span><span class="o">,</span> <span class="n">simpleJdbcMapper</span><span class="o">)</span>
-                                    <span class="o">.</span><span class="na">withTableName</span><span class="o">(</span><span class="s">&quot;user&quot;</span><span class="o">)</span>
+<span class="n">JdbcInsertBolt</span> <span class="n">userPersistanceBolt</span> <span class="o">=</span> <span class="k">new</span> <span class="n">JdbcInsertBolt</span><span class="o">(</span><span class="n">connectionProvider</span><span class="o">,</span> <span class="n">simpleJdbcMapper</span><span class="o">)</span>
+                                    <span class="o">.</span><span class="na">withTableName</span><span class="o">(</span><span class="s">"user"</span><span class="o">)</span>
                                     <span class="o">.</span><span class="na">withQueryTimeoutSecs</span><span class="o">(</span><span class="mi">30</span><span class="o">);</span>
                                     <span class="n">Or</span>
-<span class="n">JdbcInsertBolt</span> <span class="n">userPersistanceBolt</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">JdbcInsertBolt</span><span class="o">(</span><span class="n">connectionProvider</span><span class="o">,</span> <span class="n">simpleJdbcMapper</span><span class="o">)</span>
-                                    <span class="o">.</span><span class="na">withInsertQuery</span><span class="o">(</span><span class="s">&quot;insert into user values (?,?)&quot;</span><span class="o">)</span>
+<span class="n">JdbcInsertBolt</span> <span class="n">userPersistanceBolt</span> <span class="o">=</span> <span class="k">new</span> <span class="n">JdbcInsertBolt</span><span class="o">(</span><span class="n">connectionProvider</span><span class="o">,</span> <span class="n">simpleJdbcMapper</span><span class="o">)</span>
+                                    <span class="o">.</span><span class="na">withInsertQuery</span><span class="o">(</span><span class="s">"insert into user values (?,?)"</span><span class="o">)</span>
                                     <span class="o">.</span><span class="na">withQueryTimeoutSecs</span><span class="o">(</span><span class="mi">30</span><span class="o">);</span>                                    
 </code></pre></div>
 <h3 id="simplejdbcmapper">SimpleJdbcMapper</h3>
@@ -181,13 +181,13 @@
 Please see <a href="https://github.com/brettwooldridge/HikariCP#configuration-knobs-baby">https://github.com/brettwooldridge/HikariCP#configuration-knobs-baby</a> to learn more about hikari configuration properties.</li>
 </ol>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">Map</span> <span class="n">hikariConfigMap</span> <span class="o">=</span> <span class="n">Maps</span><span class="o">.</span><span class="na">newHashMap</span><span class="o">();</span>
-<span class="n">hikariConfigMap</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">&quot;dataSourceClassName&quot;</span><span class="o">,</span><span class="s">&quot;com.mysql.jdbc.jdbc2.optional.MysqlDataSource&quot;</span><span class="o">);</span>
-<span class="n">hikariConfigMap</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">&quot;dataSource.url&quot;</span><span class="o">,</span> <span class="s">&quot;jdbc:mysql://localhost/test&quot;</span><span class="o">);</span>
-<span class="n">hikariConfigMap</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">&quot;dataSource.user&quot;</span><span class="o">,</span><span class="s">&quot;root&quot;</span><span class="o">);</span>
-<span class="n">hikariConfigMap</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">&quot;dataSource.password&quot;</span><span class="o">,</span><span class="s">&quot;password&quot;</span><span class="o">);</span>
-<span class="n">ConnectionProvider</span> <span class="n">connectionProvider</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">HikariCPConnectionProvider</span><span class="o">(</span><span class="n">hikariConfigMap</span><span class="o">);</span>
-<span class="n">String</span> <span class="n">tableName</span> <span class="o">=</span> <span class="s">&quot;user_details&quot;</span><span class="o">;</span>
-<span class="n">JdbcMapper</span> <span class="n">simpleJdbcMapper</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">SimpleJdbcMapper</span><span class="o">(</span><span class="n">tableName</span><span class="o">,</span> <span class="n">connectionProvider</span><span class="o">);</span>
+<span class="n">hikariConfigMap</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">"dataSourceClassName"</span><span class="o">,</span><span class="s">"com.mysql.jdbc.jdbc2.optional.MysqlDataSource"</span><span class="o">);</span>
+<span class="n">hikariConfigMap</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">"dataSource.url"</span><span class="o">,</span> <span class="s">"jdbc:mysql://localhost/test"</span><span class="o">);</span>
+<span class="n">hikariConfigMap</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">"dataSource.user"</span><span class="o">,</span><span class="s">"root"</span><span class="o">);</span>
+<span class="n">hikariConfigMap</span><span class="o">.</span><span class="na">put</span><span class="o">(</span><span class="s">"dataSource.password"</span><span class="o">,</span><span class="s">"password"</span><span class="o">);</span>
+<span class="n">ConnectionProvider</span> <span class="n">connectionProvider</span> <span class="o">=</span> <span class="k">new</span> <span class="n">HikariCPConnectionProvider</span><span class="o">(</span><span class="n">hikariConfigMap</span><span class="o">);</span>
+<span class="n">String</span> <span class="n">tableName</span> <span class="o">=</span> <span class="s">"user_details"</span><span class="o">;</span>
+<span class="n">JdbcMapper</span> <span class="n">simpleJdbcMapper</span> <span class="o">=</span> <span class="k">new</span> <span class="n">SimpleJdbcMapper</span><span class="o">(</span><span class="n">tableName</span><span class="o">,</span> <span class="n">connectionProvider</span><span class="o">);</span>
 </code></pre></div>
 <p>The mapper initialized in the example above assumes a storm tuple has a value for all the columns of the table you intend to insert data into, and its <code>getColumn</code>
 method will return the columns in the order in which the Jdbc connection instance&#39;s <code>connection.getMetaData().getColumns();</code> method returns them.</p>
@@ -206,10 +206,10 @@
 In this table the create_time column has a default value. To ensure only the columns with no default values are inserted 
 you can initialize the <code>jdbcMapper</code> as below:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">List</span><span class="o">&lt;</span><span class="n">Column</span><span class="o">&gt;</span> <span class="n">columnSchema</span> <span class="o">=</span> <span class="n">Lists</span><span class="o">.</span><span class="na">newArrayList</span><span class="o">(</span>
-    <span class="k">new</span> <span class="nf">Column</span><span class="o">(</span><span class="s">&quot;user_id&quot;</span><span class="o">,</span> <span class="n">java</span><span class="o">.</span><span class="na">sql</span><span class="o">.</span><span class="na">Types</span><span class="o">.</span><span class="na">INTEGER</span><span class="o">),</span>
-    <span class="k">new</span> <span class="nf">Column</span><span class="o">(</span><span class="s">&quot;user_name&quot;</span><span class="o">,</span> <span class="n">java</span><span class="o">.</span><span class="na">sql</span><span class="o">.</span><span class="na">Types</span><span class="o">.</span><span class="na">VARCHAR</span><span class="o">),</span>
-    <span class="k">new</span> <span class="nf">Column</span><span class="o">(</span><span class="s">&quot;dept_name&quot;</span><span class="o">,</span> <span class="n">java</span><span class="o">.</span><span class="na">sql</span><span class="o">.</span><span class="na">Types</span><span class="o">.</span><span class="na">VARCHAR</span><span class="o">));</span>
-<span class="n">JdbcMapper</span> <span class="n">simpleJdbcMapper</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">SimpleJdbcMapper</span><span class="o">(</span><span class="n">columnSchema</span><span class="o">);</span>
+    <span class="k">new</span> <span class="n">Column</span><span class="o">(</span><span class="s">"user_id"</span><span class="o">,</span> <span class="n">java</span><span class="o">.</span><span class="na">sql</span><span class="o">.</span><span class="na">Types</span><span class="o">.</span><span class="na">INTEGER</span><span class="o">),</span>
+    <span class="k">new</span> <span class="n">Column</span><span class="o">(</span><span class="s">"user_name"</span><span class="o">,</span> <span class="n">java</span><span class="o">.</span><span class="na">sql</span><span class="o">.</span><span class="na">Types</span><span class="o">.</span><span class="na">VARCHAR</span><span class="o">),</span>
+    <span class="k">new</span> <span class="n">Column</span><span class="o">(</span><span class="s">"dept_name"</span><span class="o">,</span> <span class="n">java</span><span class="o">.</span><span class="na">sql</span><span class="o">.</span><span class="na">Types</span><span class="o">.</span><span class="na">VARCHAR</span><span class="o">));</span>
+<span class="n">JdbcMapper</span> <span class="n">simpleJdbcMapper</span> <span class="o">=</span> <span class="k">new</span> <span class="n">SimpleJdbcMapper</span><span class="o">(</span><span class="n">columnSchema</span><span class="o">);</span>
 </code></pre></div>
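Internally, mapping with an explicit schema amounts to pulling each named field out of the tuple in schema order; any column left out of the schema is simply never written. The sketch below (the `Column` class and the `Map`-based tuple are simplified stand-ins, not the actual storm-jdbc types) illustrates why `create_time` is skipped when it is omitted from the schema:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Simplified stand-in for org.apache.storm.jdbc.common.Column.
class Column {
    final String name;
    final Object value;
    final int sqlType;

    Column(String name, Object value, int sqlType) {
        this.name = name;
        this.value = value;
        this.sqlType = sqlType;
    }
}

class SchemaMapper {
    // Mirrors the idea behind SimpleJdbcMapper.getColumns(): emit one Column
    // per schema entry, looking values up in the tuple by field name.
    static List<Column> getColumns(Map<String, Object> tuple, List<Column> schema) {
        List<Column> row = new ArrayList<>();
        for (Column c : schema) {
            row.add(new Column(c.name, tuple.get(c.name), c.sqlType));
        }
        return row;   // columns absent from the schema are never emitted
    }
}
```

With a schema of `user_id`, `user_name`, `dept_name`, a tuple that also carries a `create_time` value produces a row of only three columns, letting the database apply its default for `create_time`.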
 <h3 id="jdbctridentstate">JdbcTridentState</h3>
 
@@ -219,9 +219,9 @@
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">JdbcState</span><span class="o">.</span><span class="na">Options</span> <span class="n">options</span> <span class="o">=</span> <span class="k">new</span> <span class="n">JdbcState</span><span class="o">.</span><span class="na">Options</span><span class="o">()</span>
         <span class="o">.</span><span class="na">withConnectionProvider</span><span class="o">(</span><span class="n">connectionProvider</span><span class="o">)</span>
         <span class="o">.</span><span class="na">withMapper</span><span class="o">(</span><span class="n">jdbcMapper</span><span class="o">)</span>
-        <span class="o">.</span><span class="na">withTableName</span><span class="o">(</span><span class="s">&quot;user_details&quot;</span><span class="o">)</span>
+        <span class="o">.</span><span class="na">withTableName</span><span class="o">(</span><span class="s">"user_details"</span><span class="o">)</span>
         <span class="o">.</span><span class="na">withQueryTimeoutSecs</span><span class="o">(</span><span class="mi">30</span><span class="o">);</span>
-<span class="n">JdbcStateFactory</span> <span class="n">jdbcStateFactory</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">JdbcStateFactory</span><span class="o">(</span><span class="n">options</span><span class="o">);</span>
+<span class="n">JdbcStateFactory</span> <span class="n">jdbcStateFactory</span> <span class="o">=</span> <span class="k">new</span> <span class="n">JdbcStateFactory</span><span class="o">(</span><span class="n">options</span><span class="o">);</span>
 </code></pre></div>
 <p>Similar to <code>JdbcInsertBolt</code>, you can specify a custom insert query using <code>withInsertQuery</code> instead of specifying a table name.</p>
 
@@ -229,9 +229,9 @@
 
 <p>We support <code>select</code> queries from databases to allow enrichment of storm tuples in a topology. The main API for 
 executing select queries against a database using JDBC is the <code>org.apache.storm.jdbc.mapper.JdbcLookupMapper</code> interface:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java">    <span class="kt">void</span> <span class="nf">declareOutputFields</span><span class="o">(</span><span class="n">OutputFieldsDeclarer</span> <span class="n">declarer</span><span class="o">);</span>
-    <span class="n">List</span><span class="o">&lt;</span><span class="n">Column</span><span class="o">&gt;</span> <span class="nf">getColumns</span><span class="o">(</span><span class="n">ITuple</span> <span class="n">tuple</span><span class="o">);</span>
-    <span class="n">List</span><span class="o">&lt;</span><span class="n">Values</span><span class="o">&gt;</span> <span class="nf">toTuple</span><span class="o">(</span><span class="n">ITuple</span> <span class="n">input</span><span class="o">,</span> <span class="n">List</span><span class="o">&lt;</span><span class="n">Column</span><span class="o">&gt;</span> <span class="n">columns</span><span class="o">);</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java">    <span class="kt">void</span> <span class="nf">declareOutputFields</span><span class="p">(</span><span class="n">OutputFieldsDeclarer</span> <span class="n">declarer</span><span class="o">);</span>
+    <span class="n">List</span><span class="o">&lt;</span><span class="n">Column</span><span class="o">&gt;</span> <span class="n">getColumns</span><span class="o">(</span><span class="n">ITuple</span> <span class="n">tuple</span><span class="o">);</span>
+    <span class="n">List</span><span class="o">&lt;</span><span class="n">Values</span><span class="o">&gt;</span> <span class="n">toTuple</span><span class="o">(</span><span class="n">ITuple</span> <span class="n">input</span><span class="o">,</span> <span class="n">List</span><span class="o">&lt;</span><span class="n">Column</span><span class="o">&gt;</span> <span class="n">columns</span><span class="o">);</span>
 </code></pre></div>
 <p>The <code>declareOutputFields</code> method is used to indicate what fields will be emitted as part of the output tuple produced when processing a storm 
 tuple. </p>
@@ -265,18 +265,18 @@
 will return the value of <code>tuple.getValueByField(&quot;user_id&quot;)</code> which will be used as the value in <code>?</code> of select query. 
 For each output row from the DB, <code>SimpleJdbcLookupMapper.toTuple()</code> will use the <code>user_id, create_date</code> from the input tuple as 
 is, add only <code>user_name</code> from the resulting row, and return these 3 fields as a single output tuple.</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">Fields</span> <span class="n">outputFields</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;user_id&quot;</span><span class="o">,</span> <span class="s">&quot;user_name&quot;</span><span class="o">,</span> <span class="s">&quot;create_date&quot;</span><span class="o">);</span>
-<span class="n">List</span><span class="o">&lt;</span><span class="n">Column</span><span class="o">&gt;</span> <span class="n">queryParamColumns</span> <span class="o">=</span> <span class="n">Lists</span><span class="o">.</span><span class="na">newArrayList</span><span class="o">(</span><span class="k">new</span> <span class="nf">Column</span><span class="o">(</span><span class="s">&quot;user_id&quot;</span><span class="o">,</span> <span class="n">Types</span><span class="o">.</span><span class="na">INTEGER</span><span class="o">));</span>
-<span class="k">this</span><span class="o">.</span><span class="na">jdbcLookupMapper</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">SimpleJdbcLookupMapper</span><span class="o">(</span><span class="n">outputFields</span><span class="o">,</span> <span class="n">queryParamColumns</span><span class="o">);</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">Fields</span> <span class="n">outputFields</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"user_id"</span><span class="o">,</span> <span class="s">"user_name"</span><span class="o">,</span> <span class="s">"create_date"</span><span class="o">);</span>
+<span class="n">List</span><span class="o">&lt;</span><span class="n">Column</span><span class="o">&gt;</span> <span class="n">queryParamColumns</span> <span class="o">=</span> <span class="n">Lists</span><span class="o">.</span><span class="na">newArrayList</span><span class="o">(</span><span class="k">new</span> <span class="n">Column</span><span class="o">(</span><span class="s">"user_id"</span><span class="o">,</span> <span class="n">Types</span><span class="o">.</span><span class="na">INTEGER</span><span class="o">));</span>
+<span class="k">this</span><span class="o">.</span><span class="na">jdbcLookupMapper</span> <span class="o">=</span> <span class="k">new</span> <span class="n">SimpleJdbcLookupMapper</span><span class="o">(</span><span class="n">outputFields</span><span class="o">,</span> <span class="n">queryParamColumns</span><span class="o">);</span>
 </code></pre></div>
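The `toTuple()` behaviour just described (copy the query fields from the input tuple as-is, take only the looked-up field from the database row, emit one output tuple per matching row) can be sketched with simplified stand-in types; the `Map`-based tuple and rows below are illustrative, not the storm-jdbc API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

class LookupSketch {
    // outputFields = [user_id, user_name, create_date]; user_name comes from
    // the DB row, the other two are copied from the input tuple unchanged.
    static List<List<Object>> toTuple(Map<String, Object> inputTuple,
                                      List<Map<String, Object>> dbRows,
                                      List<String> outputFields) {
        List<List<Object>> out = new ArrayList<>();
        for (Map<String, Object> row : dbRows) {
            List<Object> values = new ArrayList<>();
            for (String field : outputFields) {
                // prefer the looked-up column, fall back to the input tuple
                values.add(row.containsKey(field) ? row.get(field)
                                                  : inputTuple.get(field));
            }
            out.add(values);   // one output tuple per matching DB row
        }
        return out;
    }
}
```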
 <h3 id="jdbclookupbolt">JdbcLookupBolt</h3>
 
 <p>To use the <code>JdbcLookupBolt</code>, construct an instance of it using a <code>ConnectionProvider</code> instance, <code>JdbcLookupMapper</code> instance and the select query to execute.
 You can optionally specify a query timeout seconds param that specifies the maximum number of seconds the select query can take. 
 The default is set to the value of topology.message.timeout.secs. You should set this value to be &lt;= topology.message.timeout.secs.</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">String</span> <span class="n">selectSql</span> <span class="o">=</span> <span class="s">&quot;select user_name from user_details where user_id = ?&quot;</span><span class="o">;</span>
-<span class="n">SimpleJdbcLookupMapper</span> <span class="n">lookupMapper</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">SimpleJdbcLookupMapper</span><span class="o">(</span><span class="n">outputFields</span><span class="o">,</span> <span class="n">queryParamColumns</span><span class="o">)</span>
-<span class="n">JdbcLookupBolt</span> <span class="n">userNameLookupBolt</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">JdbcLookupBolt</span><span class="o">(</span><span class="n">connectionProvider</span><span class="o">,</span> <span class="n">selectSql</span><span class="o">,</span> <span class="n">lookupMapper</span><span class="o">)</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">String</span> <span class="n">selectSql</span> <span class="o">=</span> <span class="s">"select user_name from user_details where user_id = ?"</span><span class="o">;</span>
+<span class="n">SimpleJdbcLookupMapper</span> <span class="n">lookupMapper</span> <span class="o">=</span> <span class="k">new</span> <span class="n">SimpleJdbcLookupMapper</span><span class="o">(</span><span class="n">outputFields</span><span class="o">,</span> <span class="n">queryParamColumns</span><span class="o">);</span>
+<span class="n">JdbcLookupBolt</span> <span class="n">userNameLookupBolt</span> <span class="o">=</span> <span class="k">new</span> <span class="n">JdbcLookupBolt</span><span class="o">(</span><span class="n">connectionProvider</span><span class="o">,</span> <span class="n">selectSql</span><span class="o">,</span> <span class="n">lookupMapper</span><span class="o">)</span>
         <span class="o">.</span><span class="na">withQueryTimeoutSecs</span><span class="o">(</span><span class="mi">30</span><span class="o">);</span>
 </code></pre></div>
 <h3 id="jdbctridentstate-for-lookup">JdbcTridentState for lookup</h3>
@@ -284,11 +284,11 @@
 <p>We also support a trident query state that can be used with trident topologies. </p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">JdbcState</span><span class="o">.</span><span class="na">Options</span> <span class="n">options</span> <span class="o">=</span> <span class="k">new</span> <span class="n">JdbcState</span><span class="o">.</span><span class="na">Options</span><span class="o">()</span>
         <span class="o">.</span><span class="na">withConnectionProvider</span><span class="o">(</span><span class="n">connectionProvider</span><span class="o">)</span>
-        <span class="o">.</span><span class="na">withJdbcLookupMapper</span><span class="o">(</span><span class="k">new</span> <span class="nf">SimpleJdbcLookupMapper</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;user_name&quot;</span><span class="o">),</span> <span class="n">Lists</span><span class="o">.</span><span class="na">newArrayList</span><span class="o">(</span><span class="k">new</span> <span class="nf">Column</span><span class="o">(</span><span class="s">&quot;user_id&quot;</span><span class="o">,</span> <span class="n">Types</span><span class="o">.</span><span class="na">INTEGER</span><span class="o">))))</span>
-        <span class="o">.</span><span class="na">withSelectQuery</span><span class="o">(</span><span class="s">&quot;select user_name from user_details where user_id = ?&quot;</span><span class="o">);</span>
+        <span class="o">.</span><span class="na">withJdbcLookupMapper</span><span class="o">(</span><span class="k">new</span> <span class="n">SimpleJdbcLookupMapper</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"user_name"</span><span class="o">),</span> <span class="n">Lists</span><span class="o">.</span><span class="na">newArrayList</span><span class="o">(</span><span class="k">new</span> <span class="n">Column</span><span class="o">(</span><span class="s">"user_id"</span><span class="o">,</span> <span class="n">Types</span><span class="o">.</span><span class="na">INTEGER</span><span class="o">))))</span>
+        <span class="o">.</span><span class="na">withSelectQuery</span><span class="o">(</span><span class="s">"select user_name from user_details where user_id = ?"</span><span class="o">);</span>
         <span class="o">.</span><span class="na">withQueryTimeoutSecs</span><span class="o">(</span><span class="mi">30</span><span class="o">);</span>
 </code></pre></div>
-<h2 id="example:">Example:</h2>
+<h2 id="example">Example:</h2>
 
 <p>A runnable example can be found in the <code>src/test/java/topology</code> directory.</p>
 
@@ -318,7 +318,7 @@
 </ul>
 
 <p>To make it work with MySQL, add the following to your pom.xml:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">&lt;dependency&gt;
+<div class="highlight"><pre><code class="language-" data-lang="">&lt;dependency&gt;
     &lt;groupId&gt;mysql&lt;/groupId&gt;
     &lt;artifactId&gt;mysql-connector-java&lt;/artifactId&gt;
     &lt;version&gt;5.1.31&lt;/version&gt;
@@ -326,7 +326,7 @@
 </code></pre></div>
 <p>You can generate a single jar with dependencies using the Maven assembly plugin. To use the plugin, add the following to your pom.xml and execute 
 <code>mvn clean compile assembly:single</code></p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">&lt;plugin&gt;
+<div class="highlight"><pre><code class="language-" data-lang="">&lt;plugin&gt;
     &lt;artifactId&gt;maven-assembly-plugin&lt;/artifactId&gt;
     &lt;configuration&gt;
         &lt;archive&gt;
@@ -346,7 +346,7 @@
 </code></p>
 
 <p>You can execute a select query against the user table, which should show the newly inserted rows:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">select * from user;
+<div class="highlight"><pre><code class="language-" data-lang="">select * from user;
 </code></pre></div>
 <p>For Trident, see <code>org.apache.storm.jdbc.topology.UserPersistanceTridentTopology</code>.</p>
 
diff --git a/_site/documentation/storm-kafka.html b/_site/documentation/storm-kafka.html
index cda7ed7..3fa9124 100644
--- a/_site/documentation/storm-kafka.html
+++ b/_site/documentation/storm-kafka.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -120,14 +120,14 @@
 
 <p>This is an alternative implementation where broker -&gt; partition information is static. In order to construct an instance
 of this class, you need to first construct an instance of GlobalPartitionInformation.</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java">    <span class="n">Broker</span> <span class="n">brokerForPartition0</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">Broker</span><span class="o">(</span><span class="s">&quot;localhost&quot;</span><span class="o">);</span><span class="c1">//localhost:9092</span>
-    <span class="n">Broker</span> <span class="n">brokerForPartition1</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">Broker</span><span class="o">(</span><span class="s">&quot;localhost&quot;</span><span class="o">,</span> <span class="mi">9092</span><span class="o">);</span><span class="c1">//localhost:9092 but we specified the port explicitly</span>
-    <span class="n">Broker</span> <span class="n">brokerForPartition2</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">Broker</span><span class="o">(</span><span class="s">&quot;localhost:9092&quot;</span><span class="o">);</span><span class="c1">//localhost:9092 specified as one string.</span>
-    <span class="n">GlobalPartitionInformation</span> <span class="n">partitionInfo</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">GlobalPartitionInformation</span><span class="o">();</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java">    <span class="n">Broker</span> <span class="n">brokerForPartition0</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Broker</span><span class="o">(</span><span class="s">"localhost"</span><span class="o">);</span><span class="c1">//localhost:9092</span>
+    <span class="n">Broker</span> <span class="n">brokerForPartition1</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Broker</span><span class="o">(</span><span class="s">"localhost"</span><span class="o">,</span> <span class="mi">9092</span><span class="o">);</span><span class="c1">//localhost:9092 but we specified the port explicitly</span>
+    <span class="n">Broker</span> <span class="n">brokerForPartition2</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Broker</span><span class="o">(</span><span class="s">"localhost:9092"</span><span class="o">);</span><span class="c1">//localhost:9092 specified as one string.</span>
+    <span class="n">GlobalPartitionInformation</span> <span class="n">partitionInfo</span> <span class="o">=</span> <span class="k">new</span> <span class="n">GlobalPartitionInformation</span><span class="o">();</span>
     <span class="n">partitionInfo</span><span class="o">.</span><span class="na">addPartition</span><span class="o">(</span><span class="mi">0</span><span class="o">,</span> <span class="n">brokerForPartition0</span><span class="o">);</span><span class="c1">//mapping from partition 0 to brokerForPartition0</span>
     <span class="n">partitionInfo</span><span class="o">.</span><span class="na">addPartition</span><span class="o">(</span><span class="mi">1</span><span class="o">,</span> <span class="n">brokerForPartition1</span><span class="o">);</span><span class="c1">//mapping from partition 1 to brokerForPartition1</span>
     <span class="n">partitionInfo</span><span class="o">.</span><span class="na">addPartition</span><span class="o">(</span><span class="mi">2</span><span class="o">,</span> <span class="n">brokerForPartition2</span><span class="o">);</span><span class="c1">//mapping from partition 2 to brokerForPartition2</span>
-    <span class="n">StaticHosts</span> <span class="n">hosts</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">StaticHosts</span><span class="o">(</span><span class="n">partitionInfo</span><span class="o">);</span>
+    <span class="n">StaticHosts</span> <span class="n">hosts</span> <span class="o">=</span> <span class="k">new</span> <span class="n">StaticHosts</span><span class="o">(</span><span class="n">partitionInfo</span><span class="o">);</span>
 </code></pre></div>
 <h3 id="kafkaconfig">KafkaConfig</h3>
 
@@ -153,7 +153,7 @@
 ```java
     // setting for how often to save the current Kafka offset to ZooKeeper
     public long stateUpdateIntervalMs = 2000;</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">// Exponential back-off retry settings.  These are used when retrying messages after a bolt
+<div class="highlight"><pre><code class="language-" data-lang="">// Exponential back-off retry settings.  These are used when retrying messages after a bolt
 // calls OutputCollector.fail().
 // Note: be sure to set backtype.storm.Config.MESSAGE_TIMEOUT_SECS appropriately to prevent
 // resubmitting the message while still retrying.
@@ -163,12 +163,12 @@
 
 // if set to true, spout will set Kafka topic as the emitted Stream ID
 public boolean topicAsStreamId = false;
-</code></pre></div><div class="highlight"><pre><code class="language-text" data-lang="text">Core KafkaSpout only accepts an instance of SpoutConfig.
+</code></pre></div><div class="highlight"><pre><code class="language-" data-lang="">Core KafkaSpout only accepts an instance of SpoutConfig.
 
 TridentKafkaConfig is another extension of KafkaConfig.
 TridentKafkaEmitter only accepts TridentKafkaConfig.
 
-The KafkaConfig class also has bunch of public variables that controls your application&#39;s behavior. Here are defaults:
+The KafkaConfig class also has a bunch of public variables that control your application's behavior. Here are the defaults:
 ```java
     public int fetchSizeBytes = 1024 * 1024;
     public int socketTimeoutMs = 10000;
@@ -187,8 +187,8 @@
 
 <p>MultiScheme is an interface that dictates how the byte[] consumed from Kafka gets transformed into a Storm tuple. It
 also controls the naming of your output fields.</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java">  <span class="kd">public</span> <span class="n">Iterable</span><span class="o">&lt;</span><span class="n">List</span><span class="o">&lt;</span><span class="n">Object</span><span class="o">&gt;&gt;</span> <span class="nf">deserialize</span><span class="o">(</span><span class="kt">byte</span><span class="o">[]</span> <span class="n">ser</span><span class="o">);</span>
-  <span class="kd">public</span> <span class="n">Fields</span> <span class="nf">getOutputFields</span><span class="o">();</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java">  <span class="kd">public</span> <span class="n">Iterable</span><span class="o">&lt;</span><span class="n">List</span><span class="o">&lt;</span><span class="n">Object</span><span class="o">&gt;&gt;</span> <span class="n">deserialize</span><span class="o">(</span><span class="kt">byte</span><span class="o">[]</span> <span class="n">ser</span><span class="o">);</span>
+  <span class="kd">public</span> <span class="n">Fields</span> <span class="n">getOutputFields</span><span class="o">();</span>
 </code></pre></div>
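To illustrate the contract above, here is a stand-alone sketch of a scheme that decodes each message as a UTF-8 string, roughly what wrapping `StringScheme` in `SchemeAsMultiScheme` gives you. Storm's `Fields` return type is replaced by a plain list of names so the sketch runs without Storm on the classpath; the `"str"` field name and class name are illustrative.

```java
import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.List;

// Stand-alone sketch of the MultiScheme contract: turn a raw Kafka
// payload into zero or more tuples, and name the output fields.
public class Utf8StringScheme {
    // One emitted tuple containing a single value: the payload decoded as UTF-8 text.
    public Iterable<List<Object>> deserialize(byte[] ser) {
        List<Object> tuple = Collections.singletonList((Object) new String(ser, StandardCharsets.UTF_8));
        return Collections.singletonList(tuple);
    }

    // Storm would return a Fields instance here; a plain list of
    // field names stands in for it in this sketch.
    public List<String> getOutputFields() {
        return Collections.singletonList("str");
    }

    public static void main(String[] args) {
        Utf8StringScheme scheme = new Utf8StringScheme();
        byte[] payload = "hello kafka".getBytes(StandardCharsets.UTF_8);
        for (List<Object> tuple : scheme.deserialize(payload)) {
            System.out.println(scheme.getOutputFields() + " -> " + tuple);
        }
    }
}
```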
 <p>The default <code>RawMultiScheme</code> just takes the <code>byte[]</code> and returns a tuple with <code>byte[]</code> as is. The name of the
 outputField is &quot;bytes&quot;. There are alternative implementations like <code>SchemeAsMultiScheme</code> and
@@ -197,17 +197,17 @@
 <h3 id="examples">Examples</h3>
 
 <h4 id="core-spout">Core Spout</h4>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">BrokerHosts</span> <span class="n">hosts</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">ZkHosts</span><span class="o">(</span><span class="n">zkConnString</span><span class="o">);</span>
-<span class="n">SpoutConfig</span> <span class="n">spoutConfig</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">SpoutConfig</span><span class="o">(</span><span class="n">hosts</span><span class="o">,</span> <span class="n">topicName</span><span class="o">,</span> <span class="s">&quot;/&quot;</span> <span class="o">+</span> <span class="n">topicName</span><span class="o">,</span> <span class="n">UUID</span><span class="o">.</span><span class="na">randomUUID</span><span class="o">().</span><span class="na">toString</span><span class="o">());</span>
-<span class="n">spoutConfig</span><span class="o">.</span><span class="na">scheme</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">SchemeAsMultiScheme</span><span class="o">(</span><span class="k">new</span> <span class="nf">StringScheme</span><span class="o">());</span>
-<span class="n">KafkaSpout</span> <span class="n">kafkaSpout</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">KafkaSpout</span><span class="o">(</span><span class="n">spoutConfig</span><span class="o">);</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">BrokerHosts</span> <span class="n">hosts</span> <span class="o">=</span> <span class="k">new</span> <span class="n">ZkHosts</span><span class="o">(</span><span class="n">zkConnString</span><span class="o">);</span>
+<span class="n">SpoutConfig</span> <span class="n">spoutConfig</span> <span class="o">=</span> <span class="k">new</span> <span class="n">SpoutConfig</span><span class="o">(</span><span class="n">hosts</span><span class="o">,</span> <span class="n">topicName</span><span class="o">,</span> <span class="s">"/"</span> <span class="o">+</span> <span class="n">topicName</span><span class="o">,</span> <span class="n">UUID</span><span class="o">.</span><span class="na">randomUUID</span><span class="o">().</span><span class="na">toString</span><span class="o">());</span>
+<span class="n">spoutConfig</span><span class="o">.</span><span class="na">scheme</span> <span class="o">=</span> <span class="k">new</span> <span class="n">SchemeAsMultiScheme</span><span class="o">(</span><span class="k">new</span> <span class="n">StringScheme</span><span class="o">());</span>
+<span class="n">KafkaSpout</span> <span class="n">kafkaSpout</span> <span class="o">=</span> <span class="k">new</span> <span class="n">KafkaSpout</span><span class="o">(</span><span class="n">spoutConfig</span><span class="o">);</span>
 </code></pre></div>
 <h4 id="trident-spout">Trident Spout</h4>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">TridentTopology</span> <span class="n">topology</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">TridentTopology</span><span class="o">();</span>
-<span class="n">BrokerHosts</span> <span class="n">zk</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">ZkHosts</span><span class="o">(</span><span class="s">&quot;localhost&quot;</span><span class="o">);</span>
-<span class="n">TridentKafkaConfig</span> <span class="n">spoutConf</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">TridentKafkaConfig</span><span class="o">(</span><span class="n">zk</span><span class="o">,</span> <span class="s">&quot;test-topic&quot;</span><span class="o">);</span>
-<span class="n">spoutConf</span><span class="o">.</span><span class="na">scheme</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">SchemeAsMultiScheme</span><span class="o">(</span><span class="k">new</span> <span class="nf">StringScheme</span><span class="o">());</span>
-<span class="n">OpaqueTridentKafkaSpout</span> <span class="n">spout</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">OpaqueTridentKafkaSpout</span><span class="o">(</span><span class="n">spoutConf</span><span class="o">);</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">TridentTopology</span> <span class="n">topology</span> <span class="o">=</span> <span class="k">new</span> <span class="n">TridentTopology</span><span class="o">();</span>
+<span class="n">BrokerHosts</span> <span class="n">zk</span> <span class="o">=</span> <span class="k">new</span> <span class="n">ZkHosts</span><span class="o">(</span><span class="s">"localhost"</span><span class="o">);</span>
+<span class="n">TridentKafkaConfig</span> <span class="n">spoutConf</span> <span class="o">=</span> <span class="k">new</span> <span class="n">TridentKafkaConfig</span><span class="o">(</span><span class="n">zk</span><span class="o">,</span> <span class="s">"test-topic"</span><span class="o">);</span>
+<span class="n">spoutConf</span><span class="o">.</span><span class="na">scheme</span> <span class="o">=</span> <span class="k">new</span> <span class="n">SchemeAsMultiScheme</span><span class="o">(</span><span class="k">new</span> <span class="n">StringScheme</span><span class="o">());</span>
+<span class="n">OpaqueTridentKafkaSpout</span> <span class="n">spout</span> <span class="o">=</span> <span class="k">new</span> <span class="n">OpaqueTridentKafkaSpout</span><span class="o">(</span><span class="n">spoutConf</span><span class="o">);</span>
 </code></pre></div>
 <h3 id="how-kafkaspout-stores-offsets-of-a-kafka-topic-and-recovers-in-case-of-failures">How KafkaSpout stores offsets of a Kafka topic and recovers in case of failures</h3>
 
@@ -274,8 +274,8 @@
 <h3 id="tupletokafkamapper-and-tridenttupletokafkamapper">TupleToKafkaMapper and TridentTupleToKafkaMapper</h3>
 
 <p>These interfaces define two methods:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java">    <span class="n">K</span> <span class="nf">getKeyFromTuple</span><span class="o">(</span><span class="n">Tuple</span><span class="o">/</span><span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">);</span>
-    <span class="n">V</span> <span class="nf">getMessageFromTuple</span><span class="o">(</span><span class="n">Tuple</span><span class="o">/</span><span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">);</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java">    <span class="n">K</span> <span class="nf">getKeyFromTuple</span><span class="p">(</span><span class="n">Tuple</span><span class="o">/</span><span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">);</span>
+    <span class="n">V</span> <span class="n">getMessageFromTuple</span><span class="o">(</span><span class="n">Tuple</span><span class="o">/</span><span class="n">TridentTuple</span> <span class="n">tuple</span><span class="o">);</span>
 </code></pre></div>
 <p>As the name suggests, these methods map a tuple to a Kafka key and a Kafka message. If you just want one field
 as the key and one field as the value, you can use the provided FieldNameBasedTupleToKafkaMapper.java 
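As a stand-alone illustration of the field-name-based approach, the sketch below picks the key and message out of a tuple by field name, with the tuple simplified to a `Map<String, Object>` so it runs without Storm on the classpath. The real `FieldNameBasedTupleToKafkaMapper` works against Storm's `Tuple`/`TridentTuple` instead; the class name here is hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of a field-name-based tuple -> Kafka mapper.
// A Map stands in for Storm's Tuple/TridentTuple.
public class FieldNameMapperSketch {
    private final String keyField;
    private final String messageField;

    public FieldNameMapperSketch(String keyField, String messageField) {
        this.keyField = keyField;
        this.messageField = messageField;
    }

    // Value of the configured key field becomes the Kafka message key.
    public Object getKeyFromTuple(Map<String, Object> tuple) {
        return tuple.get(keyField);
    }

    // Value of the configured message field becomes the Kafka message body.
    public Object getMessageFromTuple(Map<String, Object> tuple) {
        return tuple.get(messageField);
    }

    public static void main(String[] args) {
        // "key" and "message" match the field names used in the bolt example below.
        FieldNameMapperSketch mapper = new FieldNameMapperSketch("key", "message");
        Map<String, Object> tuple = new HashMap<>();
        tuple.put("key", "storm");
        tuple.put("message", "1");
        System.out.println(mapper.getKeyFromTuple(tuple) + " -> " + mapper.getMessageFromTuple(tuple));
    }
}
```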
@@ -308,57 +308,58 @@
 <p>For the bolt :
 ```java
         TopologyBuilder builder = new TopologyBuilder();</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">    Fields fields = new Fields(&quot;key&quot;, &quot;message&quot;);
+<div class="highlight"><pre><code class="language-" data-lang="">    Fields fields = new Fields("key", "message");
     FixedBatchSpout spout = new FixedBatchSpout(fields, 4,
-                new Values(&quot;storm&quot;, &quot;1&quot;),
-                new Values(&quot;trident&quot;, &quot;1&quot;),
-                new Values(&quot;needs&quot;, &quot;1&quot;),
-                new Values(&quot;javadoc&quot;, &quot;1&quot;)
+                new Values("storm", "1"),
+                new Values("trident", "1"),
+                new Values("needs", "1"),
+                new Values("javadoc", "1")
     );
     spout.setCycle(true);
-    builder.setSpout(&quot;spout&quot;, spout, 5);
+    builder.setSpout("spout", spout, 5);
     KafkaBolt bolt = new KafkaBolt()
-            .withTopicSelector(new DefaultTopicSelector(&quot;test&quot;))
+            .withTopicSelector(new DefaultTopicSelector("test"))
             .withTupleToKafkaMapper(new FieldNameBasedTupleToKafkaMapper());
-    builder.setBolt(&quot;forwardToKafka&quot;, bolt, 8).shuffleGrouping(&quot;spout&quot;);
+    builder.setBolt("forwardToKafka", bolt, 8).shuffleGrouping("spout");
 
     Config conf = new Config();
     //set producer properties.
     Properties props = new Properties();
-    props.put(&quot;metadata.broker.list&quot;, &quot;localhost:9092&quot;);
-    props.put(&quot;request.required.acks&quot;, &quot;1&quot;);
-    props.put(&quot;serializer.class&quot;, &quot;kafka.serializer.StringEncoder&quot;);
+    props.put("metadata.broker.list", "localhost:9092");
+    props.put("request.required.acks", "1");
+    props.put("serializer.class", "kafka.serializer.StringEncoder");
     conf.put(KafkaBolt.KAFKA_BROKER_PROPERTIES, props);
 
-    StormSubmitter.submitTopology(&quot;kafkaboltTest&quot;, conf, builder.createTopology());
-</code></pre></div><div class="highlight"><pre><code class="language-text" data-lang="text">For Trident:
+    StormSubmitter.submitTopology("kafkaboltTest", conf, builder.createTopology());
+</code></pre></div><div class="highlight"><pre><code class="language-" data-lang="">
+For Trident:
 
 ```java
-        Fields fields = new Fields(&quot;word&quot;, &quot;count&quot;);
+        Fields fields = new Fields("word", "count");
         FixedBatchSpout spout = new FixedBatchSpout(fields, 4,
-                new Values(&quot;storm&quot;, &quot;1&quot;),
-                new Values(&quot;trident&quot;, &quot;1&quot;),
-                new Values(&quot;needs&quot;, &quot;1&quot;),
-                new Values(&quot;javadoc&quot;, &quot;1&quot;)
+                new Values("storm", "1"),
+                new Values("trident", "1"),
+                new Values("needs", "1"),
+                new Values("javadoc", "1")
         );
         spout.setCycle(true);
 
         TridentTopology topology = new TridentTopology();
-        Stream stream = topology.newStream(&quot;spout1&quot;, spout);
+        Stream stream = topology.newStream("spout1", spout);
 
         TridentKafkaStateFactory stateFactory = new TridentKafkaStateFactory()
-                .withKafkaTopicSelector(new DefaultTopicSelector(&quot;test&quot;))
-                .withTridentTupleToKafkaMapper(new FieldNameBasedTupleToKafkaMapper(&quot;word&quot;, &quot;count&quot;));
+                .withKafkaTopicSelector(new DefaultTopicSelector("test"))
+                .withTridentTupleToKafkaMapper(new FieldNameBasedTupleToKafkaMapper("word", "count"));
         stream.partitionPersist(stateFactory, fields, new TridentKafkaUpdater(), new Fields());
 
         Config conf = new Config();
         //set producer properties.
         Properties props = new Properties();
-        props.put(&quot;metadata.broker.list&quot;, &quot;localhost:9092&quot;);
-        props.put(&quot;request.required.acks&quot;, &quot;1&quot;);
-        props.put(&quot;serializer.class&quot;, &quot;kafka.serializer.StringEncoder&quot;);
+        props.put("metadata.broker.list", "localhost:9092");
+        props.put("request.required.acks", "1");
+        props.put("serializer.class", "kafka.serializer.StringEncoder");
         conf.put(TridentKafkaState.KAFKA_BROKER_PROPERTIES, props);
-        StormSubmitter.submitTopology(&quot;kafkaTridentTest&quot;, conf, topology.build());
+        StormSubmitter.submitTopology("kafkaTridentTest", conf, topology.build());
 </code></pre></div>
 
 
diff --git a/_site/documentation/storm-redis.html b/_site/documentation/storm-redis.html
index 4dffd6b..4a2dc97 100644
--- a/_site/documentation/storm-redis.html
+++ b/_site/documentation/storm-redis.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -96,7 +96,7 @@
 
 <h2 id="usage">Usage</h2>
 
-<h3 id="how-do-i-use-it?">How do I use it?</h3>
+<h3 id="how-do-i-use-it">How do I use it?</h3>
 
 <p>Use it as a Maven dependency:</p>
 <div class="highlight"><pre><code class="language-xml" data-lang="xml"><span class="nt">&lt;dependency&gt;</span>
@@ -117,111 +117,117 @@
 <p>These interfaces are combined with <code>RedisLookupMapper</code> and <code>RedisStoreMapper</code> which fit <code>RedisLookupBolt</code> and <code>RedisStoreBolt</code> respectively.</p>
 
 <h4 id="redislookupbolt-example">RedisLookupBolt example</h4>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">class</span> <span class="nc">WordCountRedisLookupMapper</span> <span class="kd">implements</span> <span class="n">RedisLookupMapper</span> <span class="o">{</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java">
+<span class="kd">class</span> <span class="nc">WordCountRedisLookupMapper</span> <span class="kd">implements</span> <span class="n">RedisLookupMapper</span> <span class="o">{</span>
     <span class="kd">private</span> <span class="n">RedisDataTypeDescription</span> <span class="n">description</span><span class="o">;</span>
-    <span class="kd">private</span> <span class="kd">final</span> <span class="n">String</span> <span class="n">hashKey</span> <span class="o">=</span> <span class="s">&quot;wordCount&quot;</span><span class="o">;</span>
+    <span class="kd">private</span> <span class="kd">final</span> <span class="n">String</span> <span class="n">hashKey</span> <span class="o">=</span> <span class="s">"wordCount"</span><span class="o">;</span>
 
-    <span class="kd">public</span> <span class="nf">WordCountRedisLookupMapper</span><span class="o">()</span> <span class="o">{</span>
-        <span class="n">description</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">RedisDataTypeDescription</span><span class="o">(</span>
+    <span class="kd">public</span> <span class="n">WordCountRedisLookupMapper</span><span class="o">()</span> <span class="o">{</span>
+        <span class="n">description</span> <span class="o">=</span> <span class="k">new</span> <span class="n">RedisDataTypeDescription</span><span class="o">(</span>
                 <span class="n">RedisDataTypeDescription</span><span class="o">.</span><span class="na">RedisDataType</span><span class="o">.</span><span class="na">HASH</span><span class="o">,</span> <span class="n">hashKey</span><span class="o">);</span>
     <span class="o">}</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="n">List</span><span class="o">&lt;</span><span class="n">Values</span><span class="o">&gt;</span> <span class="nf">toTuple</span><span class="o">(</span><span class="n">ITuple</span> <span class="n">input</span><span class="o">,</span> <span class="n">Object</span> <span class="n">value</span><span class="o">)</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="n">List</span><span class="o">&lt;</span><span class="n">Values</span><span class="o">&gt;</span> <span class="n">toTuple</span><span class="o">(</span><span class="n">ITuple</span> <span class="n">input</span><span class="o">,</span> <span class="n">Object</span> <span class="n">value</span><span class="o">)</span> <span class="o">{</span>
         <span class="n">String</span> <span class="n">member</span> <span class="o">=</span> <span class="n">getKeyFromTuple</span><span class="o">(</span><span class="n">input</span><span class="o">);</span>
         <span class="n">List</span><span class="o">&lt;</span><span class="n">Values</span><span class="o">&gt;</span> <span class="n">values</span> <span class="o">=</span> <span class="n">Lists</span><span class="o">.</span><span class="na">newArrayList</span><span class="o">();</span>
-        <span class="n">values</span><span class="o">.</span><span class="na">add</span><span class="o">(</span><span class="k">new</span> <span class="nf">Values</span><span class="o">(</span><span class="n">member</span><span class="o">,</span> <span class="n">value</span><span class="o">));</span>
+        <span class="n">values</span><span class="o">.</span><span class="na">add</span><span class="o">(</span><span class="k">new</span> <span class="n">Values</span><span class="o">(</span><span class="n">member</span><span class="o">,</span> <span class="n">value</span><span class="o">));</span>
         <span class="k">return</span> <span class="n">values</span><span class="o">;</span>
     <span class="o">}</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">declareOutputFields</span><span class="o">(</span><span class="n">OutputFieldsDeclarer</span> <span class="n">declarer</span><span class="o">)</span> <span class="o">{</span>
-        <span class="n">declarer</span><span class="o">.</span><span class="na">declare</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;wordName&quot;</span><span class="o">,</span> <span class="s">&quot;count&quot;</span><span class="o">));</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">declareOutputFields</span><span class="o">(</span><span class="n">OutputFieldsDeclarer</span> <span class="n">declarer</span><span class="o">)</span> <span class="o">{</span>
+        <span class="n">declarer</span><span class="o">.</span><span class="na">declare</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"wordName"</span><span class="o">,</span> <span class="s">"count"</span><span class="o">));</span>
     <span class="o">}</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="n">RedisDataTypeDescription</span> <span class="nf">getDataTypeDescription</span><span class="o">()</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="n">RedisDataTypeDescription</span> <span class="n">getDataTypeDescription</span><span class="o">()</span> <span class="o">{</span>
         <span class="k">return</span> <span class="n">description</span><span class="o">;</span>
     <span class="o">}</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="n">String</span> <span class="nf">getKeyFromTuple</span><span class="o">(</span><span class="n">ITuple</span> <span class="n">tuple</span><span class="o">)</span> <span class="o">{</span>
-        <span class="k">return</span> <span class="n">tuple</span><span class="o">.</span><span class="na">getStringByField</span><span class="o">(</span><span class="s">&quot;word&quot;</span><span class="o">);</span>
+    <span class="kd">public</span> <span class="n">String</span> <span class="n">getKeyFromTuple</span><span class="o">(</span><span class="n">ITuple</span> <span class="n">tuple</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="n">tuple</span><span class="o">.</span><span class="na">getStringByField</span><span class="o">(</span><span class="s">"word"</span><span class="o">);</span>
     <span class="o">}</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="n">String</span> <span class="nf">getValueFromTuple</span><span class="o">(</span><span class="n">ITuple</span> <span class="n">tuple</span><span class="o">)</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="n">String</span> <span class="n">getValueFromTuple</span><span class="o">(</span><span class="n">ITuple</span> <span class="n">tuple</span><span class="o">)</span> <span class="o">{</span>
         <span class="k">return</span> <span class="kc">null</span><span class="o">;</span>
     <span class="o">}</span>
 <span class="o">}</span>
-</code></pre></div><div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">JedisPoolConfig</span> <span class="n">poolConfig</span> <span class="o">=</span> <span class="k">new</span> <span class="n">JedisPoolConfig</span><span class="o">.</span><span class="na">Builder</span><span class="o">()</span>
+
+</code></pre></div><div class="highlight"><pre><code class="language-java" data-lang="java">
+<span class="n">JedisPoolConfig</span> <span class="n">poolConfig</span> <span class="o">=</span> <span class="k">new</span> <span class="n">JedisPoolConfig</span><span class="o">.</span><span class="na">Builder</span><span class="o">()</span>
         <span class="o">.</span><span class="na">setHost</span><span class="o">(</span><span class="n">host</span><span class="o">).</span><span class="na">setPort</span><span class="o">(</span><span class="n">port</span><span class="o">).</span><span class="na">build</span><span class="o">();</span>
-<span class="n">RedisLookupMapper</span> <span class="n">lookupMapper</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">WordCountRedisLookupMapper</span><span class="o">();</span>
-<span class="n">RedisLookupBolt</span> <span class="n">lookupBolt</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">RedisLookupBolt</span><span class="o">(</span><span class="n">poolConfig</span><span class="o">,</span> <span class="n">lookupMapper</span><span class="o">);</span>
+<span class="n">RedisLookupMapper</span> <span class="n">lookupMapper</span> <span class="o">=</span> <span class="k">new</span> <span class="n">WordCountRedisLookupMapper</span><span class="o">();</span>
+<span class="n">RedisLookupBolt</span> <span class="n">lookupBolt</span> <span class="o">=</span> <span class="k">new</span> <span class="n">RedisLookupBolt</span><span class="o">(</span><span class="n">poolConfig</span><span class="o">,</span> <span class="n">lookupMapper</span><span class="o">);</span>
 </code></pre></div>
 <h4 id="redisstorebolt-example">RedisStoreBolt example</h4>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">class</span> <span class="nc">WordCountStoreMapper</span> <span class="kd">implements</span> <span class="n">RedisStoreMapper</span> <span class="o">{</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java">
+<span class="kd">class</span> <span class="nc">WordCountStoreMapper</span> <span class="kd">implements</span> <span class="n">RedisStoreMapper</span> <span class="o">{</span>
     <span class="kd">private</span> <span class="n">RedisDataTypeDescription</span> <span class="n">description</span><span class="o">;</span>
-    <span class="kd">private</span> <span class="kd">final</span> <span class="n">String</span> <span class="n">hashKey</span> <span class="o">=</span> <span class="s">&quot;wordCount&quot;</span><span class="o">;</span>
+    <span class="kd">private</span> <span class="kd">final</span> <span class="n">String</span> <span class="n">hashKey</span> <span class="o">=</span> <span class="s">"wordCount"</span><span class="o">;</span>
 
-    <span class="kd">public</span> <span class="nf">WordCountStoreMapper</span><span class="o">()</span> <span class="o">{</span>
-        <span class="n">description</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">RedisDataTypeDescription</span><span class="o">(</span>
+    <span class="kd">public</span> <span class="n">WordCountStoreMapper</span><span class="o">()</span> <span class="o">{</span>
+        <span class="n">description</span> <span class="o">=</span> <span class="k">new</span> <span class="n">RedisDataTypeDescription</span><span class="o">(</span>
             <span class="n">RedisDataTypeDescription</span><span class="o">.</span><span class="na">RedisDataType</span><span class="o">.</span><span class="na">HASH</span><span class="o">,</span> <span class="n">hashKey</span><span class="o">);</span>
     <span class="o">}</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="n">RedisDataTypeDescription</span> <span class="nf">getDataTypeDescription</span><span class="o">()</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="n">RedisDataTypeDescription</span> <span class="n">getDataTypeDescription</span><span class="o">()</span> <span class="o">{</span>
         <span class="k">return</span> <span class="n">description</span><span class="o">;</span>
     <span class="o">}</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="n">String</span> <span class="nf">getKeyFromTuple</span><span class="o">(</span><span class="n">ITuple</span> <span class="n">tuple</span><span class="o">)</span> <span class="o">{</span>
-        <span class="k">return</span> <span class="n">tuple</span><span class="o">.</span><span class="na">getStringByField</span><span class="o">(</span><span class="s">&quot;word&quot;</span><span class="o">);</span>
+    <span class="kd">public</span> <span class="n">String</span> <span class="n">getKeyFromTuple</span><span class="o">(</span><span class="n">ITuple</span> <span class="n">tuple</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="n">tuple</span><span class="o">.</span><span class="na">getStringByField</span><span class="o">(</span><span class="s">"word"</span><span class="o">);</span>
     <span class="o">}</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="n">String</span> <span class="nf">getValueFromTuple</span><span class="o">(</span><span class="n">ITuple</span> <span class="n">tuple</span><span class="o">)</span> <span class="o">{</span>
-        <span class="k">return</span> <span class="n">tuple</span><span class="o">.</span><span class="na">getStringByField</span><span class="o">(</span><span class="s">&quot;count&quot;</span><span class="o">);</span>
+    <span class="kd">public</span> <span class="n">String</span> <span class="n">getValueFromTuple</span><span class="o">(</span><span class="n">ITuple</span> <span class="n">tuple</span><span class="o">)</span> <span class="o">{</span>
+        <span class="k">return</span> <span class="n">tuple</span><span class="o">.</span><span class="na">getStringByField</span><span class="o">(</span><span class="s">"count"</span><span class="o">);</span>
     <span class="o">}</span>
 <span class="o">}</span>
-</code></pre></div><div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">JedisPoolConfig</span> <span class="n">poolConfig</span> <span class="o">=</span> <span class="k">new</span> <span class="n">JedisPoolConfig</span><span class="o">.</span><span class="na">Builder</span><span class="o">()</span>
+</code></pre></div><div class="highlight"><pre><code class="language-java" data-lang="java">
+<span class="n">JedisPoolConfig</span> <span class="n">poolConfig</span> <span class="o">=</span> <span class="k">new</span> <span class="n">JedisPoolConfig</span><span class="o">.</span><span class="na">Builder</span><span class="o">()</span>
                 <span class="o">.</span><span class="na">setHost</span><span class="o">(</span><span class="n">host</span><span class="o">).</span><span class="na">setPort</span><span class="o">(</span><span class="n">port</span><span class="o">).</span><span class="na">build</span><span class="o">();</span>
-<span class="n">RedisStoreMapper</span> <span class="n">storeMapper</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">WordCountStoreMapper</span><span class="o">();</span>
-<span class="n">RedisStoreBolt</span> <span class="n">storeBolt</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">RedisStoreBolt</span><span class="o">(</span><span class="n">poolConfig</span><span class="o">,</span> <span class="n">storeMapper</span><span class="o">);</span>
+<span class="n">RedisStoreMapper</span> <span class="n">storeMapper</span> <span class="o">=</span> <span class="k">new</span> <span class="n">WordCountStoreMapper</span><span class="o">();</span>
+<span class="n">RedisStoreBolt</span> <span class="n">storeBolt</span> <span class="o">=</span> <span class="k">new</span> <span class="n">RedisStoreBolt</span><span class="o">(</span><span class="n">poolConfig</span><span class="o">,</span> <span class="n">storeMapper</span><span class="o">);</span>
 </code></pre></div>
 <h3 id="for-non-simple-bolt">For non-simple Bolt</h3>
 
 <p>If your scenario doesn&#39;t fit <code>RedisStoreBolt</code> and <code>RedisLookupBolt</code>, storm-redis also provides <code>AbstractRedisBolt</code> to let you extend and apply your business logic.</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java">    <span class="kd">public</span> <span class="kd">static</span> <span class="kd">class</span> <span class="nc">LookupWordTotalCountBolt</span> <span class="kd">extends</span> <span class="n">AbstractRedisBolt</span> <span class="o">{</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java">
+    <span class="kd">public</span> <span class="kd">static</span> <span class="kd">class</span> <span class="nc">LookupWordTotalCountBolt</span> <span class="kd">extends</span> <span class="n">AbstractRedisBolt</span> <span class="o">{</span>
         <span class="kd">private</span> <span class="kd">static</span> <span class="kd">final</span> <span class="n">Logger</span> <span class="n">LOG</span> <span class="o">=</span> <span class="n">LoggerFactory</span><span class="o">.</span><span class="na">getLogger</span><span class="o">(</span><span class="n">LookupWordTotalCountBolt</span><span class="o">.</span><span class="na">class</span><span class="o">);</span>
-        <span class="kd">private</span> <span class="kd">static</span> <span class="kd">final</span> <span class="n">Random</span> <span class="n">RANDOM</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">Random</span><span class="o">();</span>
+        <span class="kd">private</span> <span class="kd">static</span> <span class="kd">final</span> <span class="n">Random</span> <span class="n">RANDOM</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Random</span><span class="o">();</span>
 
-        <span class="kd">public</span> <span class="nf">LookupWordTotalCountBolt</span><span class="o">(</span><span class="n">JedisPoolConfig</span> <span class="n">config</span><span class="o">)</span> <span class="o">{</span>
+        <span class="kd">public</span> <span class="n">LookupWordTotalCountBolt</span><span class="o">(</span><span class="n">JedisPoolConfig</span> <span class="n">config</span><span class="o">)</span> <span class="o">{</span>
             <span class="kd">super</span><span class="o">(</span><span class="n">config</span><span class="o">);</span>
         <span class="o">}</span>
 
-        <span class="kd">public</span> <span class="nf">LookupWordTotalCountBolt</span><span class="o">(</span><span class="n">JedisClusterConfig</span> <span class="n">config</span><span class="o">)</span> <span class="o">{</span>
+        <span class="kd">public</span> <span class="n">LookupWordTotalCountBolt</span><span class="o">(</span><span class="n">JedisClusterConfig</span> <span class="n">config</span><span class="o">)</span> <span class="o">{</span>
             <span class="kd">super</span><span class="o">(</span><span class="n">config</span><span class="o">);</span>
         <span class="o">}</span>
 
         <span class="nd">@Override</span>
-        <span class="kd">public</span> <span class="kt">void</span> <span class="nf">execute</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">input</span><span class="o">)</span> <span class="o">{</span>
+        <span class="kd">public</span> <span class="kt">void</span> <span class="n">execute</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">input</span><span class="o">)</span> <span class="o">{</span>
             <span class="n">JedisCommands</span> <span class="n">jedisCommands</span> <span class="o">=</span> <span class="kc">null</span><span class="o">;</span>
             <span class="k">try</span> <span class="o">{</span>
                 <span class="n">jedisCommands</span> <span class="o">=</span> <span class="n">getInstance</span><span class="o">();</span>
-                <span class="n">String</span> <span class="n">wordName</span> <span class="o">=</span> <span class="n">input</span><span class="o">.</span><span class="na">getStringByField</span><span class="o">(</span><span class="s">&quot;word&quot;</span><span class="o">);</span>
+                <span class="n">String</span> <span class="n">wordName</span> <span class="o">=</span> <span class="n">input</span><span class="o">.</span><span class="na">getStringByField</span><span class="o">(</span><span class="s">"word"</span><span class="o">);</span>
                 <span class="n">String</span> <span class="n">countStr</span> <span class="o">=</span> <span class="n">jedisCommands</span><span class="o">.</span><span class="na">get</span><span class="o">(</span><span class="n">wordName</span><span class="o">);</span>
                 <span class="k">if</span> <span class="o">(</span><span class="n">countStr</span> <span class="o">!=</span> <span class="kc">null</span><span class="o">)</span> <span class="o">{</span>
                     <span class="kt">int</span> <span class="n">count</span> <span class="o">=</span> <span class="n">Integer</span><span class="o">.</span><span class="na">parseInt</span><span class="o">(</span><span class="n">countStr</span><span class="o">);</span>
-                    <span class="k">this</span><span class="o">.</span><span class="na">collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="k">new</span> <span class="nf">Values</span><span class="o">(</span><span class="n">wordName</span><span class="o">,</span> <span class="n">count</span><span class="o">));</span>
+                    <span class="k">this</span><span class="o">.</span><span class="na">collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="k">new</span> <span class="n">Values</span><span class="o">(</span><span class="n">wordName</span><span class="o">,</span> <span class="n">count</span><span class="o">));</span>
 
                     <span class="c1">// print lookup result with low probability</span>
                     <span class="k">if</span><span class="o">(</span><span class="n">RANDOM</span><span class="o">.</span><span class="na">nextInt</span><span class="o">(</span><span class="mi">1000</span><span class="o">)</span> <span class="o">&gt;</span> <span class="mi">995</span><span class="o">)</span> <span class="o">{</span>
-                        <span class="n">LOG</span><span class="o">.</span><span class="na">info</span><span class="o">(</span><span class="s">&quot;Lookup result - word : &quot;</span> <span class="o">+</span> <span class="n">wordName</span> <span class="o">+</span> <span class="s">&quot; / count : &quot;</span> <span class="o">+</span> <span class="n">count</span><span class="o">);</span>
+                        <span class="n">LOG</span><span class="o">.</span><span class="na">info</span><span class="o">(</span><span class="s">"Lookup result - word : "</span> <span class="o">+</span> <span class="n">wordName</span> <span class="o">+</span> <span class="s">" / count : "</span> <span class="o">+</span> <span class="n">count</span><span class="o">);</span>
                     <span class="o">}</span>
                 <span class="o">}</span> <span class="k">else</span> <span class="o">{</span>
                     <span class="c1">// skip</span>
-                    <span class="n">LOG</span><span class="o">.</span><span class="na">warn</span><span class="o">(</span><span class="s">&quot;Word not found in Redis - word : &quot;</span> <span class="o">+</span> <span class="n">wordName</span><span class="o">);</span>
+                    <span class="n">LOG</span><span class="o">.</span><span class="na">warn</span><span class="o">(</span><span class="s">"Word not found in Redis - word : "</span> <span class="o">+</span> <span class="n">wordName</span><span class="o">);</span>
                 <span class="o">}</span>
             <span class="o">}</span> <span class="k">finally</span> <span class="o">{</span>
                 <span class="k">if</span> <span class="o">(</span><span class="n">jedisCommands</span> <span class="o">!=</span> <span class="kc">null</span><span class="o">)</span> <span class="o">{</span>
@@ -232,11 +238,12 @@
         <span class="o">}</span>
 
         <span class="nd">@Override</span>
-        <span class="kd">public</span> <span class="kt">void</span> <span class="nf">declareOutputFields</span><span class="o">(</span><span class="n">OutputFieldsDeclarer</span> <span class="n">declarer</span><span class="o">)</span> <span class="o">{</span>
+        <span class="kd">public</span> <span class="kt">void</span> <span class="n">declareOutputFields</span><span class="o">(</span><span class="n">OutputFieldsDeclarer</span> <span class="n">declarer</span><span class="o">)</span> <span class="o">{</span>
             <span class="c1">// wordName, count</span>
-            <span class="n">declarer</span><span class="o">.</span><span class="na">declare</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;wordName&quot;</span><span class="o">,</span> <span class="s">&quot;count&quot;</span><span class="o">));</span>
+            <span class="n">declarer</span><span class="o">.</span><span class="na">declare</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"wordName"</span><span class="o">,</span> <span class="s">"count"</span><span class="o">));</span>
         <span class="o">}</span>
     <span class="o">}</span>
+
 </code></pre></div>
 <h3 id="trident-state-usage">Trident State usage</h3>
 
@@ -253,8 +260,8 @@
         RedisStoreMapper storeMapper = new WordCountStoreMapper();
         RedisLookupMapper lookupMapper = new WordCountLookupMapper();
         RedisState.Factory factory = new RedisState.Factory(poolConfig);</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">    TridentTopology topology = new TridentTopology();
-    Stream stream = topology.newStream(&quot;spout1&quot;, spout);
+<div class="highlight"><pre><code class="language-" data-lang="">    TridentTopology topology = new TridentTopology();
+    Stream stream = topology.newStream("spout1", spout);
 
     stream.partitionPersist(factory,
                             fields,
@@ -262,14 +269,15 @@
                             new Fields());
 
     TridentState state = topology.newStaticState(factory);
-    stream = stream.stateQuery(state, new Fields(&quot;word&quot;),
+    stream = stream.stateQuery(state, new Fields("word"),
                             new RedisStateQuerier(lookupMapper),
-                            new Fields(&quot;columnName&quot;,&quot;columnValue&quot;));
-</code></pre></div><div class="highlight"><pre><code class="language-text" data-lang="text">RedisClusterState
+                            new Fields("columnName","columnValue"));
+</code></pre></div><p>RedisClusterState:</p>
+<div class="highlight"><pre><code class="language-" data-lang="">
 ```java
         Set&lt;InetSocketAddress&gt; nodes = new HashSet&lt;InetSocketAddress&gt;();
-        for (String hostPort : redisHostPort.split(&quot;,&quot;)) {
-            String[] host_port = hostPort.split(&quot;:&quot;);
+        for (String hostPort : redisHostPort.split(",")) {
+            String[] host_port = hostPort.split(":");
             nodes.add(new InetSocketAddress(host_port[0], Integer.valueOf(host_port[1])));
         }
         JedisClusterConfig clusterConfig = new JedisClusterConfig.Builder().setNodes(nodes)
@@ -279,7 +287,7 @@
         RedisClusterState.Factory factory = new RedisClusterState.Factory(clusterConfig);
 
         TridentTopology topology = new TridentTopology();
-        Stream stream = topology.newStream(&quot;spout1&quot;, spout);
+        Stream stream = topology.newStream("spout1", spout);
 
         stream.partitionPersist(factory,
                                 fields,
@@ -287,9 +295,9 @@
                                 new Fields());
 
         TridentState state = topology.newStaticState(factory);
-        stream = stream.stateQuery(state, new Fields(&quot;word&quot;),
+        stream = stream.stateQuery(state, new Fields("word"),
                                 new RedisClusterStateQuerier(lookupMapper),
-                                new Fields(&quot;columnName&quot;,&quot;columnValue&quot;));
+                                new Fields("columnName","columnValue"));
 </code></pre></div>
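The node-address parsing at the top of the `RedisClusterState` snippet above can be checked in isolation. A minimal, self-contained sketch (the `redisHostPort` value is an invented example; `createUnresolved` is used instead of the `new InetSocketAddress(...)` in the snippet so the sketch runs without a DNS lookup):

```java
import java.net.InetSocketAddress;
import java.util.HashSet;
import java.util.Set;

public class ParseNodes {
    // Splits "host:port,host:port,..." into socket addresses,
    // mirroring the loop in the RedisClusterState example above.
    static Set<InetSocketAddress> parse(String redisHostPort) {
        Set<InetSocketAddress> nodes = new HashSet<InetSocketAddress>();
        for (String hostPort : redisHostPort.split(",")) {
            String[] hp = hostPort.split(":");
            nodes.add(InetSocketAddress.createUnresolved(hp[0], Integer.valueOf(hp[1])));
        }
        return nodes;
    }

    public static void main(String[] args) {
        Set<InetSocketAddress> nodes = parse("10.0.0.1:6379,10.0.0.2:6380");
        System.out.println(nodes.size());
    }
}
```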
 <h2 id="license">License</h2>
 
diff --git a/_site/documentation/storm-solr.html b/_site/documentation/storm-solr.html
index 8e5bca2..6d6418b 100644
--- a/_site/documentation/storm-solr.html
+++ b/_site/documentation/storm-solr.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -104,28 +104,28 @@
 describe in detail the two key components of the Storm Solr integration, the <code>SolrUpdateBolt</code>, and the <code>Mappers</code>, <code>SolrFieldsMapper</code>, and <code>SolrJsonMapper</code>.</p>
 
 <h2 id="storm-bolt-with-json-mapper-and-count-based-commit-strategy">Storm Bolt With JSON Mapper and Count Based Commit Strategy</h2>
-<div class="highlight"><pre><code class="language-java" data-lang="java">    <span class="k">new</span> <span class="nf">SolrUpdateBolt</span><span class="o">(</span><span class="n">solrConfig</span><span class="o">,</span> <span class="n">solrMapper</span><span class="o">,</span> <span class="n">solrCommitStgy</span><span class="o">)</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java">    <span class="k">new</span> <span class="nf">SolrUpdateBolt</span><span class="p">(</span><span class="n">solrConfig</span><span class="o">,</span> <span class="n">solrMapper</span><span class="o">,</span> <span class="n">solrCommitStgy</span><span class="o">)</span>
 
-    <span class="c1">// zkHostString for Solr &#39;gettingstarted&#39; example</span>
-    <span class="n">SolrConfig</span> <span class="n">solrConfig</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">SolrConfig</span><span class="o">(</span><span class="s">&quot;127.0.0.1:9983&quot;</span><span class="o">);</span>
+    <span class="c1">// zkHostString for Solr 'gettingstarted' example</span>
+    <span class="n">SolrConfig</span> <span class="n">solrConfig</span> <span class="o">=</span> <span class="k">new</span> <span class="n">SolrConfig</span><span class="o">(</span><span class="s">"127.0.0.1:9983"</span><span class="o">);</span>
 
-    <span class="c1">// JSON Mapper used to generate &#39;SolrRequest&#39; requests to update the &quot;gettingstarted&quot; Solr collection with JSON content declared the tuple field with name &quot;JSON&quot;</span>
-    <span class="n">SolrMapper</span> <span class="n">solrMapper</span> <span class="o">=</span> <span class="k">new</span> <span class="n">SolrJsonMapper</span><span class="o">.</span><span class="na">Builder</span><span class="o">(</span><span class="s">&quot;gettingstarted&quot;</span><span class="o">,</span> <span class="s">&quot;JSON&quot;</span><span class="o">).</span><span class="na">build</span><span class="o">();</span> 
+    <span class="c1">// JSON Mapper used to generate 'SolrRequest' requests to update the "gettingstarted" Solr collection with the JSON content declared in the tuple field named "JSON"</span>
+    <span class="n">SolrMapper</span> <span class="n">solrMapper</span> <span class="o">=</span> <span class="k">new</span> <span class="n">SolrJsonMapper</span><span class="o">.</span><span class="na">Builder</span><span class="o">(</span><span class="s">"gettingstarted"</span><span class="o">,</span> <span class="s">"JSON"</span><span class="o">).</span><span class="na">build</span><span class="o">();</span> 
 
     <span class="c1">// Acks every other five tuples. Setting to null acks every tuple</span>
-    <span class="n">SolrCommitStrategy</span> <span class="n">solrCommitStgy</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">CountBasedCommit</span><span class="o">(</span><span class="mi">5</span><span class="o">);</span>          
+    <span class="n">SolrCommitStrategy</span> <span class="n">solrCommitStgy</span> <span class="o">=</span> <span class="k">new</span> <span class="n">CountBasedCommit</span><span class="o">(</span><span class="mi">5</span><span class="o">);</span>          
 </code></pre></div>
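The `CountBasedCommit(5)` strategy above commits (and acks) once every five tuples. A hypothetical, dependency-free re-implementation of the idea, to illustrate the counting behavior only and not the actual storm-solr class:

```java
public class CountBasedCommitSketch {
    private final int threshold;
    private int count = 0;

    CountBasedCommitSketch(int threshold) {
        this.threshold = threshold;
    }

    // Called once per tuple; returns true when a commit should happen.
    boolean update() {
        count++;
        if (count >= threshold) {
            count = 0; // reset after committing
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        CountBasedCommitSketch stgy = new CountBasedCommitSketch(5);
        int commits = 0;
        for (int i = 0; i < 10; i++) {
            if (stgy.update()) commits++;
        }
        System.out.println(commits); // ten tuples with a threshold of five: two commits
    }
}
```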
 <h2 id="trident-topology-with-fields-mapper">Trident Topology With Fields Mapper</h2>
-<div class="highlight"><pre><code class="language-java" data-lang="java">    <span class="k">new</span> <span class="nf">SolrStateFactory</span><span class="o">(</span><span class="n">solrConfig</span><span class="o">,</span> <span class="n">solrMapper</span><span class="o">);</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java">    <span class="k">new</span> <span class="nf">SolrStateFactory</span><span class="p">(</span><span class="n">solrConfig</span><span class="o">,</span> <span class="n">solrMapper</span><span class="o">);</span>
 
-    <span class="c1">// zkHostString for Solr &#39;gettingstarted&#39; example</span>
-    <span class="n">SolrConfig</span> <span class="n">solrConfig</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">SolrConfig</span><span class="o">(</span><span class="s">&quot;127.0.0.1:9983&quot;</span><span class="o">);</span>
+    <span class="c1">// zkHostString for Solr 'gettingstarted' example</span>
+    <span class="n">SolrConfig</span> <span class="n">solrConfig</span> <span class="o">=</span> <span class="k">new</span> <span class="n">SolrConfig</span><span class="o">(</span><span class="s">"127.0.0.1:9983"</span><span class="o">);</span>
 
-    <span class="cm">/* Solr Fields Mapper used to generate &#39;SolrRequest&#39; requests to update the &quot;gettingstarted&quot; Solr collection. The Solr index is updated using the field values of the tuple fields that match static or dynamic fields declared in the schema object build using schemaBuilder */</span> 
-    <span class="n">SolrMapper</span> <span class="n">solrMapper</span> <span class="o">=</span> <span class="k">new</span> <span class="n">SolrFieldsMapper</span><span class="o">.</span><span class="na">Builder</span><span class="o">(</span><span class="n">schemaBuilder</span><span class="o">,</span> <span class="s">&quot;gettingstarted&quot;</span><span class="o">).</span><span class="na">build</span><span class="o">();</span>
+    <span class="cm">/* Solr Fields Mapper used to generate 'SolrRequest' requests to update the "gettingstarted" Solr collection. The Solr index is updated using the field values of the tuple fields that match static or dynamic fields declared in the schema object built using schemaBuilder */</span> 
+    <span class="n">SolrMapper</span> <span class="n">solrMapper</span> <span class="o">=</span> <span class="k">new</span> <span class="n">SolrFieldsMapper</span><span class="o">.</span><span class="na">Builder</span><span class="o">(</span><span class="n">schemaBuilder</span><span class="o">,</span> <span class="s">"gettingstarted"</span><span class="o">).</span><span class="na">build</span><span class="o">();</span>
 
     <span class="c1">// builds the Schema object from the JSON representation of the schema as returned by the URL http://localhost:8983/solr/gettingstarted/schema/ </span>
-    <span class="n">SchemaBuilder</span> <span class="n">schemaBuilder</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">RestJsonSchemaBuilder</span><span class="o">(</span><span class="s">&quot;localhost&quot;</span><span class="o">,</span> <span class="s">&quot;8983&quot;</span><span class="o">,</span> <span class="s">&quot;gettingstarted&quot;</span><span class="o">)</span>
+    <span class="n">SchemaBuilder</span> <span class="n">schemaBuilder</span> <span class="o">=</span> <span class="k">new</span> <span class="n">RestJsonSchemaBuilder</span><span class="o">(</span><span class="s">"localhost"</span><span class="o">,</span> <span class="s">"8983"</span><span class="o">,</span> <span class="s">"gettingstarted"</span><span class="o">)</span>
 </code></pre></div>
 <h2 id="solrupdatebolt">SolrUpdateBolt</h2>
 
@@ -179,8 +179,8 @@
 field separates each value with the token % instead of the default | . To use the default token you can ommit the call to the method
 <code>setMultiValueFieldToken</code>.</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java">    <span class="k">new</span> <span class="n">SolrFieldsMapper</span><span class="o">.</span><span class="na">Builder</span><span class="o">(</span>
-            <span class="k">new</span> <span class="nf">RestJsonSchemaBuilder</span><span class="o">(</span><span class="s">&quot;localhost&quot;</span><span class="o">,</span> <span class="s">&quot;8983&quot;</span><span class="o">,</span> <span class="s">&quot;gettingstarted&quot;</span><span class="o">),</span> <span class="s">&quot;gettingstarted&quot;</span><span class="o">)</span>
-                <span class="o">.</span><span class="na">setMultiValueFieldToken</span><span class="o">(</span><span class="s">&quot;%&quot;</span><span class="o">).</span><span class="na">build</span><span class="o">();</span>
+            <span class="k">new</span> <span class="n">RestJsonSchemaBuilder</span><span class="o">(</span><span class="s">"localhost"</span><span class="o">,</span> <span class="s">"8983"</span><span class="o">,</span> <span class="s">"gettingstarted"</span><span class="o">),</span> <span class="s">"gettingstarted"</span><span class="o">)</span>
+                <span class="o">.</span><span class="na">setMultiValueFieldToken</span><span class="o">(</span><span class="s">"%"</span><span class="o">).</span><span class="na">build</span><span class="o">();</span>
 </code></pre></div>
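The effect of `setMultiValueFieldToken("%")` above is simply to change the separator used when a multivalue field's values are flattened into one string. A small illustrative sketch (the field values are invented, and this join stands in for the mapper's internal behavior):

```java
import java.util.Arrays;
import java.util.List;

public class MultiValueToken {
    // Joins multivalue field values with the configured token,
    // which is what the mapper's token setting controls.
    static String flatten(List<String> values, String token) {
        return String.join(token, values);
    }

    public static void main(String[] args) {
        List<String> tags = Arrays.asList("storm", "solr", "bolt");
        System.out.println(flatten(tags, "%")); // storm%solr%bolt
        System.out.println(flatten(tags, "|")); // storm|solr|bolt (the default token)
    }
}
```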
 <h1 id="build-and-run-bundled-examples">Build And Run Bundled Examples</h1>
 
@@ -194,7 +194,7 @@
 <h2 id="use-the-maven-shade-plugin-to-build-the-uber-jar">Use the Maven Shade Plugin to Build the Uber Jar</h2>
 
 <p>Add the following to <code>REPO_HOME/storm/external/storm-solr/pom.xml</code></p>
-<div class="highlight"><pre><code class="language-text" data-lang="text"> &lt;plugin&gt;
+<div class="highlight"><pre><code class="language-" data-lang=""> &lt;plugin&gt;
      &lt;groupId&gt;org.apache.maven.plugins&lt;/groupId&gt;
      &lt;artifactId&gt;maven-shade-plugin&lt;/artifactId&gt;
      &lt;version&gt;2.4.1&lt;/version&gt;
@@ -206,7 +206,7 @@
              &lt;/goals&gt;
              &lt;configuration&gt;
                  &lt;transformers&gt;
-                     &lt;transformer implementation=&quot;org.apache.maven.plugins.shade.resource.ManifestResourceTransformer&quot;&gt;
+                     &lt;transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer"&gt;
                          &lt;mainClass&gt;org.apache.storm.solr.topology.SolrJsonTopology&lt;/mainClass&gt;
                      &lt;/transformer&gt;
                  &lt;/transformers&gt;
diff --git a/_site/downloads.html b/_site/downloads.html
index d0157a9..3fb75c1 100644
--- a/_site/downloads.html
+++ b/_site/downloads.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm0100-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -91,75 +91,81 @@
     	<div class="row">
         	<div class="col-md-12">
 				  <p>
-				  Downloads for Storm are below. Instructions for how to set up a Storm cluster can be found <a href="/documentation/Setting-up-a-Storm-cluster.html">here</a>.
+				  Downloads for Apache Storm are below. Instructions for how to set up a Storm cluster can be found <a href="/documentation/Setting-up-a-Storm-cluster.html">here</a>.
 				  </p>
 
 				  <h3>Source Code</h3>
-				  Current source code is hosted on GitHub, <a href="https://github.com/apache/storm">apache/storm</a>
+				  Current source code is mirrored on GitHub: <a href="https://github.com/apache/storm">apache/storm</a>
 				  
-				  <h3>Current Beta Release</h3>
-				  The current beta release is 0.10.0-beta1. Source and binary distributions can be found below.
+				  <h3>Current 0.10.x Release</h3>
+				  The current 0.10.x release is 0.10.0. Source and binary distributions can be found below.
 				  
-				  The list of changes for this release can be found <a href="https://github.com/apache/storm/blob/v0.10.0-beta1/CHANGELOG.md">here.</a>
+				  The list of changes for this release can be found <a href="https://github.com/apache/storm/blob/v0.10.0/CHANGELOG.md">here.</a>
 
 				  <ul>
-					  <li><a href="http://www.apache.org/dyn/closer.cgi/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1.tar.gz">apache-storm-0.10.0-beta1.tar.gz</a>
-					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1.tar.gz.asc">PGP</a>]
-					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1.tar.gz.sha">SHA512</a>] 
-					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1.tar.gz.md5">MD5</a>]
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.10.0/apache-storm-0.10.0.tar.gz">apache-storm-0.10.0.tar.gz</a>
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0/apache-storm-0.10.0.tar.gz.asc">PGP</a>]
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0/apache-storm-0.10.0.tar.gz.sha">SHA512</a>] 
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0/apache-storm-0.10.0.tar.gz.md5">MD5</a>]
 					  </li>
-					  <li><a href="http://www.apache.org/dyn/closer.cgi/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1.zip">apache-storm-0.10.0-beta1.zip</a>
-					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1.zip.asc">PGP</a>]
-					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1.zip.sha">SHA512</a>] 
-					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1.zip.md5">MD5</a>]
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.10.0/apache-storm-0.10.0.zip">apache-storm-0.10.0.zip</a>
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0/apache-storm-0.10.0.zip.asc">PGP</a>]
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0/apache-storm-0.10.0.zip.sha">SHA512</a>] 
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0/apache-storm-0.10.0.zip.md5">MD5</a>]
 					  </li>
-					  <li><a href="http://www.apache.org/dyn/closer.cgi/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1-src.tar.gz">apache-storm-0.10.0-beta1-src.tar.gz</a>
-					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1-src.tar.gz.asc">PGP</a>]
-					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1-src.tar.gz.sha">SHA512</a>] 
-					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1-src.tar.gz.md5">MD5</a>]
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.10.0/apache-storm-0.10.0-src.tar.gz">apache-storm-0.10.0-src.tar.gz</a>
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0/apache-storm-0.10.0-src.tar.gz.asc">PGP</a>]
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0/apache-storm-0.10.0-src.tar.gz.sha">SHA512</a>] 
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0/apache-storm-0.10.0-src.tar.gz.md5">MD5</a>]
 					  </li>
-					  <li><a href="http://www.apache.org/dyn/closer.cgi/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1-src.zip">apache-storm-0.10.0-beta1-src.zip</a>
-					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1-src.zip.asc">PGP</a>]
-					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1-src.zip.sha">SHA512</a>] 
-					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1-src.zip.md5">MD5</a>]
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.10.0/apache-storm-0.10.0-src.zip">apache-storm-0.10.0-src.zip</a>
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0/apache-storm-0.10.0-src.zip.asc">PGP</a>]
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0/apache-storm-0.10.0-src.zip.sha">SHA512</a>] 
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0/apache-storm-0.10.0-src.zip.md5">MD5</a>]
 					  </li>
 				  </ul>
+				  Storm artifacts are hosted in <a href="http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.storm%22">Maven Central</a>. You can add Storm as a dependency with the following coordinates:
+
+				  <pre>
+groupId: <a href="http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.storm%22">org.apache.storm</a>
+artifactId: storm-core
+version: 0.10.0</pre>				  
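For readers wiring this into a build, the coordinates above translate to a POM fragment along the following lines (a sketch — adjust the version to whichever release you target):

```xml
<!-- Sketch of a Maven dependency entry for the coordinates above.
     The version shown assumes the 0.10.0 release. -->
<dependency>
  <groupId>org.apache.storm</groupId>
  <artifactId>storm-core</artifactId>
  <version>0.10.0</version>
  <!-- "provided" is the usual scope when the topology is submitted to a
       Storm cluster, since the cluster supplies storm-core at runtime. -->
  <scope>provided</scope>
</dependency>
```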
 				  
+				  <h3>Current 0.9.x Release</h3>
+				  The current 0.9.x release is 0.9.6. Source and binary distributions can be found below.
 				  
-				  <h3>Current Release</h3>
-				  The current release is 0.9.5. Source and binary distributions can be found below.
-				  
-				  The list of changes for this release can be found <a href="https://github.com/apache/storm/blob/v0.9.5/CHANGELOG.md">here.</a>
+				  The list of changes for this release can be found <a href="https://github.com/apache/storm/blob/v0.9.6/CHANGELOG.md">here.</a>
 
 				  <ul>
-					  <li><a href="http://www.apache.org/dyn/closer.cgi/storm/apache-storm-0.9.5/apache-storm-0.9.5.tar.gz">apache-storm-0.9.5.tar.gz</a>
-					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.5/apache-storm-0.9.5.tar.gz.asc">PGP</a>]
-					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.5/apache-storm-0.9.5.tar.gz.sha">SHA512</a>] 
-					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.5/apache-storm-0.9.5.tar.gz.md5">MD5</a>]
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.9.6/apache-storm-0.9.6.tar.gz">apache-storm-0.9.6.tar.gz</a>
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.6/apache-storm-0.9.6.tar.gz.asc">PGP</a>]
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.6/apache-storm-0.9.6.tar.gz.sha">SHA512</a>] 
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.6/apache-storm-0.9.6.tar.gz.md5">MD5</a>]
 					  </li>
-					  <li><a href="http://www.apache.org/dyn/closer.cgi/storm/apache-storm-0.9.5/apache-storm-0.9.5.zip">apache-storm-0.9.5.zip</a>
-					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.5/apache-storm-0.9.5.zip.asc">PGP</a>]
-					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.5/apache-storm-0.9.5.zip.sha">SHA512</a>] 
-					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.5/apache-storm-0.9.5.zip.md5">MD5</a>]
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.9.6/apache-storm-0.9.6.zip">apache-storm-0.9.6.zip</a>
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.6/apache-storm-0.9.6.zip.asc">PGP</a>]
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.6/apache-storm-0.9.6.zip.sha">SHA512</a>] 
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.6/apache-storm-0.9.6.zip.md5">MD5</a>]
 					  </li>
-					  <li><a href="http://www.apache.org/dyn/closer.cgi/storm/apache-storm-0.9.5/apache-storm-0.9.5-src.tar.gz">apache-storm-0.9.5-src.tar.gz</a>
-					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.5/apache-storm-0.9.5-src.tar.gz.asc">PGP</a>]
-					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.5/apache-storm-0.9.5-src.tar.gz.sha">SHA512</a>] 
-					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.5/apache-storm-0.9.5-src.tar.gz.md5">MD5</a>]
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.9.6/apache-storm-0.9.6-src.tar.gz">apache-storm-0.9.6-src.tar.gz</a>
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.6/apache-storm-0.9.6-src.tar.gz.asc">PGP</a>]
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.6/apache-storm-0.9.6-src.tar.gz.sha">SHA512</a>] 
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.6/apache-storm-0.9.6-src.tar.gz.md5">MD5</a>]
 					  </li>
-					  <li><a href="http://www.apache.org/dyn/closer.cgi/storm/apache-storm-0.9.5/apache-storm-0.9.5-src.zip">apache-storm-0.9.5-src.zip</a>
-					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.5/apache-storm-0.9.5-src.zip.asc">PGP</a>]
-					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.5/apache-storm-0.9.5-src.zip.sha">SHA512</a>] 
-					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.5/apache-storm-0.9.5-src.zip.md5">MD5</a>]
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.9.6/apache-storm-0.9.6-src.zip">apache-storm-0.9.6-src.zip</a>
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.6/apache-storm-0.9.6-src.zip.asc">PGP</a>]
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.6/apache-storm-0.9.6-src.zip.sha">SHA512</a>] 
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.6/apache-storm-0.9.6-src.zip.md5">MD5</a>]
 					  </li>
 				  </ul>
 
 				  Storm artifacts are hosted in <a href="http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.storm%22">Maven Central</a>. You can add Storm as a dependency with the following coordinates:
-				  
+
+
 				  <pre>
 groupId: <a href="http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.storm%22">org.apache.storm</a>
 artifactId: storm-core
-version: 0.9.5</pre>
+version: 0.9.6</pre>
 				  
 				  
 				  The signing keys for releases can be found <a href="http://www.apache.org/dist/storm/KEYS">here.</a>
@@ -168,26 +174,78 @@
 					  
 				  </p>
 				  <h3>Previous Releases</h3>
+                  
+                  <b>0.10.0-beta1</b>
+
+				  <ul>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1.tar.gz">apache-storm-0.10.0-beta1.tar.gz</a>
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1.tar.gz.asc">PGP</a>]
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1.tar.gz.sha">SHA512</a>] 
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1.tar.gz.md5">MD5</a>]
+					  </li>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1.zip">apache-storm-0.10.0-beta1.zip</a>
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1.zip.asc">PGP</a>]
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1.zip.sha">SHA512</a>] 
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1.zip.md5">MD5</a>]
+					  </li>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1-src.tar.gz">apache-storm-0.10.0-beta1-src.tar.gz</a>
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1-src.tar.gz.asc">PGP</a>]
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1-src.tar.gz.sha">SHA512</a>] 
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1-src.tar.gz.md5">MD5</a>]
+					  </li>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1-src.zip">apache-storm-0.10.0-beta1-src.zip</a>
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1-src.zip.asc">PGP</a>]
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1-src.zip.sha">SHA512</a>] 
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1-src.zip.md5">MD5</a>]
+					  </li>
+				  </ul>
+
+                  
+                  <b>0.9.5</b>
+
+				  <ul>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.9.5/apache-storm-0.9.5.tar.gz">apache-storm-0.9.5.tar.gz</a>
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.5/apache-storm-0.9.5.tar.gz.asc">PGP</a>]
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.5/apache-storm-0.9.5.tar.gz.sha">SHA512</a>] 
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.5/apache-storm-0.9.5.tar.gz.md5">MD5</a>]
+					  </li>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.9.5/apache-storm-0.9.5.zip">apache-storm-0.9.5.zip</a>
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.5/apache-storm-0.9.5.zip.asc">PGP</a>]
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.5/apache-storm-0.9.5.zip.sha">SHA512</a>] 
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.5/apache-storm-0.9.5.zip.md5">MD5</a>]
+					  </li>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.9.5/apache-storm-0.9.5-src.tar.gz">apache-storm-0.9.5-src.tar.gz</a>
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.5/apache-storm-0.9.5-src.tar.gz.asc">PGP</a>]
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.5/apache-storm-0.9.5-src.tar.gz.sha">SHA512</a>] 
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.5/apache-storm-0.9.5-src.tar.gz.md5">MD5</a>]
+					  </li>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.9.5/apache-storm-0.9.5-src.zip">apache-storm-0.9.5-src.zip</a>
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.5/apache-storm-0.9.5-src.zip.asc">PGP</a>]
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.5/apache-storm-0.9.5-src.zip.sha">SHA512</a>] 
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.5/apache-storm-0.9.5-src.zip.md5">MD5</a>]
+					  </li>
+				  </ul>
+
 				  
 				  <b>0.9.4</b>
 				  
 				  <ul>
-					  <li><a href="http://www.apache.org/dyn/closer.cgi/storm/apache-storm-0.9.4/apache-storm-0.9.4.tar.gz">apache-storm-0.9.4.tar.gz</a>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.9.4/apache-storm-0.9.4.tar.gz">apache-storm-0.9.4.tar.gz</a>
 					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.4/apache-storm-0.9.4.tar.gz.asc">PGP</a>]
 					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.4/apache-storm-0.9.4.tar.gz.sha">SHA512</a>] 
 					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.4/apache-storm-0.9.4.tar.gz.md5">MD5</a>]
 					  </li>
-					  <li><a href="http://www.apache.org/dyn/closer.cgi/storm/apache-storm-0.9.4/apache-storm-0.9.4.zip">apache-storm-0.9.4.zip</a>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.9.4/apache-storm-0.9.4.zip">apache-storm-0.9.4.zip</a>
 					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.4/apache-storm-0.9.4.zip.asc">PGP</a>]
 					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.4/apache-storm-0.9.4.zip.sha">SHA512</a>] 
 					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.4/apache-storm-0.9.4.zip.md5">MD5</a>]
 					  </li>
-					  <li><a href="http://www.apache.org/dyn/closer.cgi/storm/apache-storm-0.9.4/apache-storm-0.9.4-src.tar.gz">apache-storm-0.9.4-src.tar.gz</a>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.9.4/apache-storm-0.9.4-src.tar.gz">apache-storm-0.9.4-src.tar.gz</a>
 					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.4/apache-storm-0.9.4-src.tar.gz.asc">PGP</a>]
 					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.4/apache-storm-0.9.4-src.tar.gz.sha">SHA512</a>] 
 					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.4/apache-storm-0.9.4-src.tar.gz.md5">MD5</a>]
 					  </li>
-					  <li><a href="http://www.apache.org/dyn/closer.cgi/storm/apache-storm-0.9.4/apache-storm-0.9.4-src.zip">apache-storm-0.9.4-src.zip</a>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.9.4/apache-storm-0.9.4-src.zip">apache-storm-0.9.4-src.zip</a>
 					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.4/apache-storm-0.9.4-src.zip.asc">PGP</a>]
 					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.4/apache-storm-0.9.4-src.zip.sha">SHA512</a>] 
 					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.4/apache-storm-0.9.4-src.zip.md5">MD5</a>]
@@ -197,22 +255,22 @@
 				  <b>0.9.3</b>
 				  
 				  <ul>
-					  <li><a href="http://www.apache.org/dyn/closer.cgi/storm/apache-storm-0.9.3/apache-storm-0.9.3.tar.gz">apache-storm-0.9.3.tar.gz</a>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.9.3/apache-storm-0.9.3.tar.gz">apache-storm-0.9.3.tar.gz</a>
 					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.3/apache-storm-0.9.3.tar.gz.asc">PGP</a>]
 					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.3/apache-storm-0.9.3.tar.gz.sha">SHA512</a>] 
 					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.3/apache-storm-0.9.3.tar.gz.md5">MD5</a>]
 					  </li>
-					  <li><a href="http://www.apache.org/dyn/closer.cgi/storm/apache-storm-0.9.3/apache-storm-0.9.3.zip">apache-storm-0.9.3.zip</a>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.9.3/apache-storm-0.9.3.zip">apache-storm-0.9.3.zip</a>
 					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.3/apache-storm-0.9.3.zip.asc">PGP</a>]
 					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.3/apache-storm-0.9.3.zip.sha">SHA512</a>] 
 					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.3/apache-storm-0.9.3.zip.md5">MD5</a>]
 					  </li>
-					  <li><a href="http://www.apache.org/dyn/closer.cgi/storm/apache-storm-0.9.3/apache-storm-0.9.3-src.tar.gz">apache-storm-0.9.3-src.tar.gz</a>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.9.3/apache-storm-0.9.3-src.tar.gz">apache-storm-0.9.3-src.tar.gz</a>
 					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.3/apache-storm-0.9.3-src.tar.gz.asc">PGP</a>]
 					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.3/apache-storm-0.9.3-src.tar.gz.sha">SHA512</a>] 
 					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.3/apache-storm-0.9.3-src.tar.gz.md5">MD5</a>]
 					  </li>
-					  <li><a href="http://www.apache.org/dyn/closer.cgi/storm/apache-storm-0.9.3/apache-storm-0.9.3-src.zip">apache-storm-0.9.3-src.zip</a>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.9.3/apache-storm-0.9.3-src.zip">apache-storm-0.9.3-src.zip</a>
 					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.3/apache-storm-0.9.3-src.zip.asc">PGP</a>]
 					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.3/apache-storm-0.9.3-src.zip.sha">SHA512</a>] 
 					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.3/apache-storm-0.9.3-src.zip.md5">MD5</a>]
@@ -223,22 +281,22 @@
 				  <b>0.9.2-incubating</b>
 				  
 				  <ul>
-					  <li><a href="http://www.apache.org/dyn/closer.cgi/storm/apache-storm-0.9.2-incubating/apache-storm-0.9.2-incubating.tar.gz">apache-storm-0.9.2-incubating.tar.gz</a>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.9.2-incubating/apache-storm-0.9.2-incubating.tar.gz">apache-storm-0.9.2-incubating.tar.gz</a>
 					     [<a href="http://www.apache.org/dist/storm/apache-storm-0.9.2-incubating/apache-storm-0.9.2-incubating.tar.gz.asc">PGP</a>]
 					     [<a href="http://www.apache.org/dist/storm/apache-storm-0.9.2-incubating/apache-storm-0.9.2-incubating.tar.gz.sha">SHA512</a>] 
 					     [<a href="http://www.apache.org/dist/storm/apache-storm-0.9.2-incubating/apache-storm-0.9.2-incubating.tar.gz.md5">MD5</a>]
 					  </li>
-					  <li><a href="http://www.apache.org/dyn/closer.cgi/storm/apache-storm-0.9.2-incubating/apache-storm-0.9.2-incubating.zip">apache-storm-0.9.2-incubating.zip</a>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.9.2-incubating/apache-storm-0.9.2-incubating.zip">apache-storm-0.9.2-incubating.zip</a>
 					     [<a href="http://www.apache.org/dist/storm/apache-storm-0.9.2-incubating/apache-storm-0.9.2-incubating.zip.asc">PGP</a>]
 					     [<a href="http://www.apache.org/dist/storm/apache-storm-0.9.2-incubating/apache-storm-0.9.2-incubating.zip.sha">SHA512</a>] 
 					     [<a href="http://www.apache.org/dist/storm/apache-storm-0.9.2-incubating/apache-storm-0.9.2-incubating.zip.md5">MD5</a>]
 					  </li>
-					  <li><a href="http://www.apache.org/dyn/closer.cgi/storm/apache-storm-0.9.2-incubating/apache-storm-0.9.2-incubating-src.tar.gz">apache-storm-0.9.2-incubating-src.tar.gz</a>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.9.2-incubating/apache-storm-0.9.2-incubating-src.tar.gz">apache-storm-0.9.2-incubating-src.tar.gz</a>
 					     [<a href="http://www.apache.org/dist/storm/apache-storm-0.9.2-incubating/apache-storm-0.9.2-incubating-src.tar.gz.asc">PGP</a>]
 					     [<a href="http://www.apache.org/dist/storm/apache-storm-0.9.2-incubating/apache-storm-0.9.2-incubating-src.tar.gz.sha">SHA512</a>] 
 					     [<a href="http://www.apache.org/dist/storm/apache-storm-0.9.2-incubating/apache-storm-0.9.2-incubating-src.tar.gz.md5">MD5</a>]
 					  </li>
-					  <li><a href="http://www.apache.org/dyn/closer.cgi/storm/apache-storm-0.9.2-incubating/apache-storm-0.9.2-incubating-src.zip">apache-storm-0.9.2-incubating-src.zip</a>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.9.2-incubating/apache-storm-0.9.2-incubating-src.zip">apache-storm-0.9.2-incubating-src.zip</a>
 					     [<a href="http://www.apache.org/dist/storm/apache-storm-0.9.2-incubating/apache-storm-0.9.2-incubating-src.zip.asc">PGP</a>]
 					     [<a href="http://www.apache.org/dist/storm/apache-storm-0.9.2-incubating/apache-storm-0.9.2-incubating-src.zip.sha">SHA512</a>] 
 					     [<a href="http://www.apache.org/dist/storm/apache-storm-0.9.2-incubating/apache-storm-0.9.2-incubating-src.zip.md5">MD5</a>]
@@ -249,22 +307,22 @@
 				  <b>0.9.1-incubating</b>
 				  
 				  <ul>
-					  <li><a href="http://www.apache.org/dyn/closer.cgi/storm/apache-storm-0.9.1-incubating/apache-storm-0.9.1-incubating.tar.gz">apache-storm-0.9.1-incubating.tar.gz</a>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.9.1-incubating/apache-storm-0.9.1-incubating.tar.gz">apache-storm-0.9.1-incubating.tar.gz</a>
 					     [<a href="http://www.apache.org/dist/storm/apache-storm-0.9.1-incubating/apache-storm-0.9.1-incubating.tar.gz.asc">PGP</a>]
 					     [<a href="http://www.apache.org/dist/storm/apache-storm-0.9.1-incubating/apache-storm-0.9.1-incubating.tar.gz.sha">SHA512</a>] 
 					     [<a href="http://www.apache.org/dist/storm/apache-storm-0.9.1-incubating/apache-storm-0.9.1-incubating.tar.gz.md5">MD5</a>]
 					  </li>
-					  <li><a href="http://www.apache.org/dyn/closer.cgi/storm/apache-storm-0.9.1-incubating/apache-storm-0.9.1-incubating.zip">apache-storm-0.9.1-incubating.zip</a>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.9.1-incubating/apache-storm-0.9.1-incubating.zip">apache-storm-0.9.1-incubating.zip</a>
 					     [<a href="http://www.apache.org/dist/storm/apache-storm-0.9.1-incubating/apache-storm-0.9.1-incubating.zip.asc">PGP</a>]
 					     [<a href="http://www.apache.org/dist/storm/apache-storm-0.9.1-incubating/apache-storm-0.9.1-incubating.zip.sha">SHA512</a>] 
 					     [<a href="http://www.apache.org/dist/storm/apache-storm-0.9.1-incubating/apache-storm-0.9.1-incubating.zip.md5">MD5</a>]
 					  </li>
-					  <li><a href="http://www.apache.org/dyn/closer.cgi/storm/apache-storm-0.9.1-incubating/apache-storm-0.9.1-incubating-src.tar.gz">apache-storm-0.9.1-incubating-src.tar.gz</a>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.9.1-incubating/apache-storm-0.9.1-incubating-src.tar.gz">apache-storm-0.9.1-incubating-src.tar.gz</a>
 					     [<a href="http://www.apache.org/dist/storm/apache-storm-0.9.1-incubating/apache-storm-0.9.1-incubating-src.tar.gz.asc">PGP</a>]
 					     [<a href="http://www.apache.org/dist/storm/apache-storm-0.9.1-incubating/apache-storm-0.9.1-incubating-src.tar.gz.sha">SHA512</a>] 
 					     [<a href="http://www.apache.org/dist/storm/apache-storm-0.9.1-incubating/apache-storm-0.9.1-incubating-src.tar.gz.md5">MD5</a>]
 					  </li>
-					  <li><a href="http://www.apache.org/dyn/closer.cgi/storm/apache-storm-0.9.1-incubating/apache-storm-0.9.1-incubating-src.zip">apache-storm-0.9.1-incubating-src.zip</a>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.9.1-incubating/apache-storm-0.9.1-incubating-src.zip">apache-storm-0.9.1-incubating-src.zip</a>
 					     [<a href="http://www.apache.org/dist/storm/apache-storm-0.9.1-incubating/apache-storm-0.9.1-incubating-src.zip.asc">PGP</a>]
 					     [<a href="http://www.apache.org/dist/storm/apache-storm-0.9.1-incubating/apache-storm-0.9.1-incubating-src.zip.sha">SHA512</a>] 
 					     [<a href="http://www.apache.org/dist/storm/apache-storm-0.9.1-incubating/apache-storm-0.9.1-incubating-src.zip.md5">MD5</a>]
diff --git a/_site/feed.xml b/_site/feed.xml
index a6f3d73..75e99d4 100644
--- a/_site/feed.xml
+++ b/_site/feed.xml
@@ -5,9 +5,98 @@
     <description></description>
     <link>http://storm.apache.org/</link>
     <atom:link href="http://storm.apache.org/feed.xml" rel="self" type="application/rss+xml"/>
-    <pubDate>Tue, 29 Sep 2015 15:32:49 -0400</pubDate>
-    <lastBuildDate>Tue, 29 Sep 2015 15:32:49 -0400</lastBuildDate>
-    <generator>Jekyll v2.5.3</generator>
+    <pubDate>Wed, 06 Jan 2016 14:21:20 -0800</pubDate>
+    <lastBuildDate>Wed, 06 Jan 2016 14:21:20 -0800</lastBuildDate>
+    <generator>Jekyll v3.0.1</generator>
+    
+      <item>
+        <title>Storm 0.9.6 released</title>
+        <description>&lt;p&gt;The Apache Storm community is pleased to announce that version 0.9.6 has been released and is available from &lt;a href=&quot;/downloads.html&quot;&gt;the downloads page&lt;/a&gt;.&lt;/p&gt;
+
+&lt;p&gt;This is a maintenance release that includes a number of important bug fixes that improve Storm&amp;#39;s stability and fault tolerance. We encourage users of previous versions to upgrade to this latest release.&lt;/p&gt;
+
+&lt;h2 id=&quot;thanks&quot;&gt;Thanks&lt;/h2&gt;
+
+&lt;p&gt;Special thanks are due to all those who have contributed to Apache Storm -- whether through direct code contributions, documentation, bug reports, or helping other users on the mailing lists. Your efforts are much appreciated.&lt;/p&gt;
+
+&lt;h2 id=&quot;full-changelog&quot;&gt;Full Changelog&lt;/h2&gt;
+
+&lt;ul&gt;
+&lt;li&gt;STORM-1027: Use overflow buffer for emitting metrics&lt;/li&gt;
+&lt;li&gt;STORM-996: netty-unit-tests/test-batch demonstrates out-of-order delivery&lt;/li&gt;
+&lt;li&gt;STORM-1056: allow supervisor log filename to be configurable via ENV variable&lt;/li&gt;
+&lt;li&gt;STORM-1051: Netty Client.java&amp;#39;s flushMessages produces a NullPointerException&lt;/li&gt;
+&lt;li&gt;STORM-763: nimbus reassigned worker A to another machine, but other worker&amp;#39;s netty client can&amp;#39;t connect to the new worker A&lt;/li&gt;
+&lt;li&gt;STORM-935: Update Disruptor queue version to 2.10.4&lt;/li&gt;
+&lt;li&gt;STORM-503: Short disruptor queue wait time leads to high CPU usage when idle&lt;/li&gt;
+&lt;li&gt;STORM-728: Put emitted and transferred stats under correct columns&lt;/li&gt;
+&lt;li&gt;STORM-643: KafkaUtils repeatedly fetches messages whose offset is out of range&lt;/li&gt;
+&lt;li&gt;STORM-933: NullPointerException during KafkaSpout deactivation&lt;/li&gt;
+&lt;/ul&gt;
+</description>
+        <pubDate>Thu, 05 Nov 2015 00:00:00 -0800</pubDate>
+        <link>http://storm.apache.org/2015/11/05/storm096-released.html</link>
+        <guid isPermaLink="true">http://storm.apache.org/2015/11/05/storm096-released.html</guid>
+        
+        
+      </item>
+    
+      <item>
+        <title>Storm 0.10.0 released</title>
+        <description>&lt;p&gt;The Apache Storm community is pleased to announce that version 0.10.0 Stable has been released and is available from &lt;a href=&quot;/downloads.html&quot;&gt;the downloads page&lt;/a&gt;.&lt;/p&gt;
+
+&lt;p&gt;This release includes a number of improvements and bug fixes identified in the previous beta release. For a description of the new features included in the 0.10.0 release, please &lt;a href=&quot;/2015/06/15/storm0100-beta-released.html&quot;&gt;see the previous announcement of 0.10.0-beta1&lt;/a&gt;.&lt;/p&gt;
+
+&lt;h2 id=&quot;thanks&quot;&gt;Thanks&lt;/h2&gt;
+
+&lt;p&gt;Special thanks are due to all those who have contributed to Apache Storm -- whether through direct code contributions, documentation, bug reports, or helping other users on the mailing lists. Your efforts are much appreciated.&lt;/p&gt;
+
+&lt;h2 id=&quot;full-changelog&quot;&gt;Full Changelog&lt;/h2&gt;
+
+&lt;ul&gt;
+&lt;li&gt;STORM-1108: Fix NPE in simulated time&lt;/li&gt;
+&lt;li&gt;STORM-1106: Netty should not limit attempts to reconnect&lt;/li&gt;
+&lt;li&gt;STORM-1099: Fix worker childopts as arraylist of strings&lt;/li&gt;
+&lt;li&gt;STORM-1096: Fix some issues with impersonation on the UI&lt;/li&gt;
+&lt;li&gt;STORM-912: Support SSL on Logviewer&lt;/li&gt;
+&lt;li&gt;STORM-1094: advance kafka offset when deserializer yields no object&lt;/li&gt;
+&lt;li&gt;STORM-1066: Specify current directory when supervisor launches a worker&lt;/li&gt;
+&lt;li&gt;STORM-1012: Shaded everything that was not already shaded&lt;/li&gt;
+&lt;li&gt;STORM-967: Shaded everything that was not already shaded&lt;/li&gt;
+&lt;li&gt;STORM-922: Shaded everything that was not already shaded&lt;/li&gt;
+&lt;li&gt;STORM-1042: Shaded everything that was not already shaded&lt;/li&gt;
+&lt;li&gt;STORM-1026: Adding external classpath elements does not work&lt;/li&gt;
+&lt;li&gt;STORM-1055: storm-jdbc README needs fixes and context&lt;/li&gt;
+&lt;li&gt;STORM-1044: Setting dop to zero does not raise an error&lt;/li&gt;
+&lt;li&gt;STORM-1050: Topologies with same name run on one cluster&lt;/li&gt;
+&lt;li&gt;STORM-1005: Supervisor do not get running workers after restart.&lt;/li&gt;
+&lt;li&gt;STORM-803: Cleanup travis-ci build and logs&lt;/li&gt;
+&lt;li&gt;STORM-1027: Use overflow buffer for emitting metrics&lt;/li&gt;
+&lt;li&gt;STORM-1024: log4j changes leaving ${sys:storm.log.dir} under STORM_HOME dir&lt;/li&gt;
+&lt;li&gt;STORM-944: storm-hive pom.xml has a dependency conflict with calcite&lt;/li&gt;
+&lt;li&gt;STORM-994: Connection leak between nimbus and supervisors&lt;/li&gt;
+&lt;li&gt;STORM-1001: Undefined STORM_EXT_CLASSPATH adds &amp;#39;::&amp;#39; to classpath of workers&lt;/li&gt;
+&lt;li&gt;STORM-977: Incorrect signal (-9) when as-user is true&lt;/li&gt;
+&lt;li&gt;STORM-843: [storm-redis] Add Javadoc to storm-redis&lt;/li&gt;
+&lt;li&gt;STORM-866: Use storm.log.dir instead of storm.home in log4j2 config&lt;/li&gt;
+&lt;li&gt;STORM-810: PartitionManager in storm-kafka should commit latest offset before close&lt;/li&gt;
+&lt;li&gt;STORM-928: Add sources-&amp;gt;streams-&amp;gt;fields map to Multi-Lang Handshake&lt;/li&gt;
+&lt;li&gt;STORM-945: &amp;lt;DefaultRolloverStrategy&amp;gt; element is not a policy, and should not be put in the &amp;lt;Policies&amp;gt; element.&lt;/li&gt;
+&lt;li&gt;STORM-857: create logs metadata dir when running securely&lt;/li&gt;
+&lt;li&gt;STORM-793: Made change to logviewer.clj in order to remove the invalid http 500 response&lt;/li&gt;
+&lt;li&gt;STORM-139: hashCode does not work for byte[]&lt;/li&gt;
+&lt;li&gt;STORM-860: UI: while topology is transitioned to killed, &amp;quot;Activate&amp;quot; button is enabled but not functioning&lt;/li&gt;
+&lt;li&gt;STORM-966: ConfigValidation.DoubleValidator doesn&amp;#39;t really validate whether the type of the object is a double&lt;/li&gt;
+&lt;li&gt;STORM-742: Let ShellBolt treat all messages to update heartbeat&lt;/li&gt;
+&lt;li&gt;STORM-992: A bug in the timer.clj might cause unexpected delay to schedule new event&lt;/li&gt;
+&lt;/ul&gt;
+</description>
+        <pubDate>Thu, 05 Nov 2015 00:00:00 -0800</pubDate>
+        <link>http://storm.apache.org/2015/11/05/storm0100-released.html</link>
+        <guid isPermaLink="true">http://storm.apache.org/2015/11/05/storm0100-released.html</guid>
+        
+        
+      </item>
     
       <item>
         <title>Storm 0.10.0 Beta Released</title>
@@ -15,7 +104,7 @@
 
 &lt;p&gt;Aside from many stability and performance improvements, this release includes a number of important new features, some of which are highlighted below.&lt;/p&gt;
 
-&lt;h2 id=&quot;secure,-multi-tenant-deployment&quot;&gt;Secure, Multi-Tenant Deployment&lt;/h2&gt;
+&lt;h2 id=&quot;secure-multi-tenant-deployment&quot;&gt;Secure, Multi-Tenant Deployment&lt;/h2&gt;
 
 &lt;p&gt;Much like the early days of Hadoop, Apache Storm originally evolved in an environment where security was not a high-priority concern. Rather, it was assumed that Storm would be deployed to environments suitably cordoned off from security threats. While a large number of users were comfortable setting up their own security measures for Storm (usually at the Firewall/OS level), this proved a hindrance to broader adoption among larger enterprises where security policies prohibited deployment without specific safeguards.&lt;/p&gt;
 
@@ -103,7 +192,7 @@
 
 &lt;p&gt;Further information can be found in the &lt;a href=&quot;https://github.com/apache/storm/blob/v0.10.0-beta/external/storm-redis/README.md&quot;&gt;storm-redis documentation&lt;/a&gt;.&lt;/p&gt;
 
-&lt;h2 id=&quot;jdbc/rdbms-integration&quot;&gt;JDBC/RDBMS Integration&lt;/h2&gt;
+&lt;h2 id=&quot;jdbc-rdbms-integration&quot;&gt;JDBC/RDBMS Integration&lt;/h2&gt;
 
 &lt;p&gt;Many stream processing data flows require accessing data from or writing data to a relational data store. Storm 0.10.0 introduces highly flexible and customizable support for integrating with virtually any JDBC-compliant database.&lt;/p&gt;
 
@@ -296,7 +385,7 @@
 &lt;li&gt;STORM-130: Supervisor getting killed due to java.io.FileNotFoundException: File &amp;#39;../stormconf.ser&amp;#39; does not exist.&lt;/li&gt;
 &lt;/ul&gt;
 </description>
-        <pubDate>Mon, 15 Jun 2015 00:00:00 -0400</pubDate>
+        <pubDate>Mon, 15 Jun 2015 00:00:00 -0700</pubDate>
         <link>http://storm.apache.org/2015/06/15/storm0100-beta-released.html</link>
         <guid isPermaLink="true">http://storm.apache.org/2015/06/15/storm0100-beta-released.html</guid>
         
@@ -322,7 +411,7 @@
 &lt;li&gt;STORM-130: Supervisor getting killed due to java.io.FileNotFoundException: File &amp;#39;../stormconf.ser&amp;#39; does not exist.&lt;/li&gt;
 &lt;/ul&gt;
 </description>
-        <pubDate>Thu, 04 Jun 2015 00:00:00 -0400</pubDate>
+        <pubDate>Thu, 04 Jun 2015 00:00:00 -0700</pubDate>
         <link>http://storm.apache.org/2015/06/04/storm095-released.html</link>
         <guid isPermaLink="true">http://storm.apache.org/2015/06/04/storm095-released.html</guid>
         
@@ -349,7 +438,7 @@
 &lt;li&gt;STORM-130: Supervisor getting killed due to java.io.FileNotFoundException: File &amp;#39;../stormconf.ser&amp;#39; does not exist.&lt;/li&gt;
 &lt;/ul&gt;
 </description>
-        <pubDate>Wed, 25 Mar 2015 00:00:00 -0400</pubDate>
+        <pubDate>Wed, 25 Mar 2015 00:00:00 -0700</pubDate>
         <link>http://storm.apache.org/2015/03/25/storm094-released.html</link>
         <guid isPermaLink="true">http://storm.apache.org/2015/03/25/storm094-released.html</guid>
         
@@ -545,7 +634,7 @@
 &lt;li&gt;STORM-514: Update storm-starter README now that Storm has graduated from Incubator&lt;/li&gt;
 &lt;/ul&gt;
 </description>
-        <pubDate>Tue, 25 Nov 2014 00:00:00 -0500</pubDate>
+        <pubDate>Tue, 25 Nov 2014 00:00:00 -0800</pubDate>
         <link>http://storm.apache.org/2014/11/25/storm093-released.html</link>
         <guid isPermaLink="true">http://storm.apache.org/2014/11/25/storm093-released.html</guid>
         
@@ -560,7 +649,7 @@
 
 &lt;p&gt;We heartily encourage you to &lt;a href=&quot;http://storm.apache.org/downloads.html&quot;&gt;test the 0.9.3 release candidate&lt;/a&gt; and provide your feedback regarding any issues via &lt;a href=&quot;http://storm.apache.org/community.html&quot;&gt;our mailing lists&lt;/a&gt;, which is an easy and valuable way to contribute back to the Storm community and to help us move to an official 0.9.3 release.  You can find the &lt;a href=&quot;http://storm.apache.org/downloads.html&quot;&gt;0.9.3 release candidate in our Downloads section&lt;/a&gt;.&lt;/p&gt;
 </description>
-        <pubDate>Mon, 20 Oct 2014 00:00:00 -0400</pubDate>
+        <pubDate>Mon, 20 Oct 2014 00:00:00 -0700</pubDate>
         <link>http://storm.apache.org/2014/10/20/storm093-release-candidate.html</link>
         <guid isPermaLink="true">http://storm.apache.org/2014/10/20/storm093-release-candidate.html</guid>
         
@@ -604,7 +693,7 @@
 &lt;p&gt;The &lt;code&gt;storm-kafka&lt;/code&gt; module can be found in the &lt;code&gt;/external/&lt;/code&gt; directory of the source tree and binary distributions. The &lt;code&gt;external&lt;/code&gt; area has been set up to contain projects that, while not required by Storm, are often used in conjunction with Storm to integrate with some other technology. Such projects also come with a maintenance commitment from at least one Storm committer to ensure compatibility with Storm&amp;#39;s main codebase as it evolves.&lt;/p&gt;
 
 &lt;p&gt;The &lt;code&gt;storm-kafka&lt;/code&gt; dependency is available now from Maven Central at the following coordinates:&lt;/p&gt;
-&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-text&quot; data-lang=&quot;text&quot;&gt;groupId: org.apache.storm
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-&quot; data-lang=&quot;&quot;&gt;groupId: org.apache.storm
 artifactId: storm-kafka
 version: 0.9.2-incubating
 &lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
@@ -695,7 +784,7 @@
 &lt;li&gt;STORM-146: Unit test regression when storm is compiled with 3.4.5 zookeeper&lt;/li&gt;
 &lt;/ul&gt;
 </description>
-        <pubDate>Wed, 25 Jun 2014 00:00:00 -0400</pubDate>
+        <pubDate>Wed, 25 Jun 2014 00:00:00 -0700</pubDate>
         <link>http://storm.apache.org/2014/06/25/storm092-released.html</link>
         <guid isPermaLink="true">http://storm.apache.org/2014/06/25/storm092-released.html</guid>
         
@@ -742,7 +831,7 @@
 &lt;/tr&gt;
 &lt;/tbody&gt;&lt;/table&gt;
 </description>
-        <pubDate>Tue, 17 Jun 2014 00:00:00 -0400</pubDate>
+        <pubDate>Tue, 17 Jun 2014 00:00:00 -0700</pubDate>
         <link>http://storm.apache.org/2014/06/17/contest-results.html</link>
         <guid isPermaLink="true">http://storm.apache.org/2014/06/17/contest-results.html</guid>
         
@@ -820,207 +909,12 @@
 
 &lt;p&gt;The authors of the selected Apache Storm logo(s) will be required to donate them to the Apache Storm project and complete an &lt;a href=&quot;http://www.apache.org/licenses/icla.txt&quot;&gt;Apache Individual Contributor License Agreement (ICLA)&lt;/a&gt;.&lt;/p&gt;
 </description>
-        <pubDate>Thu, 10 Apr 2014 00:00:00 -0400</pubDate>
+        <pubDate>Thu, 10 Apr 2014 00:00:00 -0700</pubDate>
         <link>http://storm.apache.org/2014/04/10/storm-logo-contest.html</link>
         <guid isPermaLink="true">http://storm.apache.org/2014/04/10/storm-logo-contest.html</guid>
         
         
       </item>
     
-      <item>
-        <title>Storm 0.9.0 Released</title>
-        <description>&lt;p&gt;We are pleased to announce that Storm 0.9.0 has been released and is available from &lt;a href=&quot;/downloads.html&quot;&gt;the downloads page&lt;/a&gt;. This release represents an important milestone in the evolution of Storm.&lt;/p&gt;
-
-&lt;p&gt;While a number of new features have been added, a key focus area for this release has been stability-related fixes. Though many users are successfully running work-in-progress builds for Storm 0.9.x in production, this release represents the most stable version to-date, and is highly recommended for everyone, especially users of 0.8.x versions.&lt;/p&gt;
-
-&lt;h2 id=&quot;netty-transport&quot;&gt;Netty Transport&lt;/h2&gt;
-
-&lt;p&gt;The first highlight of this release is the new &lt;a href=&quot;http://netty.io/index.html&quot;&gt;Netty&lt;/a&gt; Transport contributed by &lt;a href=&quot;http://yahooeng.tumblr.com/&quot;&gt;Yahoo! Engineering&lt;/a&gt;. Storm&amp;#39;s core network transport mechanism is now pluggable, and Storm now comes with two implementations: the original 0MQ transport, and a new Netty-based implementation.&lt;/p&gt;
-
-&lt;p&gt;In earlier versions, Storm relied solely on 0MQ for transport. Since 0MQ is a native library, it was highly platform-dependent and, at times, challenging to install properly. In addition, stability varied widely between versions, and only a relatively old 0MQ version (2.1.7) was certified to work with Storm.&lt;/p&gt;
-
-&lt;p&gt;The Netty transport offers a pure Java alternative that eliminates Storm&amp;#39;s dependency on native libraries. The Netty transport&amp;#39;s performance is up to twice as fast as 0MQ, and it will open the door for authorization and authentication between worker processes. For an in-depth performance comparison of the 0MQ and Netty transports, see &lt;a href=&quot;http://yahooeng.tumblr.com/post/64758709722/making-storm-fly-with-netty&quot;&gt;this blog post&lt;/a&gt; by Storm contributor &lt;a href=&quot;https://github.com/revans2&quot;&gt;Bobby Evans&lt;/a&gt;.&lt;/p&gt;
-
-&lt;p&gt;To configure Storm to use the Netty transport simply add the following to your &lt;code&gt;storm.yaml&lt;/code&gt; configuration and adjust the values to best suit your use-case:&lt;/p&gt;
-&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-text&quot; data-lang=&quot;text&quot;&gt;storm.messaging.transport: &amp;quot;backtype.storm.messaging.netty.Context&amp;quot;
-storm.messaging.netty.server_worker_threads: 1
-storm.messaging.netty.client_worker_threads: 1
-storm.messaging.netty.buffer_size: 5242880
-storm.messaging.netty.max_retries: 100
-storm.messaging.netty.max_wait_ms: 1000
-storm.messaging.netty.min_wait_ms: 100
-&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
-&lt;p&gt;You can also write your own transport implementation by implementing the &lt;a href=&quot;https://github.com/apache/incubator-storm/blob/master/storm-core/src/jvm/backtype/storm/messaging/IContext.java&quot;&gt;&lt;code&gt;backtype.storm.messaging.IContext&lt;/code&gt;&lt;/a&gt; interface.&lt;/p&gt;
-
-&lt;h2 id=&quot;log-viewer-ui&quot;&gt;Log Viewer UI&lt;/h2&gt;
-
-&lt;p&gt;Storm now includes a helpful new feature for debugging and monitoring topologies: The &lt;code&gt;logviewer&lt;/code&gt; daemon.&lt;/p&gt;
-
-&lt;p&gt;In earlier versions of Storm, viewing worker logs involved determining a worker&amp;#39;s location (host/port), typically through Storm UI, then &lt;code&gt;ssh&lt;/code&gt;ing to that host and &lt;code&gt;tail&lt;/code&gt;ing the corresponding worker log file. With the new log viewer, you can now easily access a specific worker&amp;#39;s log in a web browser by clicking on a worker&amp;#39;s port number right from Storm UI.&lt;/p&gt;
-
-&lt;p&gt;The &lt;code&gt;logviewer&lt;/code&gt; daemon runs as a separate process on Storm supervisor nodes. To enable the &lt;code&gt;logviewer&lt;/code&gt; run the following command (under supervision) on your cluster&amp;#39;s supervisor nodes:&lt;/p&gt;
-&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-text&quot; data-lang=&quot;text&quot;&gt;$ storm logviewer
-&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
-&lt;h2 id=&quot;improved-windows-support&quot;&gt;Improved Windows Support&lt;/h2&gt;
-
-&lt;p&gt;In previous versions, running Storm on Microsoft Windows required installing third-party binaries (0MQ), modifying Storm&amp;#39;s source, and adding Windows-specific scripts. With the addition of the platform-independent Netty transport, as well as numerous enhancements to make Storm more platform-independent, running Storm on Windows is easier than ever.&lt;/p&gt;
-
-&lt;h2 id=&quot;security-improvements&quot;&gt;Security Improvements&lt;/h2&gt;
-
-&lt;p&gt;Security, Authentication, and Authorization have been and will continue to be important focus areas for upcoming features. Storm 0.9.0 introduces an API for pluggable tuple serialization and a blowfish encryption based implementation for encrypting tuple data for sensitive use cases.&lt;/p&gt;
-
-&lt;h2 id=&quot;api-compatibility-and-upgrading&quot;&gt;API Compatibility and Upgrading&lt;/h2&gt;
-
-&lt;p&gt;For most Storm topology developers, upgrading to 0.9.0 is simply a matter of updating the &lt;a href=&quot;https://clojars.org/storm&quot;&gt;dependency&lt;/a&gt;. Storm&amp;#39;s core API has changed very little since the 0.8.2 release.&lt;/p&gt;
-
-&lt;p&gt;On the devops side, when upgrading to a new Storm release, it is safest to clear any existing state (Zookeeper, &lt;code&gt;storm.local.dir&lt;/code&gt;), prior to upgrading.&lt;/p&gt;
-
-&lt;h2 id=&quot;logging-changes&quot;&gt;Logging Changes&lt;/h2&gt;
-
-&lt;p&gt;Another important change in 0.9.0 has to do with logging. Storm has largely switched over to the &lt;a href=&quot;http://www.slf4j.org&quot;&gt;slf4j API&lt;/a&gt; (backed by a &lt;a href=&quot;http://logback.qos.ch&quot;&gt;logback&lt;/a&gt; logger implementation). Some Storm dependencies rely on the log4j API, so Storm currently depends on &lt;a href=&quot;http://www.slf4j.org/legacy.html#log4j-over-slf4j&quot;&gt;log4j-over-slf4j&lt;/a&gt;.&lt;/p&gt;
-
-&lt;p&gt;These changes have implications for existing topologies and topology components that use the log4j API.&lt;/p&gt;
-
-&lt;p&gt;In general, and when possible, Storm topologies and topology components should use the &lt;a href=&quot;http://www.slf4j.org&quot;&gt;slf4j API&lt;/a&gt; for logging.&lt;/p&gt;
-
-&lt;h2 id=&quot;thanks&quot;&gt;Thanks&lt;/h2&gt;
-
-&lt;p&gt;Special thanks are due to all those who have contributed to Storm -- whether through direct code contributions, documentation, bug reports, or helping other users on the mailing lists. Your efforts are much appreciated.&lt;/p&gt;
-
-&lt;h2 id=&quot;changelog&quot;&gt;Changelog&lt;/h2&gt;
-
-&lt;ul&gt;
-&lt;li&gt;Update build configuration to force compatibility with Java 1.6&lt;/li&gt;
-&lt;li&gt;Fixed a netty client issue where sleep times for reconnection could be negative (thanks brndnmtthws)&lt;/li&gt;
-&lt;li&gt;Fixed an issue that would cause storm-netty unit tests to fail&lt;/li&gt;
-&lt;li&gt;Added configuration to limit ShellBolt internal _pendingWrites queue length (thanks xiaokang)&lt;/li&gt;
-&lt;li&gt;Fixed a display issue with system stats in Storm UI (thanks d2r)&lt;/li&gt;
-&lt;li&gt;Nimbus now does worker heartbeat timeout checks as soon as heartbeats are updated (thanks d2r)&lt;/li&gt;
-&lt;li&gt;The logviewer now determines log file location by examining the logback configuration (thanks strongh)&lt;/li&gt;
-&lt;li&gt;Allow tick tuples to work with the system bolt (thanks xumingming)&lt;/li&gt;
-&lt;li&gt;Add default configuration values for the netty transport and the ability to configure the number of worker threads (thanks revans2)&lt;/li&gt;
-&lt;li&gt;Added timeout to unit tests to prevent a situation where tests would hang indefinitely (thanks d2r)&lt;/li&gt;
-&lt;li&gt;Fixed an issue in the system bolt where local mode would not be detected accurately (thanks miofthena)&lt;/li&gt;
-&lt;li&gt;Fixed &lt;code&gt;storm jar&lt;/code&gt; command to work properly when STORM_JAR_JVM_OPTS is not specified (thanks roadkill001)&lt;/li&gt;
-&lt;li&gt;All logging now done with slf4j&lt;/li&gt;
-&lt;li&gt;Replaced log4j logging system with logback&lt;/li&gt;
-&lt;li&gt;Logs are now limited to 1GB per worker (configurable via logging configuration file)&lt;/li&gt;
-&lt;li&gt;Build upgraded to leiningen 2.0&lt;/li&gt;
-&lt;li&gt;Revamped Trident spout interfaces to support more dynamic spouts, such as a spout that reads from a changing set of brokers&lt;/li&gt;
-&lt;li&gt;How tuples are serialized is now pluggable (thanks anfeng)&lt;/li&gt;
-&lt;li&gt;Added blowfish encryption based tuple serialization (thanks anfeng)&lt;/li&gt;
-&lt;li&gt;Have storm fall back to installed storm.yaml (thanks revans2)&lt;/li&gt;
-&lt;li&gt;Improve error message when Storm detects bundled storm.yaml to show the URL&amp;#39;s for offending resources (thanks revans2)&lt;/li&gt;
-&lt;li&gt;Nimbus throws NotAliveException instead of FileNotFoundException from various query methods when topology is no longer alive (thanks revans2)&lt;/li&gt;
-&lt;li&gt;Escape HTML and Javascript appropriately in Storm UI (thanks d2r)&lt;/li&gt;
-&lt;li&gt;Storm&amp;#39;s Zookeeper client now uses bounded exponential backoff strategy on failures&lt;/li&gt;
-&lt;li&gt;Automatically drain and log error stream of multilang subprocesses&lt;/li&gt;
-&lt;li&gt;Append component name to thread name of running executors so that logs are easier to read&lt;/li&gt;
-&lt;li&gt;Messaging system used for passing messages between workers is now pluggable (thanks anfeng)&lt;/li&gt;
-&lt;li&gt;Netty implementation of messaging (thanks anfeng)&lt;/li&gt;
-&lt;li&gt;Include topology id, worker port, and worker id in properties for worker processes, useful for logging (thanks d2r)&lt;/li&gt;
-&lt;li&gt;Tick tuples can now be scheduled using floating point seconds (thanks tscurtu)&lt;/li&gt;
-&lt;li&gt;Added log viewer daemon and links from UI to logviewers (thanks xiaokang)&lt;/li&gt;
-&lt;li&gt;DRPC server childopts now configurable (thanks strongh)&lt;/li&gt;
-&lt;li&gt;Default number of ackers to number of workers, instead of just one (thanks lyogavin)&lt;/li&gt;
-&lt;li&gt;Validate that Storm configs are of proper types/format/structure (thanks d2r)&lt;/li&gt;
-&lt;li&gt;FixedBatchSpout will now replay batches appropriately on batch failure (thanks ptgoetz)&lt;/li&gt;
-&lt;li&gt;Can set JAR_JVM_OPTS env variable to add jvm options when calling &amp;#39;storm jar&amp;#39; (thanks srmelody)&lt;/li&gt;
-&lt;li&gt;Throw error if batch id for transaction is behind the batch id in the opaque value (thanks mrflip)&lt;/li&gt;
-&lt;li&gt;Sort topologies by name in UI (thanks jaked)&lt;/li&gt;
-&lt;li&gt;Added LoggingMetricsConsumer to log all metrics to a file, by default not enabled (thanks mrflip)&lt;/li&gt;
-&lt;li&gt;Add prepare(Map conf) method to TopologyValidator (thanks ankitoshniwal)&lt;/li&gt;
-&lt;li&gt;Bug fix: Supervisor provides full path to workers to logging config rather than relative path (thanks revans2) &lt;/li&gt;
-&lt;li&gt;Bug fix: Call ReducerAggregator#init properly when used within persistentAggregate (thanks lorcan)&lt;/li&gt;
-&lt;li&gt;Bug fix: Set component-specific configs correctly for Trident spouts&lt;/li&gt;
-&lt;/ul&gt;
-</description>
-        <pubDate>Sun, 08 Dec 2013 00:00:00 -0500</pubDate>
-        <link>http://storm.apache.org/2013/12/08/storm090-released.html</link>
-        <guid isPermaLink="true">http://storm.apache.org/2013/12/08/storm090-released.html</guid>
-        
-        
-      </item>
-    
-      <item>
-        <title>Storm 0.8.2 released</title>
-        <description>&lt;p&gt;Storm 0.8.2 has been released and is available from &lt;a href=&quot;/downloads.html&quot;&gt;the downloads page&lt;/a&gt;. This release contains a ton of improvements and fixes and is a highly recommended upgrade for everyone.&lt;/p&gt;
-
-&lt;h2 id=&quot;isolation-scheduler&quot;&gt;Isolation Scheduler&lt;/h2&gt;
-
-&lt;p&gt;The highlight of this release is the new &amp;quot;Isolation scheduler&amp;quot; that makes it easy and safe to share a cluster among many topologies. The isolation scheduler lets you specify which topologies should be &amp;quot;isolated&amp;quot;, meaning that they run on a dedicated set of machines within the cluster where no other topologies will be running. These isolated topologies are given priority on the cluster, so resources will be allocated to isolated topologies if there&amp;#39;s competition with non-isolated topologies, and resources will be taken away from non-isolated topologies if necessary to get resources for an isolated topology. Once all isolated topologies are allocated, the remaining machines on the cluster are shared among all non-isolated topologies.&lt;/p&gt;
-
-&lt;p&gt;You configure the isolation scheduler in the Nimbus configuration. Set &amp;quot;storm.scheduler&amp;quot; to &amp;quot;backtype.storm.scheduler.IsolationScheduler&amp;quot;. Then, use the &amp;quot;isolation.scheduler.machines&amp;quot; config to specify how many machines each topology should get. This config is a map from topology name to number of machines. For example:&lt;/p&gt;
-
-&lt;script src=&quot;https://gist.github.com/4514691.js&quot;&gt;&lt;/script&gt;
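-
-&lt;p&gt;For illustration, a minimal &lt;code&gt;storm.yaml&lt;/code&gt; sketch of this configuration (the topology names and machine counts here are hypothetical examples):&lt;/p&gt;
-&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-text&quot; data-lang=&quot;text&quot;&gt;storm.scheduler: &amp;quot;backtype.storm.scheduler.IsolationScheduler&amp;quot;
-isolation.scheduler.machines:
-    &amp;quot;my-production-topology&amp;quot;: 6
-    &amp;quot;another-production-topology&amp;quot;: 3
-&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;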
-
-&lt;p&gt;Any topologies submitted to the cluster not listed there will not be isolated. Note that there is no way for a user of Storm to affect their isolation settings – this is only allowed by the administrator of the cluster (this is very much intentional).&lt;/p&gt;
-
-&lt;p&gt;The isolation scheduler solves the multi-tenancy problem – avoiding resource contention between topologies – by providing full isolation between topologies. The intention is that &amp;quot;productionized&amp;quot; topologies should be listed in the isolation config, and test or in-development topologies should not. The remaining machines on the cluster serve the dual role of failover for isolated topologies and for running the non-isolated topologies.&lt;/p&gt;
-
-&lt;h2 id=&quot;storm-ui-improvements&quot;&gt;Storm UI improvements&lt;/h2&gt;
-
-&lt;p&gt;The Storm UI has also been made significantly more useful. There are new stats &amp;quot;#executed&amp;quot;, &amp;quot;execute latency&amp;quot;, and &amp;quot;capacity&amp;quot; tracked for all bolts. The &amp;quot;capacity&amp;quot; metric is very useful and tells you what % of the time in the last 10 minutes the bolt spent executing tuples. If this value is close to 1, then the bolt is &amp;quot;at capacity&amp;quot; and is a bottleneck in your topology. The solution to at-capacity bolts is to increase the parallelism of that bolt.&lt;/p&gt;
-
-&lt;p&gt;Another useful improvement is the ability to kill, activate, deactivate, and rebalance topologies from the Storm UI.&lt;/p&gt;
-
-&lt;h2 id=&quot;important-bug-fixes&quot;&gt;Important bug fixes&lt;/h2&gt;
-
-&lt;p&gt;There are also a few important bug fixes in this release. We fixed two bugs that could cause a topology to freeze up and stop processing. One of these bugs was extremely unlikely to hit, but the other one was a regression in 0.8.1 and there was a small chance of hitting it anytime a worker was restarted.&lt;/p&gt;
-
-&lt;h2 id=&quot;changelog&quot;&gt;Changelog&lt;/h2&gt;
-
-&lt;ul&gt;
-&lt;li&gt;Added backtype.storm.scheduler.IsolationScheduler. This lets you run topologies that are completely isolated at the machine level. Configure Nimbus to isolate certain topologies, and how many machines to give to each of those topologies, with the isolation.scheduler.machines config in Nimbus&amp;#39;s storm.yaml. Topologies run on the cluster that are not listed there will share whatever remaining machines there are on the cluster after machines are allocated to the listed topologies.&lt;/li&gt;
-&lt;li&gt;Storm UI now uses nimbus.host to find Nimbus rather than always using localhost (thanks Frostman)&lt;/li&gt;
-&lt;li&gt;Added report-error! to Clojure DSL&lt;/li&gt;
-&lt;li&gt;Automatically throttle errors sent to Zookeeper/Storm UI when too many are reported in a time interval (all errors are still logged). Configured with TOPOLOGY_MAX_ERROR_REPORT_PER_INTERVAL and TOPOLOGY_ERROR_THROTTLE_INTERVAL_SECS&lt;/li&gt;
-&lt;li&gt;Kryo instance used for serialization can now be controlled via IKryoFactory interface and TOPOLOGY_KRYO_FACTORY config&lt;/li&gt;
-&lt;li&gt;Add ability to plug in custom code into Nimbus to allow/disallow topologies to be submitted via NIMBUS_TOPOLOGY_VALIDATOR config&lt;/li&gt;
-&lt;li&gt;Added TOPOLOGY_TRIDENT_BATCH_EMIT_INTERVAL_MILLIS config to control how often a batch can be emitted in a Trident topology. Defaults to 500 milliseconds. This is used to prevent too much load from being placed on Zookeeper in the case that batches are being processed super quickly.&lt;/li&gt;
-&lt;li&gt;Log any topology submissions errors in nimbus.log&lt;/li&gt;
-&lt;li&gt;Add static helpers in Config when using regular maps&lt;/li&gt;
-&lt;li&gt;Make Trident much more memory efficient during failures by immediately removing state for failed attempts when a more recent attempt is seen&lt;/li&gt;
-&lt;li&gt;Add ability to name portions of a Trident computation and have those names appear in the Storm UI&lt;/li&gt;
-&lt;li&gt;Show Nimbus and topology configurations through Storm UI (thanks rnfein)&lt;/li&gt;
-&lt;li&gt;Added ITupleCollection interface for TridentStates and a TupleCollectionGet QueryFunction for getting the full contents of a state. MemoryMapState and LRUMemoryMapState implement this&lt;/li&gt;
-&lt;li&gt;Can now submit a topology in inactive state. Storm will wait to call open/prepare on the spouts/bolts until it is first activated.&lt;/li&gt;
-&lt;li&gt;Can now activate, deactivate, rebalance, and kill topologies from the Storm UI (thanks Frostman)&lt;/li&gt;
-&lt;li&gt;Can now use --config option to override which yaml file from ~/.storm to use for the config (thanks tjun)&lt;/li&gt;
-&lt;li&gt;Redesigned the pluggable resource scheduler (INimbus, ISupervisor) interfaces to allow for much simpler integrations&lt;/li&gt;
-&lt;li&gt;Added prepare method to IScheduler&lt;/li&gt;
-&lt;li&gt;Added &amp;quot;throws Exception&amp;quot; to TestJob interface&lt;/li&gt;
-&lt;li&gt;Added reportError to multilang protocol and updated Python and Ruby adapters to use it (thanks Lazyshot)&lt;/li&gt;
-&lt;li&gt;Number tuples executed now tracked and shown in Storm UI&lt;/li&gt;
-&lt;li&gt;Added ReportedFailedException which causes a batch to fail without killing worker and reports the error to the UI&lt;/li&gt;
-&lt;li&gt;Execute latency now tracked and shown in Storm UI&lt;/li&gt;
-&lt;li&gt;Adding testTuple methods for easily creating Tuple instances to Testing API (thanks xumingming)&lt;/li&gt;
-&lt;li&gt;Trident now throws an error during construction of a topology when trying to select fields that don&amp;#39;t exist in a stream (thanks xumingming)&lt;/li&gt;
-&lt;li&gt;Compute the capacity of a bolt based on execute latency and #executed over last 10 minutes and display in UI&lt;/li&gt;
-&lt;li&gt;Storm UI displays exception instead of blank page when there&amp;#39;s an error rendering the page (thanks Frostman)&lt;/li&gt;
-&lt;li&gt;Added MultiScheme interface (thanks sritchie)&lt;/li&gt;
-&lt;li&gt;Added MockTridentTuple for testing (thanks emblem)&lt;/li&gt;
-&lt;li&gt;Add whitelist methods to Cluster to allow only a subset of hosts to be revealed as available slots&lt;/li&gt;
-&lt;li&gt;Updated Trident Debug filter to take in an identifier to use when logging (thanks emblem)&lt;/li&gt;
-&lt;li&gt;Number of DRPC server worker threads now customizable (thanks xiaokang)&lt;/li&gt;
-&lt;li&gt;DRPC server now uses a bounded queue for requests to prevent being overloaded with requests (thanks xiaokang)&lt;/li&gt;
-&lt;li&gt;Add &lt;code&gt;__hash__&lt;/code&gt; method to all generated Python Thrift objects so that Python code can read Nimbus stats which use Thrift objects as dict keys&lt;/li&gt;
-&lt;li&gt;Bug fix: Fix for bug that could cause topology to hang when ZMQ blocks sending to a worker that got reassigned&lt;/li&gt;
-&lt;li&gt;Bug fix: Fix deadlock bug due to variant of dining philosophers problem. Spouts now use an overflow buffer to prevent blocking and guarantee that it can consume the incoming queue of acks/fails.&lt;/li&gt;
-&lt;li&gt;Bug fix: Fix race condition in supervisor that would lead to supervisor continuously crashing due to not finding &amp;quot;stormconf.ser&amp;quot; file for an already killed topology&lt;/li&gt;
-&lt;li&gt;Bug fix: bin/storm script now displays a helpful error message when an invalid command is specified&lt;/li&gt;
-&lt;li&gt;Bug fix: fixed NPE when emitting during emit method of Aggregator&lt;/li&gt;
-&lt;li&gt;Bug fix: URLs with periods in them in Storm UI now route correctly&lt;/li&gt;
-&lt;li&gt;Bug fix: Fix occasional cascading worker crashes when a worker dies, caused by not removing connections from the connection cache appropriately&lt;/li&gt;
-&lt;/ul&gt;
-</description>
-        <pubDate>Fri, 11 Jan 2013 00:00:00 -0500</pubDate>
-        <link>http://storm.apache.org/2013/01/11/storm082-released.html</link>
-        <guid isPermaLink="true">http://storm.apache.org/2013/01/11/storm082-released.html</guid>
-        
-        
-      </item>
-    
   </channel>
 </rss>
diff --git a/_site/getting-help.html b/_site/getting-help.html
index f1010fd..0734b28 100644
--- a/_site/getting-help.html
+++ b/_site/getting-help.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -104,13 +104,13 @@
 
 <p>You can view the archives of the mailing list <a href="http://mail-archives.apache.org/mod_mbox/storm-dev/">here</a>.</p>
 
-<h4 id="which-list-should-i-send/subscribe-to?">Which list should I send/subscribe to?</h4>
+<h4 id="which-list-should-i-send-subscribe-to">Which list should I send/subscribe to?</h4>
 
 <p>If you are using a pre-built binary distribution of Storm, then chances are you should send questions, comments, storm-related announcements, etc. to <a href="user@storm.apache.org">user@storm.apache.org</a>. </p>
 
 <p>If you are building storm from source, developing new features, or otherwise hacking storm source code, then <a href="dev@storm.apache.org">dev@storm.apache.org</a> is more appropriate. </p>
 
-<h4 id="what-will-happen-with-storm-user@googlegroups.com?">What will happen with <a href="mailto:storm-user@googlegroups.com">storm-user@googlegroups.com</a>?</h4>
+<h4 id="what-will-happen-with-storm-user-googlegroups-com">What will happen with <a href="mailto:storm-user@googlegroups.com">storm-user@googlegroups.com</a>?</h4>
 
 <p>All existing messages will remain archived there, and can be accessed/searched <a href="https://groups.google.com/forum/#!forum/storm-user">here</a>.</p>
 
diff --git a/_site/index.html b/_site/index.html
index 648637e..72fb465 100644
--- a/_site/index.html
+++ b/_site/index.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -107,6 +107,10 @@
                     <ul class="latest-news">
                         <ul class="latest-news">
                         
+                        <li><a href="/2015/11/05/storm096-released.html">Storm 0.9.6 released</a>&nbsp;<span class="small">(05 Nov 2015) </span></li>
+                        
+                        <li><a href="/2015/11/05/storm0100-released.html">Storm 0.10.0 released</a>&nbsp;<span class="small">(05 Nov 2015) </span></li>
+                        
                         <li><a href="/2015/06/15/storm0100-beta-released.html">Storm 0.10.0 Beta Released</a>&nbsp;<span class="small">(15 Jun 2015) </span></li>
                         
                         <li><a href="/2015/06/04/storm095-released.html">Storm 0.9.5 released</a>&nbsp;<span class="small">(04 Jun 2015) </span></li>
@@ -117,12 +121,8 @@
                         
                         <li><a href="/2014/10/20/storm093-release-candidate.html">Storm 0.9.3 release candidate 1 available</a>&nbsp;<span class="small">(20 Oct 2014) </span></li>
                         
-                        <li><a href="/2014/06/25/storm092-released.html">Storm 0.9.2 released</a>&nbsp;<span class="small">(25 Jun 2014) </span></li>
-                        
-                        <li><a href="/2014/06/17/contest-results.html">Storm Logo Contest Results</a>&nbsp;<span class="small">(17 Jun 2014) </span></li>
-                        
                     </ul> 
-                    <p align="right"><a href="/2015/06/15/storm0100-beta-released.html" class="btn-std">More News</a></p>
+                    <p align="right"><a href="/2015/11/05/storm096-released.html" class="btn-std">More News</a></p>
                 </div>
             </div>
         </div>
diff --git a/_site/news.html b/_site/news.html
index a4ee0dc..3b2d162 100644
--- a/_site/news.html
+++ b/_site/news.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -94,6 +94,10 @@
     <div class="col-md-3">
         <ul class="news" id="news-list">
             
+      		<li><a href="/2015/11/05/storm096-released.html">Storm 0.9.6 released</a></li>
+    		
+      		<li><a href="/2015/11/05/storm0100-released.html">Storm 0.10.0 released</a></li>
+    		
       		<li><a href="/2015/06/15/storm0100-beta-released.html">Storm 0.10.0 Beta Released</a></li>
     		
       		<li><a href="/2015/06/04/storm095-released.html">Storm 0.9.5 released</a></li>
diff --git a/_site/talksAndVideos.html b/_site/talksAndVideos.html
index b2c8541..6ee9f56 100644
--- a/_site/talksAndVideos.html
+++ b/_site/talksAndVideos.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
diff --git a/_site/tutorial.html b/_site/tutorial.html
index 695db94..559ea39 100644
--- a/_site/tutorial.html
+++ b/_site/tutorial.html
@@ -74,7 +74,7 @@
                         <li><a href="/contribute/BYLAWS.html">ByLaws</a></li>
                     </ul>
                 </li>
-                <li><a href="/2015/06/15/storm0100-beta-released.html" id="news">News</a></li>
+                <li><a href="/2015/11/05/storm096-released.html" id="news">News</a></li>
             </ul>
         </nav>
     </div>
@@ -113,7 +113,7 @@
 <p>To do realtime computation on Storm, you create what are called &quot;topologies&quot;. A topology is a graph of computation. Each node in a topology contains processing logic, and links between nodes indicate how data should be passed around between nodes.</p>
 
 <p>Running a topology is straightforward. First, you package all your code and dependencies into a single jar. Then, you run a command like the following:</p>
-<div class="highlight"><pre><code class="language-text" data-lang="text">storm jar all-my-code.jar backtype.storm.MyTopology arg1 arg2
+<div class="highlight"><pre><code class="language-" data-lang="">storm jar all-my-code.jar backtype.storm.MyTopology arg1 arg2
 </code></pre></div>
 <p>This runs the class <code>backtype.storm.MyTopology</code> with the arguments <code>arg1</code> and <code>arg2</code>. The main function of the class defines the topology and submits it to Nimbus. The <code>storm jar</code> part takes care of connecting to Nimbus and uploading the jar.</p>
 
@@ -148,20 +148,20 @@
     <span class="kd">private</span> <span class="n">OutputCollectorBase</span> <span class="n">_collector</span><span class="o">;</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">prepare</span><span class="o">(</span><span class="n">Map</span> <span class="n">conf</span><span class="o">,</span> <span class="n">TopologyContext</span> <span class="n">context</span><span class="o">,</span> <span class="n">OutputCollectorBase</span> <span class="n">collector</span><span class="o">)</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">prepare</span><span class="o">(</span><span class="n">Map</span> <span class="n">conf</span><span class="o">,</span> <span class="n">TopologyContext</span> <span class="n">context</span><span class="o">,</span> <span class="n">OutputCollectorBase</span> <span class="n">collector</span><span class="o">)</span> <span class="o">{</span>
         <span class="n">_collector</span> <span class="o">=</span> <span class="n">collector</span><span class="o">;</span>
     <span class="o">}</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">execute</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">input</span><span class="o">)</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">execute</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">input</span><span class="o">)</span> <span class="o">{</span>
         <span class="kt">int</span> <span class="n">val</span> <span class="o">=</span> <span class="n">input</span><span class="o">.</span><span class="na">getInteger</span><span class="o">(</span><span class="mi">0</span><span class="o">);</span>        
-        <span class="n">_collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="n">input</span><span class="o">,</span> <span class="k">new</span> <span class="nf">Values</span><span class="o">(</span><span class="n">val</span><span class="o">*</span><span class="mi">2</span><span class="o">,</span> <span class="n">val</span><span class="o">*</span><span class="mi">3</span><span class="o">));</span>
+        <span class="n">_collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="n">input</span><span class="o">,</span> <span class="k">new</span> <span class="n">Values</span><span class="o">(</span><span class="n">val</span><span class="o">*</span><span class="mi">2</span><span class="o">,</span> <span class="n">val</span><span class="o">*</span><span class="mi">3</span><span class="o">));</span>
         <span class="n">_collector</span><span class="o">.</span><span class="na">ack</span><span class="o">(</span><span class="n">input</span><span class="o">);</span>
     <span class="o">}</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">declareOutputFields</span><span class="o">(</span><span class="n">OutputFieldsDeclarer</span> <span class="n">declarer</span><span class="o">)</span> <span class="o">{</span>
-        <span class="n">declarer</span><span class="o">.</span><span class="na">declare</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;double&quot;</span><span class="o">,</span> <span class="s">&quot;triple&quot;</span><span class="o">));</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">declareOutputFields</span><span class="o">(</span><span class="n">OutputFieldsDeclarer</span> <span class="n">declarer</span><span class="o">)</span> <span class="o">{</span>
+        <span class="n">declarer</span><span class="o">.</span><span class="na">declare</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"double"</span><span class="o">,</span> <span class="s">"triple"</span><span class="o">));</span>
     <span class="o">}</span>    
 <span class="o">}</span>
 </code></pre></div>
@@ -170,12 +170,12 @@
 <h2 id="a-simple-topology">A simple topology</h2>
 
 <p>Let&#39;s take a look at a simple topology to explore the concepts more and see how the code shapes up. Let&#39;s look at the <code>ExclamationTopology</code> definition from storm-starter:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">TopologyBuilder</span> <span class="n">builder</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">TopologyBuilder</span><span class="o">();</span>        
-<span class="n">builder</span><span class="o">.</span><span class="na">setSpout</span><span class="o">(</span><span class="s">&quot;words&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">TestWordSpout</span><span class="o">(),</span> <span class="mi">10</span><span class="o">);</span>        
-<span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">&quot;exclaim1&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">ExclamationBolt</span><span class="o">(),</span> <span class="mi">3</span><span class="o">)</span>
-        <span class="o">.</span><span class="na">shuffleGrouping</span><span class="o">(</span><span class="s">&quot;words&quot;</span><span class="o">);</span>
-<span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">&quot;exclaim2&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">ExclamationBolt</span><span class="o">(),</span> <span class="mi">2</span><span class="o">)</span>
-        <span class="o">.</span><span class="na">shuffleGrouping</span><span class="o">(</span><span class="s">&quot;exclaim1&quot;</span><span class="o">);</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">TopologyBuilder</span> <span class="n">builder</span> <span class="o">=</span> <span class="k">new</span> <span class="n">TopologyBuilder</span><span class="o">();</span>        
+<span class="n">builder</span><span class="o">.</span><span class="na">setSpout</span><span class="o">(</span><span class="s">"words"</span><span class="o">,</span> <span class="k">new</span> <span class="n">TestWordSpout</span><span class="o">(),</span> <span class="mi">10</span><span class="o">);</span>        
+<span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">"exclaim1"</span><span class="o">,</span> <span class="k">new</span> <span class="n">ExclamationBolt</span><span class="o">(),</span> <span class="mi">3</span><span class="o">)</span>
+        <span class="o">.</span><span class="na">shuffleGrouping</span><span class="o">(</span><span class="s">"words"</span><span class="o">);</span>
+<span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">"exclaim2"</span><span class="o">,</span> <span class="k">new</span> <span class="n">ExclamationBolt</span><span class="o">(),</span> <span class="mi">2</span><span class="o">)</span>
+        <span class="o">.</span><span class="na">shuffleGrouping</span><span class="o">(</span><span class="s">"exclaim1"</span><span class="o">);</span>
 </code></pre></div>
 <p>This topology contains a spout and two bolts. The spout emits words, and each bolt appends the string &quot;!!!&quot; to its input. The nodes are arranged in a line: the spout emits to the first bolt which then emits to the second bolt. If the spout emits the tuples [&quot;bob&quot;] and [&quot;john&quot;], then the second bolt will emit the words [&quot;bob!!!!!!&quot;] and [&quot;john!!!!!!&quot;].</p>
 
@@ -188,19 +188,19 @@
 <p><code>setBolt</code> returns an <a href="/javadoc/apidocs/backtype/storm/topology/InputDeclarer.html">InputDeclarer</a> object that is used to define the inputs to the Bolt. Here, component &quot;exclaim1&quot; declares that it wants to read all the tuples emitted by component &quot;words&quot; using a shuffle grouping, and component &quot;exclaim2&quot; declares that it wants to read all the tuples emitted by component &quot;exclaim1&quot; using a shuffle grouping. &quot;shuffle grouping&quot; means that tuples should be randomly distributed from the input tasks to the bolt&#39;s tasks. There are many ways to group data between components. These will be explained in a few sections.</p>
 
 <p>If you wanted component &quot;exclaim2&quot; to read all the tuples emitted by both component &quot;words&quot; and component &quot;exclaim1&quot;, you would write component &quot;exclaim2&quot;&#39;s definition like this:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">&quot;exclaim2&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">ExclamationBolt</span><span class="o">(),</span> <span class="mi">5</span><span class="o">)</span>
-            <span class="o">.</span><span class="na">shuffleGrouping</span><span class="o">(</span><span class="s">&quot;words&quot;</span><span class="o">)</span>
-            <span class="o">.</span><span class="na">shuffleGrouping</span><span class="o">(</span><span class="s">&quot;exclaim1&quot;</span><span class="o">);</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">"exclaim2"</span><span class="o">,</span> <span class="k">new</span> <span class="n">ExclamationBolt</span><span class="o">(),</span> <span class="mi">5</span><span class="o">)</span>
+            <span class="o">.</span><span class="na">shuffleGrouping</span><span class="o">(</span><span class="s">"words"</span><span class="o">)</span>
+            <span class="o">.</span><span class="na">shuffleGrouping</span><span class="o">(</span><span class="s">"exclaim1"</span><span class="o">);</span>
 </code></pre></div>
 <p>As you can see, input declarations can be chained to specify multiple sources for the Bolt.</p>
 
 <p>Let&#39;s dig into the implementations of the spouts and bolts in this topology. Spouts are responsible for emitting new messages into the topology. <code>TestWordSpout</code> in this topology emits a random word from the list [&quot;nathan&quot;, &quot;mike&quot;, &quot;jackson&quot;, &quot;golda&quot;, &quot;bertels&quot;] as a 1-tuple every 100ms. The implementation of <code>nextTuple()</code> in TestWordSpout looks like this:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kt">void</span> <span class="nf">nextTuple</span><span class="o">()</span> <span class="o">{</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kt">void</span> <span class="nf">nextTuple</span><span class="p">(</span><span class="o">)</span> <span class="o">{</span>
     <span class="n">Utils</span><span class="o">.</span><span class="na">sleep</span><span class="o">(</span><span class="mi">100</span><span class="o">);</span>
-    <span class="kd">final</span> <span class="n">String</span><span class="o">[]</span> <span class="n">words</span> <span class="o">=</span> <span class="k">new</span> <span class="n">String</span><span class="o">[]</span> <span class="o">{</span><span class="s">&quot;nathan&quot;</span><span class="o">,</span> <span class="s">&quot;mike&quot;</span><span class="o">,</span> <span class="s">&quot;jackson&quot;</span><span class="o">,</span> <span class="s">&quot;golda&quot;</span><span class="o">,</span> <span class="s">&quot;bertels&quot;</span><span class="o">};</span>
-    <span class="kd">final</span> <span class="n">Random</span> <span class="n">rand</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">Random</span><span class="o">();</span>
+    <span class="kd">final</span> <span class="n">String</span><span class="o">[]</span> <span class="n">words</span> <span class="o">=</span> <span class="k">new</span> <span class="n">String</span><span class="o">[]</span> <span class="o">{</span><span class="s">"nathan"</span><span class="o">,</span> <span class="s">"mike"</span><span class="o">,</span> <span class="s">"jackson"</span><span class="o">,</span> <span class="s">"golda"</span><span class="o">,</span> <span class="s">"bertels"</span><span class="o">};</span>
+    <span class="kd">final</span> <span class="n">Random</span> <span class="n">rand</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Random</span><span class="o">();</span>
     <span class="kd">final</span> <span class="n">String</span> <span class="n">word</span> <span class="o">=</span> <span class="n">words</span><span class="o">[</span><span class="n">rand</span><span class="o">.</span><span class="na">nextInt</span><span class="o">(</span><span class="n">words</span><span class="o">.</span><span class="na">length</span><span class="o">)];</span>
-    <span class="n">_collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="k">new</span> <span class="nf">Values</span><span class="o">(</span><span class="n">word</span><span class="o">));</span>
+    <span class="n">_collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="k">new</span> <span class="n">Values</span><span class="o">(</span><span class="n">word</span><span class="o">));</span>
 <span class="o">}</span>
 </code></pre></div>
 <p>As you can see, the implementation is very straightforward.</p>
@@ -210,27 +210,27 @@
     <span class="n">OutputCollector</span> <span class="n">_collector</span><span class="o">;</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">prepare</span><span class="o">(</span><span class="n">Map</span> <span class="n">conf</span><span class="o">,</span> <span class="n">TopologyContext</span> <span class="n">context</span><span class="o">,</span> <span class="n">OutputCollector</span> <span class="n">collector</span><span class="o">)</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">prepare</span><span class="o">(</span><span class="n">Map</span> <span class="n">conf</span><span class="o">,</span> <span class="n">TopologyContext</span> <span class="n">context</span><span class="o">,</span> <span class="n">OutputCollector</span> <span class="n">collector</span><span class="o">)</span> <span class="o">{</span>
         <span class="n">_collector</span> <span class="o">=</span> <span class="n">collector</span><span class="o">;</span>
     <span class="o">}</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">execute</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">tuple</span><span class="o">)</span> <span class="o">{</span>
-        <span class="n">_collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="n">tuple</span><span class="o">,</span> <span class="k">new</span> <span class="nf">Values</span><span class="o">(</span><span class="n">tuple</span><span class="o">.</span><span class="na">getString</span><span class="o">(</span><span class="mi">0</span><span class="o">)</span> <span class="o">+</span> <span class="s">&quot;!!!&quot;</span><span class="o">));</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">execute</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">tuple</span><span class="o">)</span> <span class="o">{</span>
+        <span class="n">_collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="n">tuple</span><span class="o">,</span> <span class="k">new</span> <span class="n">Values</span><span class="o">(</span><span class="n">tuple</span><span class="o">.</span><span class="na">getString</span><span class="o">(</span><span class="mi">0</span><span class="o">)</span> <span class="o">+</span> <span class="s">"!!!"</span><span class="o">));</span>
         <span class="n">_collector</span><span class="o">.</span><span class="na">ack</span><span class="o">(</span><span class="n">tuple</span><span class="o">);</span>
     <span class="o">}</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">cleanup</span><span class="o">()</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">cleanup</span><span class="o">()</span> <span class="o">{</span>
     <span class="o">}</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">declareOutputFields</span><span class="o">(</span><span class="n">OutputFieldsDeclarer</span> <span class="n">declarer</span><span class="o">)</span> <span class="o">{</span>
-        <span class="n">declarer</span><span class="o">.</span><span class="na">declare</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;word&quot;</span><span class="o">));</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">declareOutputFields</span><span class="o">(</span><span class="n">OutputFieldsDeclarer</span> <span class="n">declarer</span><span class="o">)</span> <span class="o">{</span>
+        <span class="n">declarer</span><span class="o">.</span><span class="na">declare</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"word"</span><span class="o">));</span>
     <span class="o">}</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="n">Map</span> <span class="nf">getComponentConfiguration</span><span class="o">()</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="n">Map</span> <span class="n">getComponentConfiguration</span><span class="o">()</span> <span class="o">{</span>
         <span class="k">return</span> <span class="kc">null</span><span class="o">;</span>
     <span class="o">}</span>
 <span class="o">}</span>
@@ -252,19 +252,19 @@
     <span class="n">OutputCollector</span> <span class="n">_collector</span><span class="o">;</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">prepare</span><span class="o">(</span><span class="n">Map</span> <span class="n">conf</span><span class="o">,</span> <span class="n">TopologyContext</span> <span class="n">context</span><span class="o">,</span> <span class="n">OutputCollector</span> <span class="n">collector</span><span class="o">)</span> <span class="o">{</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">prepare</span><span class="o">(</span><span class="n">Map</span> <span class="n">conf</span><span class="o">,</span> <span class="n">TopologyContext</span> <span class="n">context</span><span class="o">,</span> <span class="n">OutputCollector</span> <span class="n">collector</span><span class="o">)</span> <span class="o">{</span>
         <span class="n">_collector</span> <span class="o">=</span> <span class="n">collector</span><span class="o">;</span>
     <span class="o">}</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">execute</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">tuple</span><span class="o">)</span> <span class="o">{</span>
-        <span class="n">_collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="n">tuple</span><span class="o">,</span> <span class="k">new</span> <span class="nf">Values</span><span class="o">(</span><span class="n">tuple</span><span class="o">.</span><span class="na">getString</span><span class="o">(</span><span class="mi">0</span><span class="o">)</span> <span class="o">+</span> <span class="s">&quot;!!!&quot;</span><span class="o">));</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">execute</span><span class="o">(</span><span class="n">Tuple</span> <span class="n">tuple</span><span class="o">)</span> <span class="o">{</span>
+        <span class="n">_collector</span><span class="o">.</span><span class="na">emit</span><span class="o">(</span><span class="n">tuple</span><span class="o">,</span> <span class="k">new</span> <span class="n">Values</span><span class="o">(</span><span class="n">tuple</span><span class="o">.</span><span class="na">getString</span><span class="o">(</span><span class="mi">0</span><span class="o">)</span> <span class="o">+</span> <span class="s">"!!!"</span><span class="o">));</span>
         <span class="n">_collector</span><span class="o">.</span><span class="na">ack</span><span class="o">(</span><span class="n">tuple</span><span class="o">);</span>
     <span class="o">}</span>
 
     <span class="nd">@Override</span>
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">declareOutputFields</span><span class="o">(</span><span class="n">OutputFieldsDeclarer</span> <span class="n">declarer</span><span class="o">)</span> <span class="o">{</span>
-        <span class="n">declarer</span><span class="o">.</span><span class="na">declare</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;word&quot;</span><span class="o">));</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">declareOutputFields</span><span class="o">(</span><span class="n">OutputFieldsDeclarer</span> <span class="n">declarer</span><span class="o">)</span> <span class="o">{</span>
+        <span class="n">declarer</span><span class="o">.</span><span class="na">declare</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"word"</span><span class="o">));</span>
     <span class="o">}</span>    
 <span class="o">}</span>
 </code></pre></div>
@@ -277,14 +277,14 @@
 <p>In distributed mode, Storm operates as a cluster of machines. When you submit a topology to the master, you also submit all the code necessary to run the topology. The master will take care of distributing your code and allocating workers to run your topology. If workers go down, the master will reassign them somewhere else. You can read more about running topologies on a cluster on <a href="/documentation/Running-topologies-on-a-production-cluster.html">Running topologies on a production cluster</a>. </p>
 
 <p>Here&#39;s the code that runs <code>ExclamationTopology</code> in local mode:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">Config</span> <span class="n">conf</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">Config</span><span class="o">();</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">Config</span> <span class="n">conf</span> <span class="o">=</span> <span class="k">new</span> <span class="n">Config</span><span class="o">();</span>
 <span class="n">conf</span><span class="o">.</span><span class="na">setDebug</span><span class="o">(</span><span class="kc">true</span><span class="o">);</span>
 <span class="n">conf</span><span class="o">.</span><span class="na">setNumWorkers</span><span class="o">(</span><span class="mi">2</span><span class="o">);</span>
 
-<span class="n">LocalCluster</span> <span class="n">cluster</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">LocalCluster</span><span class="o">();</span>
-<span class="n">cluster</span><span class="o">.</span><span class="na">submitTopology</span><span class="o">(</span><span class="s">&quot;test&quot;</span><span class="o">,</span> <span class="n">conf</span><span class="o">,</span> <span class="n">builder</span><span class="o">.</span><span class="na">createTopology</span><span class="o">());</span>
+<span class="n">LocalCluster</span> <span class="n">cluster</span> <span class="o">=</span> <span class="k">new</span> <span class="n">LocalCluster</span><span class="o">();</span>
+<span class="n">cluster</span><span class="o">.</span><span class="na">submitTopology</span><span class="o">(</span><span class="s">"test"</span><span class="o">,</span> <span class="n">conf</span><span class="o">,</span> <span class="n">builder</span><span class="o">.</span><span class="na">createTopology</span><span class="o">());</span>
 <span class="n">Utils</span><span class="o">.</span><span class="na">sleep</span><span class="o">(</span><span class="mi">10000</span><span class="o">);</span>
-<span class="n">cluster</span><span class="o">.</span><span class="na">killTopology</span><span class="o">(</span><span class="s">&quot;test&quot;</span><span class="o">);</span>
+<span class="n">cluster</span><span class="o">.</span><span class="na">killTopology</span><span class="o">(</span><span class="s">"test"</span><span class="o">);</span>
 <span class="n">cluster</span><span class="o">.</span><span class="na">shutdown</span><span class="o">();</span>
 </code></pre></div>
 <p>First, the code defines an in-process cluster by creating a <code>LocalCluster</code> object. Submitting topologies to this virtual cluster is identical to submitting topologies to distributed clusters. It submits a topology to the <code>LocalCluster</code> by calling <code>submitTopology</code>, which takes as arguments a name for the running topology, a configuration for the topology, and then the topology itself.</p>
@@ -311,13 +311,13 @@
 <p>When a task for Bolt A emits a tuple to Bolt B, which task should it send the tuple to?</p>
 
 <p>A &quot;stream grouping&quot; answers this question by telling Storm how to send tuples between sets of tasks. Before we dig into the different kinds of stream groupings, let&#39;s take a look at another topology from <a href="http://github.com/apache/storm/blob/master/examples/storm-starter">storm-starter</a>. This <a href="https://github.com/apache/storm/blob/master/examples/storm-starter/src/jvm/storm/starter/WordCountTopology.java">WordCountTopology</a> reads sentences off of a spout and streams out of <code>WordCountBolt</code> the total number of times it has seen that word before:</p>
-<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">TopologyBuilder</span> <span class="n">builder</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">TopologyBuilder</span><span class="o">();</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">TopologyBuilder</span> <span class="n">builder</span> <span class="o">=</span> <span class="k">new</span> <span class="n">TopologyBuilder</span><span class="o">();</span>
 
-<span class="n">builder</span><span class="o">.</span><span class="na">setSpout</span><span class="o">(</span><span class="s">&quot;sentences&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">RandomSentenceSpout</span><span class="o">(),</span> <span class="mi">5</span><span class="o">);</span>        
-<span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">&quot;split&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">SplitSentence</span><span class="o">(),</span> <span class="mi">8</span><span class="o">)</span>
-        <span class="o">.</span><span class="na">shuffleGrouping</span><span class="o">(</span><span class="s">&quot;sentences&quot;</span><span class="o">);</span>
-<span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">&quot;count&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">WordCount</span><span class="o">(),</span> <span class="mi">12</span><span class="o">)</span>
-        <span class="o">.</span><span class="na">fieldsGrouping</span><span class="o">(</span><span class="s">&quot;split&quot;</span><span class="o">,</span> <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;word&quot;</span><span class="o">));</span>
+<span class="n">builder</span><span class="o">.</span><span class="na">setSpout</span><span class="o">(</span><span class="s">"sentences"</span><span class="o">,</span> <span class="k">new</span> <span class="n">RandomSentenceSpout</span><span class="o">(),</span> <span class="mi">5</span><span class="o">);</span>        
+<span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">"split"</span><span class="o">,</span> <span class="k">new</span> <span class="n">SplitSentence</span><span class="o">(),</span> <span class="mi">8</span><span class="o">)</span>
+        <span class="o">.</span><span class="na">shuffleGrouping</span><span class="o">(</span><span class="s">"sentences"</span><span class="o">);</span>
+<span class="n">builder</span><span class="o">.</span><span class="na">setBolt</span><span class="o">(</span><span class="s">"count"</span><span class="o">,</span> <span class="k">new</span> <span class="n">WordCount</span><span class="o">(),</span> <span class="mi">12</span><span class="o">)</span>
+        <span class="o">.</span><span class="na">fieldsGrouping</span><span class="o">(</span><span class="s">"split"</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"word"</span><span class="o">));</span>
 </code></pre></div>
 <p><code>SplitSentence</code> emits a tuple for each word in each sentence it receives, and <code>WordCount</code> keeps a map in memory from word to count. Each time <code>WordCount</code> receives a word, it updates its state and emits the new word count.</p>
 
@@ -337,12 +337,12 @@
 
 <p>Here&#39;s the definition of the <code>SplitSentence</code> bolt from <code>WordCountTopology</code>:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kd">public</span> <span class="kd">static</span> <span class="kd">class</span> <span class="nc">SplitSentence</span> <span class="kd">extends</span> <span class="n">ShellBolt</span> <span class="kd">implements</span> <span class="n">IRichBolt</span> <span class="o">{</span>
-    <span class="kd">public</span> <span class="nf">SplitSentence</span><span class="o">()</span> <span class="o">{</span>
-        <span class="kd">super</span><span class="o">(</span><span class="s">&quot;python&quot;</span><span class="o">,</span> <span class="s">&quot;splitsentence.py&quot;</span><span class="o">);</span>
+    <span class="kd">public</span> <span class="n">SplitSentence</span><span class="o">()</span> <span class="o">{</span>
+        <span class="kd">super</span><span class="o">(</span><span class="s">"python"</span><span class="o">,</span> <span class="s">"splitsentence.py"</span><span class="o">);</span>
     <span class="o">}</span>
 
-    <span class="kd">public</span> <span class="kt">void</span> <span class="nf">declareOutputFields</span><span class="o">(</span><span class="n">OutputFieldsDeclarer</span> <span class="n">declarer</span><span class="o">)</span> <span class="o">{</span>
-        <span class="n">declarer</span><span class="o">.</span><span class="na">declare</span><span class="o">(</span><span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">&quot;word&quot;</span><span class="o">));</span>
+    <span class="kd">public</span> <span class="kt">void</span> <span class="n">declareOutputFields</span><span class="o">(</span><span class="n">OutputFieldsDeclarer</span> <span class="n">declarer</span><span class="o">)</span> <span class="o">{</span>
+        <span class="n">declarer</span><span class="o">.</span><span class="na">declare</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"word"</span><span class="o">));</span>
     <span class="o">}</span>
 <span class="o">}</span>
 </code></pre></div>
@@ -351,7 +351,7 @@
 
 <span class="k">class</span> <span class="nc">SplitSentenceBolt</span><span class="p">(</span><span class="n">storm</span><span class="o">.</span><span class="n">BasicBolt</span><span class="p">):</span>
     <span class="k">def</span> <span class="nf">process</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">tup</span><span class="p">):</span>
-        <span class="n">words</span> <span class="o">=</span> <span class="n">tup</span><span class="o">.</span><span class="n">values</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">.</span><span class="n">split</span><span class="p">(</span><span class="s">&quot; &quot;</span><span class="p">)</span>
+        <span class="n">words</span> <span class="o">=</span> <span class="n">tup</span><span class="o">.</span><span class="n">values</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span><span class="o">.</span><span class="n">split</span><span class="p">(</span><span class="s">" "</span><span class="p">)</span>
         <span class="k">for</span> <span class="n">word</span> <span class="ow">in</span> <span class="n">words</span><span class="p">:</span>
           <span class="n">storm</span><span class="o">.</span><span class="n">emit</span><span class="p">([</span><span class="n">word</span><span class="p">])</span>
 
diff --git a/contribute/Contributing-to-Storm.md b/contribute/Contributing-to-Storm.md
index 7a5b232..351503a 100644
--- a/contribute/Contributing-to-Storm.md
+++ b/contribute/Contributing-to-Storm.md
@@ -21,7 +21,8 @@
 1. Open an issue on the [JIRA issue tracker](https://issues.apache.org/jira/browse/STORM) if one doesn't exist already
 2. Comment on the issue with your plan for implementing the issue. Explain what pieces of the codebase you're going to touch and how everything is going to fit together.
 3. Storm committers will iterate with you on the design to make sure you're on the right track
-4. Implement your issue, submit a pull request prefixed with the JIRA ID (e.g. "STORM-123: add new feature foo"), and iterate from there.
+4. Read through the developer documentation in [DEVELOPER.md](https://github.com/apache/storm/blob/master/DEVELOPER.md), which covers how to build, code style, testing, etc.
+5. Implement your issue, submit a pull request prefixed with the JIRA ID (e.g. "STORM-123: add new feature foo"), and iterate from there.
 
 ### Contributing documentation
 
diff --git a/documentation.html b/documentation.html
index 348edd8..a2fc2f9 100644
--- a/documentation.html
+++ b/documentation.html
@@ -49,8 +49,8 @@
                             </ul>
                         </div>
                         <div role="tabpanel" class="tab-pane" id="integration">
-                            <p>The following modules are included in the Apache Storm distribution and are not required for storm to operate, 
-                            but are useful for extending Storm in order to provide additional functionality such as integration with other 
+                            <p>The following modules are included in the Apache Storm distribution and are not required for storm to operate,
+                            but are useful for extending Storm in order to provide additional functionality such as integration with other
                             technologies frequently used in combination with Storm.</p>
                             <ul>
                                 <li><a href="documentation/storm-kafka.html">Kafka</a></li>
@@ -62,6 +62,7 @@
                                 <li><a href="documentation/storm-solr.html">Solr</a></li>
                                 <li><a href="documentation/storm-eventhubs.html">Azure EventHubs</a></li>
                                 <li><a href="documentation/flux.html">Flux</a> (declarative wiring/configuration of Topologies)</li>
+                                <li><a href="documentation/storm-sql.html">SQL</a> (writing topologies in SQL)</li>
                             </ul>
                         </div>
                         <div role="tabpanel" class="tab-pane" id="intermediate">
diff --git a/documentation/BYLAWS.md b/documentation/BYLAWS.md
new file mode 100644
index 0000000..10e857e
--- /dev/null
+++ b/documentation/BYLAWS.md
@@ -0,0 +1,98 @@
+---
+title: Apache Storm Project Bylaws
+layout: documentation
+documentation: true
+---
+
+## Roles and Responsibilities
+
+Apache projects define a set of roles with associated rights and responsibilities. These roles govern what tasks an individual may perform within the project. The roles are defined in the following sections:
+
+### Users:
+
+The most important participants in the project are people who use our software. The majority of our developers start out as users and guide their development efforts from the user's perspective.
+
+Users contribute to the Apache projects by providing feedback to developers in the form of bug reports and feature suggestions. As well, users participate in the Apache community by helping other users on mailing lists and user support forums.
+
+### Contributors:
+
+Contributors are all of the volunteers who are contributing time, code, documentation, or resources to the Storm Project. A contributor who makes sustained, welcome contributions to the project may be invited to become a Committer, though the exact timing of such invitations depends on many factors.
+
+### Committers:
+
+The project's Committers are responsible for the project's technical management. Committers have access to all project source repositories. Committers may cast binding votes on any technical discussion regarding Storm.
+
+Committer access is by invitation only and must be approved by lazy consensus of the active PMC members. A Committer is considered emeritus by their own declaration or by not contributing in any form to the project for over six months. An emeritus Committer may request reinstatement of commit access from the PMC. Such reinstatement is subject to lazy consensus approval of active PMC members.
+
+All Apache Committers are required to have a signed Contributor License Agreement (CLA) on file with the Apache Software Foundation. There is a [Committers' FAQ](https://www.apache.org/dev/committers.html) which provides more details on the requirements for Committers.
+
+A Committer who makes a sustained contribution to the project may be invited to become a member of the PMC. The form of contribution is not limited to code. It can also include code review, helping out users on the mailing lists, documentation, testing, etc.
+
+### Project Management Committee (PMC):
+
+The PMC is responsible to the board and the ASF for the management and oversight of the Apache Storm codebase. The responsibilities of the PMC include:
+
+ * Deciding what is distributed as products of the Apache Storm project. In particular all releases must be approved by the PMC.
+ * Maintaining the project's shared resources, including the codebase repository, mailing lists, and websites.
+ * Speaking on behalf of the project.
+ * Resolving license disputes regarding products of the project.
+ * Nominating new PMC members and Committers.
+ * Maintaining these bylaws and other guidelines of the project.
+
+Membership of the PMC is by invitation only and must be approved by a consensus approval of active PMC members. A PMC member is considered "emeritus" by their own declaration or by not contributing in any form to the project for over six months. An emeritus member may request reinstatement to the PMC. Such reinstatement is subject to consensus approval of the active PMC members.
+
+The chair of the PMC is appointed by the ASF board. The chair is an office holder of the Apache Software Foundation (Vice President, Apache Storm) and has primary responsibility to the board for the management of the projects within the scope of the Storm PMC. The chair reports to the board quarterly on developments within the Storm project.
+
+The chair of the PMC is rotated annually. When the chair is rotated or if the current chair of the PMC resigns, the PMC votes to recommend a new chair using Single Transferable Vote (STV) voting. See http://wiki.apache.org/general/BoardVoting for specifics. The decision must be ratified by the Apache board.
+
+## Voting
+
+Decisions regarding the project are made by votes on the primary project development mailing list (dev@storm.apache.org). Where necessary, PMC voting may take place on the private Storm PMC mailing list. Votes are clearly indicated by a subject line starting with [VOTE]. Votes may contain multiple items for approval and these should be clearly separated. Voting is carried out by replying to the vote mail. A vote may take one of four flavors:
+	
+| Vote | Meaning |
+|------|---------|
+| +1 | 'Yes,' 'Agree,' or 'the action should be performed.' |
+| +0 | Neutral about the proposed action. |
+| -0 | Mildly negative, but not enough so to want to block it. |
+| -1 | This is a negative vote. On issues where consensus is required, this vote counts as a veto. All vetoes must contain an explanation of why the veto is appropriate. Vetoes with no explanation are void. It may also be appropriate for a -1 vote to include an alternative course of action. |
+
+All participants in the Storm project are encouraged to show their agreement with or against a particular action by voting. For technical decisions, only the votes of active Committers are binding. Non-binding votes are still useful for those with binding votes to understand the perception of an action in the wider Storm community. For PMC decisions, only the votes of active PMC members are binding.
+
+Voting can also be applied to changes already made to the Storm codebase. These typically take the form of a veto (-1) in reply to the commit message sent when the commit is made. Note that this should be a rare occurrence. All efforts should be made to discuss issues when they are still patches before the code is committed.
+
+Only active (i.e. non-emeritus) Committers and PMC members have binding votes.
+
+## Approvals
+
+These are the types of approvals that can be sought. Different actions require different types of approval.
+
+| Approval Type | Criteria |
+|---------------|----------|
+| Consensus Approval | Consensus approval requires 3 binding +1 votes and no binding vetoes. |
+| Majority Approval | Majority approval requires at least 3 binding +1 votes and more +1 votes than -1 votes. |
+| Lazy Consensus | Lazy consensus requires no -1 votes ('silence gives assent'). |
+| 2/3 Majority | A 2/3 majority requires at least 3 votes and twice as many +1 votes as -1 votes. |
+
+### Vetoes
+
+A valid, binding veto cannot be overruled. If a veto is cast, it must be accompanied by a valid reason for the veto. The validity of a veto, if challenged, can be confirmed by anyone who has a binding vote. This does not necessarily signify agreement with the veto - merely that the veto is valid.
+
+If you disagree with a valid veto, you must lobby the person casting the veto to withdraw their veto. If a veto is not withdrawn, any action that has been vetoed must be reversed in a timely manner.
+
+## Actions
+
+This section describes the various actions which are undertaken within the project, the corresponding approval required for that action and those who have binding votes over the action.
+
+| Actions | Description | Approval | Binding Votes | Minimum Length | Mailing List |
+|---------|-------------|----------|---------------|----------------|--------------|
+| Code Change | A change made to the source code of the project and committed by a Committer. | A minimum of one +1 from a Committer other than the one who authored the patch, and no -1s. The code can be committed after the first +1. If a -1 is received on the patch within 7 days after the patch was posted, it may be reverted immediately if it was already merged. | Active Committers | 1 day from initial patch (**Note:** Committers should consider allowing more time for review based on the complexity and/or impact of the patch in question.) | JIRA or GitHub pull request (with notification sent to dev@storm.apache.org) |
+| Non-Code Change | A change made to a repository of the project and committed by a Committer. This includes documentation, website content, etc., but not source code, unless only comments are being modified. | Lazy Consensus | Active Committers | At the discretion of the Committer | JIRA or GitHub pull request (with notification sent to dev@storm.apache.org) |
+| Product Release | A vote is required to accept a proposed release as an official release of the project. Any Committer may call for a release vote at any point in time. | Majority Approval | Active PMC members | 3 days | dev@storm.apache.org |
+| Adoption of New Codebase | When the codebase for an existing, released product is to be replaced with an alternative codebase. If such a vote fails to gain approval, the existing code base will continue. This also covers the creation of new sub-projects and submodules within the project as well as merging of feature branches. | 2/3 Majority | Active PMC members | 6 days | dev@storm.apache.org |
+| New Committer | When a new Committer is proposed for the project. | Consensus Approval | Active PMC members | 3 days | private@storm.apache.org |
+| New PMC Member | When a member is proposed for the PMC. | Consensus Approval | Active PMC members | 3 days | private@storm.apache.org |
+| Emeritus PMC Member re-instatement | When an emeritus PMC member requests to be re-instated as an active PMC member. | Consensus Approval | Active PMC members | 6 days | private@storm.apache.org |
+| Emeritus Committer re-instatement | When an emeritus Committer requests to be re-instated as an active Committer. | Consensus Approval | Active PMC members | 6 days | private@storm.apache.org |
+| Committer Removal | When removal of commit privileges is sought. Note: Such actions will also be referred to the ASF board by the PMC chair. | 2/3 Majority | Active PMC members (excluding the Committer in question if a member of the PMC). | 6 Days | private@storm.apache.org |
+| PMC Member Removal | When removal of a PMC member is sought. Note: Such actions will also be referred to the ASF board by the PMC chair. | 2/3 Majority | Active PMC members (excluding the member in question). | 6 Days | private@storm.apache.org |
+| Modifying Bylaws | Modifying this document. | 2/3 Majority | Active PMC members | 6 Days | dev@storm.apache.org |
diff --git a/documentation/Contributing-to-Storm.md b/documentation/Contributing-to-Storm.md
new file mode 100644
index 0000000..fdc5835
--- /dev/null
+++ b/documentation/Contributing-to-Storm.md
@@ -0,0 +1,33 @@
+---
+title: Contributing
+layout: documentation
+documentation: true
+---
+
+### Getting started with contributing
+
+Some of the issues on the [issue tracker](https://issues.apache.org/jira/browse/STORM) are marked with the "Newbie" label. If you're interested in contributing to Storm but don't know where to begin, these are good issues to start with. They are a great way to get your feet wet with the codebase, because they touch only an isolated portion of it and involve a relatively small amount of work.
+
+### Learning the codebase
+
+The [Implementation docs](Implementation-docs.html) section of the wiki gives detailed walkthroughs of the codebase. Reading through these docs is highly recommended for understanding the codebase.
+
+### Contribution process
+
+Contributions to the Storm codebase should be sent as [GitHub](https://github.com/apache/storm) pull requests. If there are any problems with the pull request, we can iterate on them using GitHub's commenting features.
+
+For small patches, feel free to submit pull requests directly for them. For larger contributions, please use the following process. The idea behind this process is to prevent any wasted work and catch design issues early on:
+
+1. Open an issue on the [issue tracker](https://issues.apache.org/jira/browse/STORM) if one doesn't exist already
+2. Comment on the issue with your plan for implementing the issue. Explain what pieces of the codebase you're going to touch and how everything is going to fit together.
+3. Storm committers will iterate with you on the design to make sure you're on the right track
+4. Implement your issue, submit a pull request, and iterate from there.
+
+### Modules built on top of Storm
+
+Modules built on top of Storm (like spouts, bolts, etc.) that aren't appropriate for Storm core can be developed as your own project or as part of [@stormprocessor](https://github.com/stormprocessor). To propose a project for @stormprocessor, put it on your own GitHub account and send an email to the mailing list; the community can then discuss whether it's useful enough to be part of @stormprocessor. If so, you'll be added to the @stormprocessor organization and can maintain your project there. The advantage of hosting your module in @stormprocessor is that it will be easier for potential users to find.
+
+### Contributing documentation
+
+Documentation contributions are very welcome! The best way to send contributions is as emails through the mailing list.
+
diff --git a/documentation/Documentation.md b/documentation/Documentation.md
new file mode 100644
index 0000000..ab555c1
--- /dev/null
+++ b/documentation/Documentation.md
@@ -0,0 +1,56 @@
+---
+title: Documentation
+layout: documentation
+documentation: true
+---
+### Basics of Storm
+
+* [Javadoc](/javadoc/apidocs/index.html)
+* [Concepts](Concepts.html)
+* [Configuration](Configuration.html)
+* [Guaranteeing message processing](Guaranteeing-message-processing.html)
+* [Fault-tolerance](Fault-tolerance.html)
+* [Command line client](Command-line-client.html)
+* [Understanding the parallelism of a Storm topology](Understanding-the-parallelism-of-a-Storm-topology.html)
+* [FAQ](FAQ.html)
+
+### Trident
+
+Trident is an alternative interface to Storm. It provides exactly-once processing, "transactional" datastore persistence, and a set of common stream analytics operations.
+
+* [Trident Tutorial](Trident-tutorial.html)     -- basic concepts and walkthrough
+* [Trident API Overview](Trident-API-Overview.html) -- operations for transforming and orchestrating data
+* [Trident State](Trident-state.html)        -- exactly-once processing and fast, persistent aggregation
+* [Trident spouts](Trident-spouts.html)       -- transactional and non-transactional data intake
+
+### Setup and deploying
+
+* [Setting up a Storm cluster](Setting-up-a-Storm-cluster.html)
+* [Local mode](Local-mode.html)
+* [Troubleshooting](Troubleshooting.html)
+* [Running topologies on a production cluster](Running-topologies-on-a-production-cluster.html)
+* [Building Storm](Maven.html) with Maven
+
+### Intermediate
+
+* [Serialization](Serialization.html)
+* [Common patterns](Common-patterns.html)
+* [Clojure DSL](Clojure-DSL.html)
+* [Using non-JVM languages with Storm](Using-non-JVM-languages-with-Storm.html)
+* [Distributed RPC](Distributed-RPC.html)
+* [Transactional topologies](Transactional-topologies.html)
+* [Kestrel and Storm](Kestrel-and-Storm.html)
+* [Direct groupings](Direct-groupings.html)
+* [Hooks](Hooks.html)
+* [Metrics](Metrics.html)
+* [Lifecycle of a trident tuple]()
+* [UI REST API](ui-rest-api.html)
+* [Logs](Logs.html)
+* [Dynamic Log Level Settings](dynamic-log-level-settings.html)
+* [Dynamic Worker Profiling](dynamic-worker-profiling.html)
+
+### Advanced
+
+* [Defining a non-JVM language DSL for Storm](Defining-a-non-jvm-language-dsl-for-storm.html)
+* [Multilang protocol](Multilang-protocol.html) (how to provide support for another language)
+* [Implementation docs](Implementation-docs.html)
diff --git a/documentation/FAQ.md b/documentation/FAQ.md
index a69862e..a65da1e 100644
--- a/documentation/FAQ.md
+++ b/documentation/FAQ.md
@@ -65,6 +65,10 @@
 
 At time of writing, you can't emit to multiple output streams from Trident -- see [STORM-68](https://issues.apache.org/jira/browse/STORM-68)
 
+### Why am I getting a NotSerializableException/IllegalStateException when my topology is being started up?
+
+Within the Storm lifecycle, the topology is instantiated and then serialized to byte format to be stored in ZooKeeper, prior to the topology being executed. Within this step, if a spout or bolt in the topology has an initialized non-serializable field, serialization will fail. If you need such a field, initialize it in the bolt's `prepare` or the spout's `open` method, which runs after the topology is delivered to the worker.
+
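A minimal, Storm-free sketch of the same principle: plain `java.io` serialization stands in for Storm's actual shipping of the topology, and the class and field names are illustrative. Marking the non-serializable field `transient` and creating it only after deserialization keeps the enclosing object serializable:

```java
import java.io.*;

// Sketch: a "bolt"-like class whose non-serializable resource (a Thread
// stands in for e.g. a database client) is created in prepare(), not in
// the constructor, so Java serialization of the object succeeds.
public class PrepareInitExample implements Serializable {
    private static final long serialVersionUID = 1L;

    // transient + lazily initialized: excluded from the serialized form;
    // a non-transient Thread field would cause NotSerializableException
    private transient Thread worker;

    public void prepare() {
        // runs on the "worker" after deserialization, like IBolt.prepare()
        worker = new Thread(() -> {});
    }

    public boolean isReady() { return worker != null; }

    public static void main(String[] args) throws Exception {
        PrepareInitExample bolt = new PrepareInitExample();

        // round-trip through Java serialization, as Storm does before
        // shipping the topology to workers
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(bolt);
        oos.flush();
        ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()));
        PrepareInitExample shipped = (PrepareInitExample) in.readObject();

        shipped.prepare();   // initialize the resource on the "worker" side
        System.out.println(shipped.isReady());   // true
    }
}
```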
 ## Spouts
 
 ### What is a coordinator, and why are there several?
@@ -112,7 +116,7 @@
 
 ### How do I aggregate events by time?
 
-If have records with an immutable timestamp, and you would like to count, average or otherwise aggregate them into discrete time buckets, Trident is an excellent and scalable solution.
+If you have records with an immutable timestamp, and you would like to count, average or otherwise aggregate them into discrete time buckets, Trident is an excellent and scalable solution.
 
 Write an `Each` function that turns the timestamp into a time bucket: if the bucket size was "by hour", then the timestamp `2013-08-08 12:34:56` would be mapped to the `2013-08-08 12:00:00` time bucket, and so would everything else in the twelve o'clock hour. Then group on that timebucket and use a grouped persistentAggregate. The persistentAggregate uses a local cacheMap backed by a data store. Groups with many records require very few reads from the data store, and use efficient bulk reads and writes; as long as your data feed is relatively prompt Trident will make very efficient use of memory and network. Even if a server drops off line for a day, then delivers that full day's worth of data in a rush, the old results will be calmly retrieved and updated -- and without interfering with calculating the current results.
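The bucketing step itself can be sketched in plain Java (the class and method names are illustrative; in Trident this logic would sit inside the `Each` function):

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

// Sketch: map an event timestamp to its hourly time bucket -- the same
// mapping the `Each` function applies before groupBy/persistentAggregate.
public class HourBucket {
    public static Instant hourBucket(Instant ts) {
        // floor to the start of the hour: 12:34:56 -> 12:00:00
        return ts.truncatedTo(ChronoUnit.HOURS);
    }

    public static void main(String[] args) {
        Instant ts = Instant.parse("2013-08-08T12:34:56Z");
        System.out.println(hourBucket(ts)); // 2013-08-08T12:00:00Z
    }
}
```

Every event in the twelve o'clock hour maps to the same bucket value, so grouping on the bucket field collects them into one aggregate.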
 
diff --git a/documentation/Installing-native-dependencies.md b/documentation/Installing-native-dependencies.md
deleted file mode 100644
index 3207b8e..0000000
--- a/documentation/Installing-native-dependencies.md
+++ /dev/null
@@ -1,38 +0,0 @@
----
-layout: documentation
----
-The native dependencies are only needed on actual Storm clusters. When running Storm in local mode, Storm uses a pure Java messaging system so that you don't need to install native dependencies on your development machine.
-
-Installing ZeroMQ and JZMQ is usually straightforward. Sometimes, however, people run into issues with autoconf and get strange errors. If you run into any issues, please email the [Storm mailing list](http://groups.google.com/group/storm-user) or come get help in the #storm-user room on freenode. 
-
-Storm has been tested with ZeroMQ 2.1.7, and this is the recommended ZeroMQ release that you install. You can download a ZeroMQ release [here](http://download.zeromq.org/). Installing ZeroMQ should look something like this:
-
-```
-wget http://download.zeromq.org/zeromq-2.1.7.tar.gz
-tar -xzf zeromq-2.1.7.tar.gz
-cd zeromq-2.1.7
-./configure
-make
-sudo make install
-```
-
-JZMQ is the Java bindings for ZeroMQ. JZMQ doesn't have any releases (we're working with them on that), so there is risk of a regression if you always install from the master branch. To prevent a regression from happening, you should instead install from [this fork](http://github.com/nathanmarz/jzmq) which is tested to work with Storm. Installing JZMQ should look something like this:
-
-```
-#install jzmq
-git clone https://github.com/nathanmarz/jzmq.git
-cd jzmq
-./autogen.sh
-./configure
-make
-sudo make install
-```
-
-To get the JZMQ build to work, you may need to do one or all of the following:
-
-1. Set JAVA_HOME environment variable appropriately
-2. Install Java dev package (more info [here](http://codeslinger.posterous.com/getting-zeromq-and-jzmq-running-on-mac-os-x) for Mac OSX users)
-3. Upgrade autoconf on your machine
-4. Follow the instructions in [this blog post](http://blog.pmorelli.com/getting-zeromq-and-jzmq-running-on-mac-os-x)
-
-If you run into any errors when running `./configure`, [this thread](http://stackoverflow.com/questions/3522248/how-do-i-compile-jzmq-for-zeromq-on-osx) may provide a solution.
\ No newline at end of file
diff --git a/documentation/Kestrel-and-Storm.md b/documentation/Kestrel-and-Storm.md
index d079b81..c947004 100644
--- a/documentation/Kestrel-and-Storm.md
+++ b/documentation/Kestrel-and-Storm.md
@@ -7,7 +7,7 @@
 
 ## Preliminaries
 ### Storm
-This tutorial uses examples from the [storm-kestrel](https://github.com/nathanmarz/storm-kestrel) project and the [storm-starter](http://github.com/apache/storm/blob/master/examples/storm-starter) project. It's recommended that you clone those projects and follow along with the examples. Read [Setting up development environment](https://github.com/apache/storm/wiki/Setting-up-development-environment) and [Creating a new Storm project](https://github.com/apache/storm/wiki/Creating-a-new-Storm-project) to get your machine set up.
+This tutorial uses examples from the [storm-kestrel](https://github.com/nathanmarz/storm-kestrel) project and the [storm-starter](http://github.com/apache/storm/blob/master/examples/storm-starter) project. It's recommended that you clone those projects and follow along with the examples. Read [Setting up development environment](Setting-up-development-environment.html) and [Creating a new Storm project](Creating-a-new-Storm-project.html) to get your machine set up.
 ### Kestrel
 It assumes you are able to run locally a Kestrel server as described [here](https://github.com/nathanmarz/storm-kestrel).
 
@@ -197,4 +197,4 @@
 
 Now execute the program to add items to the queue and launch the Storm topology. The order in which you launch the programs is of no importance.
 
-If you run the topology with TOPOLOGY_DEBUG you should see tuples being emitted in the topology.
\ No newline at end of file
+If you run the topology with TOPOLOGY_DEBUG you should see tuples being emitted in the topology.
diff --git a/documentation/Logs.md b/documentation/Logs.md
new file mode 100644
index 0000000..28e6693
--- /dev/null
+++ b/documentation/Logs.md
@@ -0,0 +1,30 @@
+---
+title: Storm Logs
+layout: documentation
+documentation: true
+---
+Logs in Storm are essential for tracking the status, operations, error messages and debug information for all the 
+daemons (e.g., nimbus, supervisor, logviewer, drpc, ui, pacemaker) and topologies' workers.
+
+### Location of the Logs
+All the daemon logs are placed under the ${storm.log.dir} directory, which an administrator can set in the system properties
+or in the cluster configuration. By default, ${storm.log.dir} points to ${storm.home}/logs.
+
+All the worker logs are placed under the workers-artifacts directory in a hierarchical manner, e.g.,
+${workers-artifacts}/${topologyId}/${port}/worker.log. Users can set the workers-artifacts directory
+by configuring the variable "storm.workers.artifacts.dir". By default, the workers-artifacts directory
+is located at ${storm.log.dir}/logs/workers-artifacts.
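+
+As a sketch of how these path conventions compose (illustrative only; the class, method, and example paths below are not part of Storm):
+
+```java
+import java.nio.file.Path;
+import java.nio.file.Paths;
+
+// Illustrative helper, not Storm code: assembles a worker log path from the
+// ${workers-artifacts}/${topologyId}/${port}/worker.log convention above.
+public class WorkerLogPath {
+    public static Path workerLog(String workersArtifactsDir, String topologyId, int port) {
+        return Paths.get(workersArtifactsDir, topologyId, String.valueOf(port), "worker.log");
+    }
+
+    public static void main(String[] args) {
+        // The base directory here is an assumed example value, not a default.
+        Path p = workerLog("/var/log/storm/workers-artifacts", "mytopo-1-1446668400", 6700);
+        System.out.println(p);
+    }
+}
+```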
+
+### Using the Storm UI for Log View/Download and Log Search
+Authorized users can view and download daemon and worker logs through the Storm UI.
+
+To improve the debugging of Storm, we provide the Log Search feature.
+Log Search supports searching in a certain log file or in all of a topology's log files:
+
+String search in a log file: In the log page for a worker, a user can search for a certain string, e.g., "Exception", in a particular worker log. This search works for both normal text logs and rolled zip log files. The results display the offset and the matched lines.
+
+![Search in a log](images/search-for-a-single-worker-log.png "Search in a log")
+
+Search in a topology: A user can also search for a string across a certain topology by clicking the magnifying-glass icon at the top right corner of the UI page. The UI then searches, in a distributed way, across all the supervisor nodes for the matched string in all logs for this topology. The search covers either normal text log files or rolled zip log files, depending on whether the "Search archived logs:" box is checked. The matched results are shown on the UI with URL links that direct the user to the corresponding logs on each supervisor node. This feature is very helpful for finding problematic supervisor nodes running this topology.
+
+![Search in a topology](images/search-a-topology.png "Search in a topology")
diff --git a/documentation/Maven.md b/documentation/Maven.md
index 677a987..ffd591d 100644
--- a/documentation/Maven.md
+++ b/documentation/Maven.md
@@ -1,6 +1,7 @@
 ---
 title: Maven
 layout: documentation
+documentation: true
 ---
 To develop topologies, you'll need the Storm jars on your classpath. You should either include the unpacked jars in the classpath for your project or use Maven to include Storm as a development dependency. Storm is hosted on Maven Central. To include Storm in your project as a development dependency, add the following to your pom.xml:
 
diff --git a/documentation/Message-passing-implementation.md b/documentation/Message-passing-implementation.md
index e17fd3f..a17f66a 100644
--- a/documentation/Message-passing-implementation.md
+++ b/documentation/Message-passing-implementation.md
@@ -8,23 +8,23 @@
 This page walks through how emitting and transferring tuples works in Storm.
 
 - Worker is responsible for message transfer
-   - `refresh-connections` is called every "task.refresh.poll.secs" or whenever assignment in ZK changes. It manages connections to other workers and maintains a mapping from task -> worker [code](https://github.com/apache/incubator-storm/blob/0.7.1/src/clj/backtype/storm/daemon/worker.clj#L123)
-   - Provides a "transfer function" that is used by tasks to send tuples to other tasks. The transfer function takes in a task id and a tuple, and it serializes the tuple and puts it onto a "transfer queue". There is a single transfer queue for each worker. [code](https://github.com/apache/incubator-storm/blob/0.7.1/src/clj/backtype/storm/daemon/worker.clj#L56)
-   - The serializer is thread-safe [code](https://github.com/apache/incubator-storm/blob/0.7.1/src/jvm/backtype/storm/serialization/KryoTupleSerializer.java#L26)
-   - The worker has a single thread which drains the transfer queue and sends the messages to other workers [code](https://github.com/apache/incubator-storm/blob/0.7.1/src/clj/backtype/storm/daemon/worker.clj#L185)
-   - Message sending happens through this protocol: [code](https://github.com/apache/incubator-storm/blob/0.7.1/src/clj/backtype/storm/messaging/protocol.clj)
-   - The implementation for distributed mode uses ZeroMQ [code](https://github.com/apache/incubator-storm/blob/0.7.1/src/clj/backtype/storm/messaging/zmq.clj)
-   - The implementation for local mode uses in memory Java queues (so that it's easy to use Storm locally without needing to get ZeroMQ installed) [code](https://github.com/apache/incubator-storm/blob/0.7.1/src/clj/backtype/storm/messaging/local.clj)
+   - `refresh-connections` is called every "task.refresh.poll.secs" or whenever assignment in ZK changes. It manages connections to other workers and maintains a mapping from task -> worker [code](https://github.com/apache/storm/blob/0.7.1/src/clj/backtype/storm/daemon/worker.clj#L123)
+   - Provides a "transfer function" that is used by tasks to send tuples to other tasks. The transfer function takes in a task id and a tuple, and it serializes the tuple and puts it onto a "transfer queue". There is a single transfer queue for each worker. [code](https://github.com/apache/storm/blob/0.7.1/src/clj/backtype/storm/daemon/worker.clj#L56)
+   - The serializer is thread-safe [code](https://github.com/apache/storm/blob/0.7.1/src/jvm/backtype/storm/serialization/KryoTupleSerializer.java#L26)
+   - The worker has a single thread which drains the transfer queue and sends the messages to other workers [code](https://github.com/apache/storm/blob/0.7.1/src/clj/backtype/storm/daemon/worker.clj#L185)
+   - Message sending happens through this protocol: [code](https://github.com/apache/storm/blob/0.7.1/src/clj/backtype/storm/messaging/protocol.clj)
+   - The implementation for distributed mode uses ZeroMQ [code](https://github.com/apache/storm/blob/0.7.1/src/clj/backtype/storm/messaging/zmq.clj)
+   - The implementation for local mode uses in memory Java queues (so that it's easy to use Storm locally without needing to get ZeroMQ installed) [code](https://github.com/apache/storm/blob/0.7.1/src/clj/backtype/storm/messaging/local.clj)
 - Receiving messages in tasks works differently in local mode and distributed mode
-   - In local mode, the tuple is sent directly to an in-memory queue for the receiving task [code](https://github.com/apache/incubator-storm/blob/master/src/clj/backtype/storm/messaging/local.clj#L21)
-   - In distributed mode, each worker listens on a single TCP port for incoming messages and then routes those messages in-memory to tasks. The TCP port is called a "virtual port", because it receives [task id, message] and then routes it to the actual task. [code](https://github.com/apache/incubator-storm/blob/master/src/clj/backtype/storm/daemon/worker.clj#L204)
-      - The virtual port implementation is here: [code](https://github.com/apache/incubator-storm/blob/master/src/clj/zilch/virtual_port.clj)
-      - Tasks listen on an in-memory ZeroMQ port for messages from the virtual port [code](https://github.com/apache/incubator-storm/blob/master/src/clj/backtype/storm/daemon/task.clj#L201)
-        - Bolts listen here: [code](https://github.com/apache/incubator-storm/blob/master/src/clj/backtype/storm/daemon/task.clj#L489)
-        - Spouts listen here: [code](https://github.com/apache/incubator-storm/blob/master/src/clj/backtype/storm/daemon/task.clj#L382)
+   - In local mode, the tuple is sent directly to an in-memory queue for the receiving task [code](https://github.com/apache/storm/blob/0.7.1/src/clj/backtype/storm/messaging/local.clj#L21)
+   - In distributed mode, each worker listens on a single TCP port for incoming messages and then routes those messages in-memory to tasks. The TCP port is called a "virtual port", because it receives [task id, message] and then routes it to the actual task. [code](https://github.com/apache/storm/blob/0.7.1/src/clj/backtype/storm/daemon/worker.clj#L204)
+      - The virtual port implementation is here: [code](https://github.com/apache/storm/blob/0.7.1/src/clj/zilch/virtual_port.clj)
+      - Tasks listen on an in-memory ZeroMQ port for messages from the virtual port [code](https://github.com/apache/storm/blob/0.7.1/src/clj/backtype/storm/daemon/task.clj#L201)
+        - Bolts listen here: [code](https://github.com/apache/storm/blob/0.7.1/src/clj/backtype/storm/daemon/task.clj#L489)
+        - Spouts listen here: [code](https://github.com/apache/storm/blob/0.7.1/src/clj/backtype/storm/daemon/task.clj#L382)
 - Tasks are responsible for message routing. A tuple is emitted either to a direct stream (where the task id is specified) or a regular stream. In direct streams, the message is only sent if that bolt subscribes to that direct stream. In regular streams, the stream grouping functions are used to determine the task ids to send the tuple to.
-  - Tasks have a routing map from {stream id} -> {component id} -> {stream grouping function} [code](https://github.com/apache/incubator-storm/blob/master/src/clj/backtype/storm/daemon/task.clj#L198)
-  - The "tasks-fn" returns the task ids to send the tuples to for either regular stream emit or direct stream emit [code](https://github.com/apache/incubator-storm/blob/master/src/clj/backtype/storm/daemon/task.clj#L207)
+  - Tasks have a routing map from {stream id} -> {component id} -> {stream grouping function} [code](https://github.com/apache/storm/blob/0.7.1/src/clj/backtype/storm/daemon/task.clj#L198)
+  - The "tasks-fn" returns the task ids to send the tuples to for either regular stream emit or direct stream emit [code](https://github.com/apache/storm/blob/0.7.1/src/clj/backtype/storm/daemon/task.clj#L207)
   - After getting the output task ids, bolts and spouts use the transfer-fn provided by the worker to actually transfer the tuples
-      - Bolt transfer code here: [code](https://github.com/apache/incubator-storm/blob/master/src/clj/backtype/storm/daemon/task.clj#L429)
-      - Spout transfer code here: [code](https://github.com/apache/incubator-storm/blob/master/src/clj/backtype/storm/daemon/task.clj#L329)
+      - Bolt transfer code here: [code](https://github.com/apache/storm/blob/0.7.1/src/clj/backtype/storm/daemon/task.clj#L429)
+      - Spout transfer code here: [code](https://github.com/apache/storm/blob/0.7.1/src/clj/backtype/storm/daemon/task.clj#L329)
diff --git a/documentation/Pacemaker.md b/documentation/Pacemaker.md
new file mode 100644
index 0000000..39e3014
--- /dev/null
+++ b/documentation/Pacemaker.md
@@ -0,0 +1,113 @@
+---
+title: Pacemaker
+layout: documentation
+documentation: true
+---
+
+
+### Introduction
+Pacemaker is a Storm daemon designed to process heartbeats from workers. As Storm is scaled up, ZooKeeper begins to become a bottleneck due to the high volume of writes from worker heartbeats: as ZooKeeper tries to maintain consistency, it generates many writes to disk and considerable traffic across the network.
+
+Because heartbeats are of an ephemeral nature, they do not need to be persisted to disk or synced across nodes; an in-memory store will do. This is the role of Pacemaker. Pacemaker functions as a simple in-memory key/value store with ZooKeeper-like, directory-style keys and byte array values.
+
+The corresponding Pacemaker client is a plugin for the `ClusterState` interface, `org.apache.storm.pacemaker.pacemaker_state_factory`. Heartbeat calls are funneled by the `ClusterState` produced by `pacemaker_state_factory` into the Pacemaker daemon, while other set/get operations are forwarded to ZooKeeper.
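+
+Conceptually, the store can be pictured as little more than a concurrent map from directory-style string keys to byte arrays. A purely illustrative sketch, not the actual implementation (class and method names are invented):
+
+```java
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+
+// Illustrative only: a Pacemaker-style in-memory key/value store with
+// directory-style String keys and byte[] values. Nothing is persisted to
+// disk or replicated, which is why it stays cheap for ephemeral heartbeats.
+public class HeartbeatStore {
+    private final Map<String, byte[]> store = new ConcurrentHashMap<>();
+
+    public void setWorkerHb(String path, byte[] heartbeat) {
+        store.put(path, heartbeat);
+    }
+
+    public byte[] getWorkerHb(String path) {
+        return store.get(path);
+    }
+
+    public void deleteWorkerHb(String path) {
+        store.remove(path);
+    }
+}
+```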
+
+------
+
+### Configuration
+
+ - `pacemaker.host` : The host that the Pacemaker daemon is running on
+ - `pacemaker.port` : The port that Pacemaker will listen on
+ - `pacemaker.max.threads` : Maximum number of threads Pacemaker daemon will use to handle requests.
+ - `pacemaker.childopts` : Any JVM parameters that need to go to the Pacemaker. (used by storm-deploy project)
+ - `pacemaker.auth.method` : The authentication method that is used (more info below)
+
+#### Example
+
+To get Pacemaker up and running, set the following option in the cluster config on all nodes:
+```
+storm.cluster.state.store: "org.apache.storm.pacemaker.pacemaker_state_factory"
+```
+
+The Pacemaker host also needs to be set on all nodes:
+```
+pacemaker.host: somehost.mycompany.com
+```
+
+Then start all of your daemons (including Pacemaker):
+```
+$ storm pacemaker
+```
+
+The Storm cluster should now be pushing all worker heartbeats through Pacemaker.
+
+### Security
+
+Currently digest (password-based) and Kerberos security are supported. Security currently applies only to reads, not writes: writes may be performed by anyone, whereas reads may only be performed by authorized and authenticated users. This is an area for future development, as it leaves the cluster open to DoS attacks, but it prevents any sensitive information from reaching unauthorized eyes, which was the main goal.
+
+#### Digest
+To configure digest authentication, set `pacemaker.auth.method: DIGEST` in the cluster config on the nodes hosting Nimbus and Pacemaker.
+The nodes must also have `java.security.auth.login.config` set to point to a JAAS config file containing the following structure:
+```
+PacemakerDigest {
+    username="some username"
+    password="some password";
+};
+```
+
+Any node with these settings configured will be able to read from Pacemaker.
+Worker nodes need not have these configs set, and may keep `pacemaker.auth.method: NONE` set, since they do not need to read from the Pacemaker daemon.
+
+#### Kerberos
+To configure Kerberos authentication, set `pacemaker.auth.method: KERBEROS` in the cluster config on the nodes hosting Nimbus and Pacemaker.
+The nodes must also have `java.security.auth.login.config` set to point to a JAAS config.
+
+The JAAS config on Nimbus must look something like this:
+```
+PacemakerClient {
+    com.sun.security.auth.module.Krb5LoginModule required
+    useKeyTab=true
+    keyTab="/etc/keytabs/nimbus.keytab"
+    storeKey=true
+    useTicketCache=false
+    serviceName="pacemaker"
+    principal="nimbus@MY.COMPANY.COM";
+};
+```
+
+The JAAS config on Pacemaker must look something like this:
+```
+PacemakerServer {
+   com.sun.security.auth.module.Krb5LoginModule required
+   useKeyTab=true
+   keyTab="/etc/keytabs/pacemaker.keytab"
+   storeKey=true
+   useTicketCache=false
+   principal="pacemaker@MY.COMPANY.COM";
+};
+```
+
+ - The client's user principal in the `PacemakerClient` section on the Nimbus host must match the `nimbus.daemon.user` storm cluster config value.
+ - The client's `serviceName` value must match the server's user principal in the `PacemakerServer` section on the Pacemaker host.
+
+
+### Fault Tolerance
+
+Pacemaker runs as a single daemon instance, making it a potential Single Point of Failure.
+
+If Pacemaker becomes unreachable by Nimbus, through crash or network partition, the workers will continue to run, and Nimbus will repeatedly attempt to reconnect. Nimbus functionality will be disrupted, but the topologies themselves will continue to run.
+If the cluster partitions such that Nimbus and Pacemaker are on the same side of the partition, the workers on the other side will not be able to heartbeat, and Nimbus will reschedule their tasks elsewhere. This is probably the desired behavior anyway.
+
+
+### ZooKeeper Comparison
+Compared to ZooKeeper, Pacemaker uses less CPU, less memory, and of course no disk for the same load, thanks to the lack of overhead from maintaining consistency between nodes.
+On Gigabit networking, there is a theoretical limit of about 6000 nodes. However, the real limit is likely around 2000-3000 nodes. These limits have not yet been tested.
+On a 270 supervisor cluster, fully scheduled with topologies, Pacemaker resource utilization was 70% of one core and nearly 1GiB of RAM on a machine with four `Intel(R) Xeon(R) CPU E5530 @ 2.40GHz` processors and 24GiB of RAM.
+
+
+There is an easy route to HA for Pacemaker. Unlike ZooKeeper, Pacemaker should be able to scale horizontally without overhead. By contrast, with ZooKeeper, there are diminishing returns when adding ZK nodes.
+
+In short, a single Pacemaker node should be able to handle many times the load that a ZooKeeper cluster can, and future HA work allowing horizontal scaling will increase that even further.
diff --git a/documentation/Resource_Aware_Scheduler_overview.md b/documentation/Resource_Aware_Scheduler_overview.md
new file mode 100644
index 0000000..ed5fe66
--- /dev/null
+++ b/documentation/Resource_Aware_Scheduler_overview.md
@@ -0,0 +1,232 @@
+---
+title: Resource Aware Scheduler
+layout: documentation
+documentation: true
+---
+# Introduction
+
+The purpose of this document is to provide a description of the Resource Aware Scheduler for the Storm distributed real-time computation system.  This document provides a high-level description of the Resource Aware Scheduler in Storm and an overview of its API.
+
+## Using Resource Aware Scheduler
+
+The user can switch to using the Resource Aware Scheduler by setting the following in *conf/storm.yaml*
+
+    storm.scheduler: "backtype.storm.scheduler.resource.ResourceAwareScheduler"
+
+
+## API Overview
+
+For a Storm topology, the user can now specify the amount of resources a topology component (i.e., spout or bolt) needs to run a single instance of the component.  The user can specify the resource requirement for a topology component by using the following API calls.
+
+### Setting Memory Requirement
+
+API to set component memory requirement:
+
+    public T setMemoryLoad(Number onHeap, Number offHeap)
+
+Parameters:
+* Number onHeap – The amount of on heap memory an instance of this component will consume in megabytes
+* Number offHeap – The amount of off heap memory an instance of this component will consume in megabytes
+
+The user also has the option to specify only the on heap memory requirement if the component does not have an off heap memory need.
+
+    public T setMemoryLoad(Number onHeap)
+
+Parameters:
+* Number onHeap – The amount of on heap memory an instance of this component will consume
+
+If no value is provided for offHeap, 0.0 will be used. If no value is provided for onHeap, or if the API is never called for a component, the default value will be used.
+
+Example of Usage:
+
+    SpoutDeclarer s1 = builder.setSpout("word", new TestWordSpout(), 10);
+    s1.setMemoryLoad(1024.0, 512.0);
+    builder.setBolt("exclaim1", new ExclamationBolt(), 3)
+                .shuffleGrouping("word").setMemoryLoad(512.0);
+
+The entire memory requested for this topology is 16.5 GB. That is from 10 spouts with 1GB on heap memory and 0.5 GB off heap memory each and 3 bolts with 0.5 GB on heap memory each.
+
+### Setting CPU Requirement
+
+
+API to set component CPU requirement:
+
+    public T setCPULoad(Double amount)
+
+Parameters:
+* Double amount – The amount of CPU an instance of this component will consume.
+
+Currently, the amount of CPU resources a component requires or is available on a node is represented by a point system. CPU usage is a difficult concept to define. Different CPU architectures perform differently depending on the task at hand. They are so complex that expressing all of that in a single precise portable number is impossible. Instead we take a convention over configuration approach and are primarily concerned with rough level of CPU usage while still providing the possibility to specify amounts more fine grained.
+
+By convention a CPU core typically will get 100 points. If you feel that your processors are more or less powerful you can adjust this accordingly. Heavy tasks that are CPU bound will get 100 points, as they can consume an entire core. Medium tasks should get 50, light tasks 25, and tiny tasks 10. In some cases a task spawns other threads to help with processing; such tasks may need more than 100 points to express the amount of CPU they use. If these conventions are followed, then for the common case of a single-threaded task, the reported Capacity * 100 should be the number of CPU points that the task needs.
+
+Example of Usage:
+
+    SpoutDeclarer s1 = builder.setSpout("word", new TestWordSpout(), 10);
+    s1.setCPULoad(15.0);
+    builder.setBolt("exclaim1", new ExclamationBolt(), 3)
+                .shuffleGrouping("word").setCPULoad(10.0);
+    builder.setBolt("exclaim2", new HeavyBolt(), 1)
+                    .shuffleGrouping("exclaim1").setCPULoad(450.0);
+
+###	Limiting the Heap Size per Worker (JVM) Process
+
+
+    public void setTopologyWorkerMaxHeapSize(Number size)
+
+Parameters:
+* Number size – The memory limit a worker process will be allocated in megabytes
+
+The user can limit the amount of memory resources the resource aware scheduler allocates to a single worker on a per topology basis by using the above API.  This API is in place so that the users can spread executors to multiple workers.  However, spreading executors to multiple workers may increase the communication latency since executors will not be able to use Disruptor Queue for intra-process communication.
+
+Example of Usage:
+
+    Config conf = new Config();
+    conf.setTopologyWorkerMaxHeapSize(512.0);
+
+### Setting Available Resources on Node
+
+A storm administrator can specify node resource availability by modifying the *conf/storm.yaml* file located in the storm home directory of that node.
+
+A Storm administrator can specify how much available memory a node has in megabytes by adding the following to *storm.yaml*:
+
+    supervisor.memory.capacity.mb: [amount<Double>]
+
+A Storm administrator can also specify how much available CPU resources a node has by adding the following to *storm.yaml*:
+
+    supervisor.cpu.capacity: [amount<Double>]
+
+
+Note that the amount the user can specify for the available CPU is represented using the point system discussed earlier.
+
+Example of Usage:
+
+    supervisor.memory.capacity.mb: 20480.0
+    supervisor.cpu.capacity: 100.0
+
+
+### Other Configurations
+
+The user can set some default configurations for the Resource Aware Scheduler in *conf/storm.yaml*:
+
+    //default value if on heap memory requirement is not specified for a component 
+    topology.component.resources.onheap.memory.mb: 128.0
+
+    //default value if off heap memory requirement is not specified for a component 
+    topology.component.resources.offheap.memory.mb: 0.0
+
+    //default value if CPU requirement is not specified for a component 
+    topology.component.cpu.pcore.percent: 10.0
+
+    //default value for the max heap size for a worker  
+    topology.worker.max.heap.size.mb: 768.0
+
+# Topology Priorities and Per User Resource 
+
+The Resource Aware Scheduler, or RAS, also has multitenant capabilities, since many Storm users typically share a Storm cluster.  The Resource Aware Scheduler can allocate resources on a per user basis.  Each user can be guaranteed a certain amount of resources to run his or her topologies, and the Resource Aware Scheduler will meet those guarantees when possible.  When the Storm cluster has extra free resources, the Resource Aware Scheduler will be able to allocate additional resources to users in a fair manner.  The importance of topologies can also vary: topologies can be used for actual production or just experimentation, so the Resource Aware Scheduler takes the importance of a topology into account when determining the order in which to schedule topologies or when to evict topologies.
+
+## Setup
+
+The resource guarantees of a user can be specified in *conf/user-resource-pools.yaml*.  Specify the resource guarantees of a user in the following format:
+
+    resource.aware.scheduler.user.pools:
+        [UserId]:
+            cpu: [Amount of Guaranteed CPU Resources]
+            memory: [Amount of Guaranteed Memory Resources]
+
+An example of what *user-resource-pools.yaml* can look like:
+
+    resource.aware.scheduler.user.pools:
+        jerry:
+            cpu: 1000
+            memory: 8192.0
+        derek:
+            cpu: 10000.0
+            memory: 32768
+        bobby:
+            cpu: 5000.0
+            memory: 16384.0
+
+Please note that the specified amounts of guaranteed CPU and memory can be either an integer or a double.
+
+## API Overview
+### Specifying Topology Priority
+Topology priorities can range from 0-29.  Topology priorities will be partitioned into several priority levels that may contain a range of priorities.
+For example we can create a priority level mapping:
+
+    PRODUCTION => 0 – 9
+    STAGING => 10 – 19
+    DEV => 20 – 29
+
+Thus, each priority level contains 10 sub priorities. Users can set the priority level of a topology by using the following API:
+
+    conf.setTopologyPriority(int priority)
+    
+Parameters:
+* priority – an integer representing the priority of the topology
+
+Please note that the 0-29 range is not a hard limit.  Thus, a user can set a priority number that is higher than 29. However, the property that a higher priority number means lower importance still holds.
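+
+To illustrate the example mapping above, a helper like the following (purely illustrative, not part of Storm's API) resolves the level a priority number falls into:
+
+```java
+// Illustrative helper for the example PRODUCTION/STAGING/DEV mapping above.
+// Remember: a lower number means higher importance, and values above 29 are
+// allowed; anything past the STAGING band falls into DEV here.
+public class PriorityLevels {
+    public static String levelOf(int priority) {
+        if (priority <= 9) return "PRODUCTION";   // e.g. conf.setTopologyPriority(0)
+        if (priority <= 19) return "STAGING";
+        return "DEV";
+    }
+}
+```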
+
+### Specifying Scheduling Strategy:
+
+A user can specify on a per topology basis what scheduling strategy to use.  Users can implement the IStrategy interface and define new strategies to schedule specific topologies.  This pluggable interface was created because different topologies may have different scheduling needs.  A user can set the topology strategy within the topology definition by using the API:
+
+    public void setTopologyStrategy(Class<? extends IStrategy> clazz)
+    
+Parameters:
+* clazz – The strategy class that implements the IStrategy interface
+
+Example Usage:
+
+    conf.setTopologyStrategy(backtype.storm.scheduler.resource.strategies.scheduling.DefaultResourceAwareStrategy.class);
+
+A default scheduling strategy is provided.  The DefaultResourceAwareStrategy is implemented based on the scheduling algorithm in the original paper describing resource aware scheduling in Storm:
+
+http://web.engr.illinois.edu/~bpeng/files/r-storm.pdf
+
+### Specifying Topology Prioritization Strategy
+
+The order of scheduling is a pluggable interface in which a user could define a strategy that prioritizes topologies.  For a user to define his or her own prioritization strategy, he or she needs to implement the ISchedulingPriorityStrategy interface.  A user can set the scheduling priority strategy by setting the *Config.RESOURCE_AWARE_SCHEDULER_PRIORITY_STRATEGY* to point to the class that implements the strategy. For instance:
+
+    resource.aware.scheduler.priority.strategy: "backtype.storm.scheduler.resource.strategies.priority.DefaultSchedulingPriorityStrategy"
+    
+A default strategy is provided.  The following explains how the default scheduling priority strategy works.
+
+**DefaultSchedulingPriorityStrategy**
+
+The order of scheduling should be based on the distance between a user’s current resource allocation and his or her guaranteed allocation.  We should prioritize the users who are the furthest away from their resource guarantee. The difficulty of this problem is that a user may have multiple resource guarantees, and another user can have another set of resource guarantees, so how can we compare them in a fair manner?  Let's use the average percentage of resource guarantees satisfied as a method of comparison.
+
+For example:
+
+|User|Resource Guarantee|Resource Allocated|
+|----|------------------|------------------|
+|A|<10 CPU, 50GB>|<2 CPU, 40 GB>|
+|B|< 20 CPU, 25GB>|<15 CPU, 10 GB>|
+
+User A’s average percentage satisfied of resource guarantee: 
+
+(2/10+40/50)/2  = 0.5
+
+User B’s average percentage satisfied of resource guarantee: 
+
+(15/20+10/25)/2  = 0.575
+
+In this example, User A has a smaller average percentage of his or her resource guarantee satisfied than User B.  Thus, User A should get priority to be allocated more resources, i.e., a topology submitted by User A should be scheduled first.
+
+When scheduling, RAS sorts users by the average percentage of resource guarantees satisfied and schedules topologies from users in that order, starting with the users with the lowest average percentage satisfied.  When a user's resource guarantee is completely satisfied, the user's average percentage satisfied will be greater than or equal to 1.
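+
+The comparison above can be sketched as follows (illustrative only; the class and method are invented, not Storm's actual code):
+
+```java
+// Illustrative sketch of the DefaultSchedulingPriorityStrategy comparison
+// described above. For user A: (2/10 + 40/50) / 2 = 0.5.
+public class GuaranteeSatisfaction {
+    // Average fraction of resource guarantees satisfied. allocated[i] and
+    // guaranteed[i] refer to the same resource (e.g. index 0 = CPU, 1 = memory).
+    public static double avgSatisfied(double[] allocated, double[] guaranteed) {
+        double sum = 0.0;
+        for (int i = 0; i < guaranteed.length; i++) {
+            sum += allocated[i] / guaranteed[i];
+        }
+        return sum / guaranteed.length;
+    }
+}
+```
+
+RAS would then schedule topologies from the user with the lowest value first.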
+
+### Specifying Eviction Strategy
+The eviction strategy is used when there are not enough free resources in the cluster to schedule new topologies. If the cluster is full, we need a mechanism to evict topologies so that user resource guarantees can be met and additional resource can be shared fairly among users. The strategy for evicting topologies is also a pluggable interface in which the user can implement his or her own topology eviction strategy.  For a user to implement his or her own eviction strategy, he or she needs to implement the IEvictionStrategy Interface and set *Config.RESOURCE_AWARE_SCHEDULER_EVICTION_STRATEGY* to point to the implemented strategy class. For instance:
+
+    resource.aware.scheduler.eviction.strategy: "backtype.storm.scheduler.resource.strategies.eviction.DefaultEvictionStrategy"
+
+A default eviction strategy is provided.  The following explains how the default topology eviction strategy works.
+
+**DefaultEvictionStrategy**
+
+
+To determine whether topology eviction should occur, we take into account the priority of the topology that we are trying to schedule and whether the resource guarantees for the owner of that topology have been met.
+
+We should never evict a topology from a user that does not have his or her resource guarantees satisfied.  The following flow chart should describe the logic for the eviction process.
+
+![Default eviction strategy flow chart](images/resource_aware_scheduler_default_eviction_strategy.svg)
\ No newline at end of file
diff --git a/documentation/Serialization.md b/documentation/Serialization.md
index fb86161..2d3488e 100644
--- a/documentation/Serialization.md
+++ b/documentation/Serialization.md
@@ -7,7 +7,7 @@
 
 Tuples can be comprised of objects of any types. Since Storm is a distributed system, it needs to know how to serialize and deserialize objects when they're passed between tasks.
 
-Storm uses [Kryo](http://code.google.com/p/kryo/) for serialization. Kryo is a flexible and fast serialization library that produces small serializations.
+Storm uses [Kryo](https://github.com/EsotericSoftware/kryo) for serialization. Kryo is a flexible and fast serialization library that produces small serializations.
 
 By default, Storm can serialize primitive types, strings, byte arrays, ArrayList, HashMap, HashSet, and the Clojure collection types. If you want to use another type in your tuples, you'll need to register a custom serializer.
 
@@ -23,12 +23,12 @@
 
 ### Custom serialization
 
-As mentioned, Storm uses Kryo for serialization. To implement custom serializers, you need to register new serializers with Kryo. It's highly recommended that you read over [Kryo's home page](http://code.google.com/p/kryo/) to understand how it handles custom serialization.
+As mentioned, Storm uses Kryo for serialization. To implement custom serializers, you need to register new serializers with Kryo. It's highly recommended that you read over [Kryo's home page](https://github.com/EsotericSoftware/kryo) to understand how it handles custom serialization.
 
 Adding custom serializers is done through the "topology.kryo.register" property in your topology config. It takes a list of registrations, where each registration can take one of two forms:
 
 1. The name of a class to register. In this case, Storm will use Kryo's `FieldsSerializer` to serialize the class. This may or may not be optimal for the class -- see the Kryo docs for more details.
-2. A map from the name of a class to register to an implementation of [com.esotericsoftware.kryo.Serializer](http://code.google.com/p/kryo/source/browse/trunk/src/com/esotericsoftware/kryo/Serializer.java).
+2. A map from the name of a class to register to an implementation of [com.esotericsoftware.kryo.Serializer](https://github.com/EsotericSoftware/kryo/blob/master/src/com/esotericsoftware/kryo/Serializer.java).
 
 Let's look at an example.
 
@@ -59,4 +59,4 @@
 
 When a topology is submitted, a single set of serializations is chosen to be used by all components in the topology for sending messages. This is done by merging the component-specific serializer registrations with the regular set of serialization registrations. If two components define serializers for the same class, one of the serializers is chosen arbitrarily.
 
-To force a serializer for a particular class if there's a conflict between two component-specific registrations, just define the serializer you want to use in the topology-specific configuration. The topology-specific configuration has precedence over component-specific configurations for serialization registrations.
\ No newline at end of file
+To force a serializer for a particular class if there's a conflict between two component-specific registrations, just define the serializer you want to use in the topology-specific configuration. The topology-specific configuration has precedence over component-specific configurations for serialization registrations.
diff --git a/documentation/Setting-up-a-Storm-cluster.md b/documentation/Setting-up-a-Storm-cluster.md
index 07b4eda..b31cb17 100644
--- a/documentation/Setting-up-a-Storm-cluster.md
+++ b/documentation/Setting-up-a-Storm-cluster.md
@@ -52,17 +52,25 @@
 
 If the port that your Zookeeper cluster uses is different than the default, you should set **storm.zookeeper.port** as well.
 
-2) **storm.local.dir**: The Nimbus and Supervisor daemons require a directory on the local disk to store small amounts of state (like jars, confs, and things like that). You should create that directory on each machine, give it proper permissions, and then fill in the directory location using this config. For example:
+2) **storm.local.dir**: The Nimbus and Supervisor daemons require a directory on the local disk to store small amounts of state (like jars, confs, and things like that).
+ You should create that directory on each machine, give it proper permissions, and then fill in the directory location using this config. For example:
 
 ```yaml
 storm.local.dir: "/mnt/storm"
 ```
+If you run Storm on Windows, it could be:
+```yaml
+storm.local.dir: "C:\\storm-local"
+```
+If you use a relative path, it will be relative to where you installed Storm (STORM_HOME).
+You can leave it empty; it defaults to `$STORM_HOME/storm-local`.
 
-3) **nimbus.host**: The worker nodes need to know which machine is the master in order to download topology jars and confs. For example:
+3) **nimbus.seeds**: The worker nodes need to know which machines are the candidates for master in order to download topology jars and confs. For example:
 
 ```yaml
-nimbus.host: "111.222.333.44"
+nimbus.seeds: ["111.222.333.44"]
 ```
+You're encouraged to fill in this value with a list of **machine FQDNs**. If you want to set up Nimbus H/A, you must list the FQDN of every machine that runs Nimbus. You may want to leave it at the default value when you just want to set up a 'pseudo-distributed' cluster, but you're still encouraged to use FQDNs.
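For example, a Nimbus H/A setup listing every Nimbus machine's FQDN might look like this (hostnames are placeholders):

```yaml
nimbus.seeds: ["nimbus1.example.com", "nimbus2.example.com", "nimbus3.example.com"]
```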
 
 4) **supervisor.slots.ports**: For each worker machine, you configure how many workers run on that machine with this config. Each worker uses a single port for receiving messages, and this setting defines which ports are open for use. If you define five ports here, then Storm will allocate up to five workers to run on this machine. If you define three ports, Storm will only run up to three. By default, this setting is configured to run 4 workers on the ports 6700, 6701, 6702, and 6703. For example:
 
@@ -74,6 +82,25 @@
     - 6703
 ```
 
+### Monitoring Health of Supervisors
+
+Storm provides a mechanism by which administrators can configure the supervisor to periodically run administrator-supplied scripts to determine if a node is healthy or not. Administrators can have the supervisor determine if the node is in a healthy state by performing any checks of their choice in scripts located in `storm.health.check.dir`. If a script detects the node to be in an unhealthy state, it must print a line to standard output beginning with the string ERROR. The supervisor will periodically run the scripts in the health check dir and check the output. If a script's output contains the string ERROR, as described above, the supervisor will shut down any workers and exit.
+
+If the supervisor is running under supervision, "/bin/storm node-health-check" can be called to determine if the supervisor should be launched or if the node is unhealthy.
+
+The health check directory location can be configured with:
+
+```yaml
+storm.health.check.dir: "healthchecks"
+```
+
+The scripts must have execute permissions.
+The time to allow any given health check script to run before it is marked as failed due to timeout can be configured with:
+
+```yaml
+storm.health.check.timeout.ms: 5000
+```
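A health check script is simply anything executable that prints a line starting with ERROR when the node is unhealthy. A minimal sketch, assuming a disk-usage check (the threshold, path, and function name are illustrative, not Storm defaults):

```shell
#!/bin/sh
# Minimal health-check sketch: report the node unhealthy when disk usage
# exceeds a threshold. Threshold and checked value are illustrative.
check_disk() {
  usage_pct=$1
  if [ "$usage_pct" -gt 95 ]; then
    echo "ERROR: disk usage at ${usage_pct}%"
  fi
}

# In a real script you would derive the value from the system, e.g.:
#   check_disk "$(df /mnt/storm | awk 'NR==2 {gsub("%",""); print $5}')"
check_disk 42
```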
+
 ### Configure external libraries and environmental variables (optional)
 
 If you need support from external libraries or custom plugins, you can place such jars into the extlib/ and extlib-daemon/ directories. Note that the extlib-daemon/ directory stores jars used only by daemons (Nimbus, Supervisor, DRPC, UI, Logviewer), e.g., HDFS and customized scheduling libraries. Accordingly, two environmental variables STORM_EXT_CLASSPATH and STORM_EXT_CLASSPATH_DAEMON can be configured by users for including the external classpath and daemon-only external classpath.
@@ -85,6 +112,6 @@
 
 1. **Nimbus**: Run the command "bin/storm nimbus" under supervision on the master machine.
 2. **Supervisor**: Run the command "bin/storm supervisor" under supervision on each worker machine. The supervisor daemon is responsible for starting and stopping worker processes on that machine.
-3. **UI**: Run the Storm UI (a site you can access from the browser that gives diagnostics on the cluster and topologies) by running the command "bin/storm ui" under supervision. The UI can be accessed by navigating your web browser to http://{nimbus host}:8080. 
+3. **UI**: Run the Storm UI (a site you can access from the browser that gives diagnostics on the cluster and topologies) by running the command "bin/storm ui" under supervision. The UI can be accessed by navigating your web browser to http://{ui host}:8080. 
 
 As you can see, running the daemons is very straightforward. The daemons will log to the logs/ directory in wherever you extracted the Storm release.
diff --git a/documentation/Setting-up-development-environment.md b/documentation/Setting-up-development-environment.md
index fa450be..bfa98a2 100644
--- a/documentation/Setting-up-development-environment.md
+++ b/documentation/Setting-up-development-environment.md
@@ -29,13 +29,5 @@
 The previous step installed the `storm` client on your machine which is used to communicate with remote Storm clusters. Now all you have to do is tell the client which Storm cluster to talk to. To do this, all you have to do is put the host address of the master in the `~/.storm/storm.yaml` file. It should look something like this:
 
 ```
-nimbus.host: "123.45.678.890"
+nimbus.seeds: ["123.45.678.890"]
 ```
-
-Alternatively, if you use the [storm-deploy](https://github.com/nathanmarz/storm-deploy) project to provision Storm clusters on AWS, it will automatically set up your ~/.storm/storm.yaml file. You can manually attach to a Storm cluster (or switch between multiple clusters) using the "attach" command, like so:
-
-```
-lein run :deploy --attach --name mystormcluster
-```
-
-More information is on the storm-deploy [wiki](https://github.com/nathanmarz/storm-deploy/wiki)
\ No newline at end of file
diff --git a/documentation/State-checkpointing.md b/documentation/State-checkpointing.md
new file mode 100644
index 0000000..c7a81f5
--- /dev/null
+++ b/documentation/State-checkpointing.md
@@ -0,0 +1,152 @@
+---
+title: Storm State Management
+layout: documentation
+documentation: true
+---
+# State support in core storm
+Storm core has abstractions for bolts to save and retrieve the state of their operations. There is a default in-memory
+based state implementation and also a Redis backed implementation that provides state persistence.
+
+## State management
+Bolts that require their state to be managed and persisted by the framework should implement the `IStatefulBolt` interface or
+extend `BaseStatefulBolt` and implement the `void initState(T state)` method. The `initState` method is invoked by the framework
+during bolt initialization with the previously saved state of the bolt. It is invoked after `prepare` but before the bolt starts
+processing any tuples.
+
+Currently the only kind of `State` implementation that is supported is `KeyValueState` which provides key-value mapping.
+
+For example, a word count bolt could use the key-value state abstraction for the word counts as follows:
+
+1. Extend `BaseStatefulBolt` and parameterize it with `KeyValueState<String, Long>`, which stores the mapping of word to count.
+2. The bolt gets initialized with its previously saved state in the `initState` method. This will contain the word counts
+last committed by the framework during the previous run.
+3. In the execute method, update the word count.
+
+ ```java
+ public class WordCountBolt extends BaseStatefulBolt<KeyValueState<String, Long>> {
+ private KeyValueState<String, Long> wordCounts;
+ private OutputCollector collector;
+ ...
+     @Override
+     public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
+       this.collector = collector;
+     }
+     @Override
+     public void initState(KeyValueState<String, Long> state) {
+       wordCounts = state;
+     }
+     @Override
+     public void execute(Tuple tuple) {
+       String word = tuple.getString(0);
+       Long count = wordCounts.get(word, 0L);
+       count++;
+       wordCounts.put(word, count);
+       collector.emit(tuple, new Values(word, count));
+     }
+ ...
+ }
+ ```
+4. The framework periodically checkpoints the state of the bolt (by default, every second). The frequency
+can be changed by setting the storm config `topology.state.checkpoint.interval.ms`.
+5. For state persistence, use a state provider that supports persistence by setting the `topology.state.provider` in the
+storm config. E.g. for using the Redis based key-value state implementation, set `topology.state.provider: org.apache.storm.redis.state.RedisKeyValueStateProvider`
+in storm.yaml. The provider implementation jar should be in the class path, which in this case means putting the `storm-redis-*.jar`
+in the extlib directory.
+6. The state provider properties can be overridden by setting `topology.state.provider.config`. For Redis state this is a
+JSON config with the following properties:
+
+ ```
+ {
+   "keyClass": "Optional fully qualified class name of the Key type.",
+   "valueClass": "Optional fully qualified class name of the Value type.",
+   "keySerializerClass": "Optional Key serializer implementation class.",
+   "valueSerializerClass": "Optional Value Serializer implementation class.",
+   "jedisPoolConfig": {
+     "host": "localhost",
+     "port": 6379,
+     "timeout": 2000,
+     "database": 0,
+     "password": "xyz"
+     }
+ }
+ ```
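Putting the last two steps together, a storm.yaml fragment for Redis-backed state might look like the following sketch (the host and port values are illustrative):

```yaml
topology.state.provider: "org.apache.storm.redis.state.RedisKeyValueStateProvider"
topology.state.provider.config: '{"jedisPoolConfig": {"host": "localhost", "port": 6379}}'
topology.state.checkpoint.interval.ms: 1000
```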
+
+## Checkpoint mechanism
+Checkpointing is triggered by an internal checkpoint spout at the specified `topology.state.checkpoint.interval.ms`. If there is
+at least one `IStatefulBolt` in the topology, the checkpoint spout is automatically added by the topology builder. For stateful topologies,
+the topology builder wraps the `IStatefulBolt` in a `StatefulBoltExecutor`, which handles the state commits on receiving the checkpoint tuples.
+The non-stateful bolts are wrapped in a `CheckpointTupleForwarder`, which just forwards the checkpoint tuples so that they
+can flow through the topology DAG. The checkpoint tuples flow through a separate internal stream, namely `$checkpoint`. The topology builder
+wires the checkpoint stream across the whole topology, with the checkpoint spout at the root.
+
+```
+              default                         default               default
+[spout1]   ---------------> [statefulbolt1] ----------> [bolt1] --------------> [statefulbolt2]
+                          |                 ---------->         -------------->
+                          |                   ($chpt)               ($chpt)
+                          |
+[$checkpointspout] _______| ($chpt)
+```
+
+At checkpoint intervals, the checkpoint tuples are emitted by the checkpoint spout. On receiving a checkpoint tuple, the state of the bolt
+is saved and then the checkpoint tuple is forwarded to the next component. Each bolt waits for the checkpoint to arrive on all its input
+streams before it saves its state, so that the state represents a consistent state across the topology. Once the checkpoint spout receives
+an ACK from all the bolts, the state commit is complete and the transaction is recorded as committed by the checkpoint spout.
+
+The state commit works like a three-phase commit protocol, with a prepare and a commit phase, so that the state across the topology is saved
+in a consistent and atomic manner.
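The wait-for-all-input-streams behavior described above can be sketched independently of Storm's internals. This is an illustrative sketch; the class and method names are hypothetical, not Storm's actual classes:

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch (not Storm's actual classes) of barrier alignment:
// a bolt saves its state only once a checkpoint tuple has arrived on
// every one of its input streams.
public class BarrierAlignment {
    private final Set<String> inputStreams;
    private final Set<String> pending;

    public BarrierAlignment(Set<String> inputStreams) {
        this.inputStreams = new HashSet<>(inputStreams);
        this.pending = new HashSet<>(inputStreams);
    }

    /** Returns true when it is safe to save state and forward the checkpoint tuple. */
    public boolean onCheckpointTuple(String stream) {
        pending.remove(stream);
        if (pending.isEmpty()) {
            pending.addAll(inputStreams); // re-arm for the next checkpoint interval
            return true;
        }
        return false;
    }
}
```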
+
+### Recovery
+The recovery phase is triggered when the topology is started for the first time. If the previous transaction was not successfully
+prepared, a `rollback` message is sent across the topology so that any prepared transactions held by a bolt can be discarded.
+If the previous transaction was prepared successfully but not committed, a `commit` message is sent across the topology so that
+the prepared transactions can be committed. After these steps are complete, the bolts are initialized with the state.
+
+Recovery is also triggered if one of the bolts fails to acknowledge the checkpoint message or if, say, a worker crashes in
+the middle. Thus when the worker is restarted by the supervisor, the checkpoint mechanism makes sure that the bolt gets
+initialized with its previous state and checkpointing continues from the point where it left off.
+
+### Guarantee
+Storm relies on the acking mechanism to replay tuples in case of failures. It is possible that the state is committed
+but the worker crashes before acking the tuples. In this case the tuples are replayed, causing duplicate state updates.
+Also, currently the `StatefulBoltExecutor` continues to process tuples from a stream after it has received a checkpoint
+tuple on that stream, while waiting for the checkpoint to arrive on its other input streams before saving the state. This can also cause
+duplicate state updates during recovery.
+
+The state abstraction does not eliminate duplicate evaluations and currently provides only an at-least-once guarantee.
+
+### IStateful bolt hooks
+The `IStatefulBolt` interface provides hook methods wherein stateful bolts can implement custom actions.
+```java
+    /**
+     * This is a hook for the component to perform some actions just before the
+     * framework commits its state.
+     */
+    void preCommit(long txid);
+
+    /**
+     * This is a hook for the component to perform some actions just before the
+     * framework prepares its state.
+     */
+    void prePrepare(long txid);
+
+    /**
+     * This is a hook for the component to perform some actions just before the
+     * framework rolls back the prepared state.
+     */
+    void preRollback();
+```
+This is optional, and stateful bolts are not expected to provide any implementation. It is provided so that other
+system-level components can be built on top of the stateful abstractions, where we might want to take some actions before the
+stateful bolt's state is prepared, committed or rolled back.
+
+## Providing custom state implementations
+Currently the only kind of `State` implementation supported is `KeyValueState` which provides key-value mapping.
+
+Custom state implementations should provide implementations for the methods defined in the `org.apache.storm.State` interface:
+`void prepareCommit(long txid)`, `void commit(long txid)` and `rollback()`. The `commit()` method is optional
+and is useful if the bolt manages the state on its own. It is currently used only by internal system bolts,
+e.g. the CheckpointSpout, to save their state.
+
+A `KeyValueState` implementation should also implement the methods defined in the `org.apache.storm.state.KeyValueState` interface.
+
+### State provider
+The framework instantiates the state via the corresponding `StateProvider` implementation. A custom state should also provide
+a `StateProvider` implementation that can load and return the state based on the namespace. Each state belongs to a unique namespace,
+typically unique per task, so that each task can have its own state. The `StateProvider` and the corresponding
+`State` implementation should be available in Storm's class path (by placing them in the extlib directory).
diff --git a/documentation/Structure-of-the-codebase.md b/documentation/Structure-of-the-codebase.md
index 5da6039..11adeeb 100644
--- a/documentation/Structure-of-the-codebase.md
+++ b/documentation/Structure-of-the-codebase.md
@@ -78,7 +78,7 @@
 
 [backtype.storm.hooks](https://github.com/apache/storm/tree/master/storm-core/src/jvm/backtype/storm/hooks): Interfaces for hooking into various events in Storm, such as when tasks emit tuples, when tuples are acked, etc. User guide for hooks is [here](https://github.com/apache/storm/wiki/Hooks).
 
-[backtype.storm.serialization](https://github.com/apache/storm/tree/master/storm-core/src/jvm/backtype/storm/serialization): Implementation of how Storm serializes/deserializes tuples. Built on top of [Kryo](http://code.google.com/p/kryo/).
+[backtype.storm.serialization](https://github.com/apache/storm/tree/master/storm-core/src/jvm/backtype/storm/serialization): Implementation of how Storm serializes/deserializes tuples. Built on top of [Kryo](https://github.com/EsotericSoftware/kryo).
 
 [backtype.storm.spout](https://github.com/apache/storm/tree/master/storm-core/src/jvm/backtype/storm/spout): Definition of spout and associated interfaces (like the `SpoutOutputCollector`). Also contains `ShellSpout` which implements the protocol for defining spouts in non-JVM languages.
 
@@ -139,4 +139,4 @@
 
 [backtype.storm.util](https://github.com/apache/storm/blob/master/storm-core/src/clj/backtype/storm/util.clj): Contains generic utility functions used throughout the code base.
  
-[backtype.storm.zookeeper](https://github.com/apache/storm/blob/master/storm-core/src/clj/backtype/storm/zookeeper.clj): Clojure wrapper around the Zookeeper API and implements some "high-level" stuff like "mkdirs" and "delete-recursive".
\ No newline at end of file
+[backtype.storm.zookeeper](https://github.com/apache/storm/blob/master/storm-core/src/clj/backtype/storm/zookeeper.clj): Clojure wrapper around the Zookeeper API and implements some "high-level" stuff like "mkdirs" and "delete-recursive".
diff --git a/documentation/Trident-API-Overview.md b/documentation/Trident-API-Overview.md
index bc5dcb1..f996f0d 100644
--- a/documentation/Trident-API-Overview.md
+++ b/documentation/Trident-API-Overview.md
@@ -2,6 +2,7 @@
 title: Trident API Overview
 layout: documentation
 documentation: true
+
 ---
 
 The core data model in Trident is the "Stream", processed as a series of batches. A stream is partitioned among the nodes in the cluster, and operations applied to a stream are applied in parallel across each partition.
@@ -77,15 +78,228 @@
 If you ran this code:
 
 ```java
-mystream.each(new Fields("b", "a"), new MyFilter())
+mystream.filter(new MyFilter())
 ```
 
 The resulting tuples would be:
 
 ```
-[2, 1, 1]
+[1, 2, 3]
 ```
 
+### map and flatMap
+
+`map` returns a stream consisting of the results of applying the given mapping function to the tuples of the stream. This
+can be used to apply a one-to-one transformation to the tuples.
+
+For example, if there is a stream of words and you want to convert it to a stream of upper-case words,
+you could define a mapping function as follows:
+
+```java
+public class UpperCase implements MapFunction {
+ @Override
+ public Values execute(TridentTuple input) {
+   return new Values(input.getString(0).toUpperCase());
+ }
+}
+```
+
+The mapping function can then be applied on the stream to produce a stream of upper-case words:
+
+```java
+mystream.map(new UpperCase())
+```
+
+`flatMap` is similar to `map`, but has the effect of applying a one-to-many transformation to the values of the stream,
+and then flattening the resulting elements into a new stream.
+
+For example, if there is a stream of sentences and you want to convert it to a stream of words,
+you could define a flatMap function as follows:
+
+```java
+public class Split implements FlatMapFunction {
+  @Override
+  public Iterable<Values> execute(TridentTuple input) {
+    List<Values> valuesList = new ArrayList<>();
+    for (String word : input.getString(0).split(" ")) {
+      valuesList.add(new Values(word));
+    }
+    return valuesList;
+  }
+}
+```
+
+The flatMap function can then be applied on the stream of sentences to produce a stream of words:
+
+```java
+mystream.flatMap(new Split())
+```
+
+Of course these operations can be chained, so a stream of upper-case words can be obtained from a stream of sentences as follows:
+
+```java
+mystream.flatMap(new Split()).map(new UpperCase())
+```
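The Split-then-UpperCase chain above behaves like the analogous `java.util.stream` pipeline. This plain-Java sketch (not Trident code; the class name is illustrative) shows the same one-to-many-then-one-to-one flow:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Plain-Java analogy of mystream.flatMap(new Split()).map(new UpperCase()):
// split each sentence into words, then uppercase each word.
public class FlatMapThenMap {
    public static List<String> process(List<String> sentences) {
        return sentences.stream()
                .flatMap(s -> Arrays.stream(s.split(" "))) // one-to-many
                .map(String::toUpperCase)                  // one-to-one
                .collect(Collectors.toList());
    }
}
```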
+### peek
+`peek` can be used to perform an additional action on each trident tuple as it flows through the stream.
+This can be useful for debugging, to see the tuples as they flow past a certain point in a pipeline.
+
+For example, the code below would print the result of converting the words to upper case before they are passed to `groupBy`:
+```java
+ mystream.flatMap(new Split()).map(new UpperCase())
+         .peek(new Consumer() {
+                @Override
+                public void accept(TridentTuple input) {
+                  System.out.println(input.getString(0));
+                }
+         })
+         .groupBy(new Fields("word"))
+         .persistentAggregate(new MemoryMapState.Factory(), new Count(), new Fields("count"))
+```
+
+### min and minBy
+`min` and `minBy` operations return the minimum value on each partition of a batch of tuples in a trident stream.
+
+Suppose a trident stream contains the fields ["device-id", "count"] and the following partitions of tuples:
+
+```
+Partition 0:
+[123, 2]
+[113, 54]
+[23,  28]
+[237, 37]
+[12,  23]
+[62,  17]
+[98,  42]
+
+Partition 1:
+[64,  18]
+[72,  54]
+[2,   28]
+[742, 71]
+[98,  45]
+[62,  12]
+[19,  174]
+
+
+Partition 2:
+[27,  94]
+[82,  23]
+[9,   86]
+[53,  71]
+[74,  37]
+[51,  49]
+[37,  98]
+
+```
+
+The `minBy` operation can be applied on the above stream of tuples as shown below, which results in emitting the tuple with the minimum value of the `count` field in each partition.
+
+``` java
+  mystream.minBy(new Fields("count"))
+```
+The result of the above code on the mentioned partitions is:
+
+```
+Partition 0:
+[123, 2]
+
+
+Partition 1:
+[62,  12]
+
+
+Partition 2:
+[82,  23]
+
+```
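As a plain-Java analogy (not Trident code; the class name is illustrative), per-partition `minBy` on the `count` field amounts to:

```java
import java.util.Comparator;
import java.util.List;

// Plain-Java analogy of minBy(new Fields("count")): within one partition,
// pick the tuple (here an int[] of {device-id, count}) with the smallest count.
public class MinBySketch {
    public static int[] minByCount(List<int[]> partition) {
        return partition.stream()
                .min(Comparator.comparingInt(t -> t[1]))
                .orElseThrow(IllegalArgumentException::new); // empty partition
    }
}
```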
+
+You can look at other `min` and `minBy` operations on `Stream`:
+``` java
+      public <T> Stream minBy(String inputFieldName, Comparator<T> comparator) 
+      public Stream min(Comparator<TridentTuple> comparator) 
+```
+The example below shows how these APIs can be used to find the minimum using the respective Comparators on a tuple.
+
+``` java
+
+        FixedBatchSpout spout = new FixedBatchSpout(allFields, 10, Vehicle.generateVehicles(20));
+
+        TridentTopology topology = new TridentTopology();
+        Stream vehiclesStream = topology.newStream("spout1", spout).
+                each(allFields, new Debug("##### vehicles"));
+                
+        Stream slowVehiclesStream =
+                vehiclesStream
+                        .min(new SpeedComparator()) // Comparator w.r.t speed on received tuple.
+                        .each(vehicleField, new Debug("#### slowest vehicle"));
+
+        vehiclesStream
+                .minBy(Vehicle.FIELD_NAME, new EfficiencyComparator()) // Comparator w.r.t efficiency on received tuple.
+                .each(vehicleField, new Debug("#### least efficient vehicle"));
+
+```
+Example applications of these APIs can be located at [TridentMinMaxOfDevicesTopology](https://github.com/apache/storm/blob/master/examples/storm-starter/src/jvm/org/apache/storm/starter/trident/TridentMinMaxOfDevicesTopology.java) and [TridentMinMaxOfVehiclesTopology](https://github.com/apache/storm/blob/master/examples/storm-starter/src/jvm/org/apache/storm/starter/trident/TridentMinMaxOfVehiclesTopology.java) 
+
+### max and maxBy
+`max` and `maxBy` operations return the maximum value on each partition of a batch of tuples in a trident stream.
+
+Suppose a trident stream contains the fields ["device-id", "count"] as mentioned in the above section.
+
+The `max` and `maxBy` operations can be applied on the above stream of tuples as shown below, which results in emitting the tuple with the maximum value of the `count` field in each partition.
+
+``` java
+  mystream.maxBy(new Fields("count"))
+```
+The result of the above code on the mentioned partitions is:
+
+```
+Partition 0:
+[113, 54]
+
+
+Partition 1:
+[19,  174]
+
+
+Partition 2:
+[37,  98]
+
+```
+
+You can look at other `max` and `maxBy` functions on `Stream`:
+
+``` java
+
+      public <T> Stream maxBy(String inputFieldName, Comparator<T> comparator) 
+      public Stream max(Comparator<TridentTuple> comparator) 
+      
+```
+
+The example below shows how these APIs can be used to find the maximum using the respective Comparators on a tuple.
+
+``` java
+
+        FixedBatchSpout spout = new FixedBatchSpout(allFields, 10, Vehicle.generateVehicles(20));
+
+        TridentTopology topology = new TridentTopology();
+        Stream vehiclesStream = topology.newStream("spout1", spout).
+                each(allFields, new Debug("##### vehicles"));
+
+        vehiclesStream
+                .max(new SpeedComparator()) // Comparator w.r.t speed on received tuple.
+                .each(vehicleField, new Debug("#### fastest vehicle"))
+                .project(driverField)
+                .each(driverField, new Debug("##### fastest driver"));
+        
+        vehiclesStream
+                .maxBy(Vehicle.FIELD_NAME, new EfficiencyComparator()) // Comparator w.r.t efficiency on received tuple.
+                .each(vehicleField, new Debug("#### most efficient vehicle"));
+
+```
+
+Example applications of these APIs can be located at [TridentMinMaxOfDevicesTopology](https://github.com/apache/storm/blob/master/examples/storm-starter/src/jvm/org/apache/storm/starter/trident/TridentMinMaxOfDevicesTopology.java) and [TridentMinMaxOfVehiclesTopology](https://github.com/apache/storm/blob/master/examples/storm-starter/src/jvm/org/apache/storm/starter/trident/TridentMinMaxOfVehiclesTopology.java) 
+
 ### partitionAggregate
 
 partitionAggregate runs a function on each partition of a batch of tuples. Unlike functions, the tuples emitted by partitionAggregate replace the input tuples given to it. Consider this example:
diff --git a/documentation/Tutorial.md b/documentation/Tutorial.md
new file mode 100644
index 0000000..0d44177
--- /dev/null
+++ b/documentation/Tutorial.md
@@ -0,0 +1,320 @@
+---
+title: Tutorial
+layout: documentation
+documentation: true
+---
+In this tutorial, you'll learn how to create Storm topologies and deploy them to a Storm cluster. Java will be the main language used, but a few examples will use Python to illustrate Storm's multi-language capabilities.
+
+## Preliminaries
+
+This tutorial uses examples from the [storm-starter](https://github.com/apache/storm/blob/master/examples/storm-starter) project. It's recommended that you clone the project and follow along with the examples. Read [Setting up a development environment](Setting-up-development-environment.html) and [Creating a new Storm project](Creating-a-new-Storm-project.html) to get your machine set up.
+
+## Components of a Storm cluster
+
+A Storm cluster is superficially similar to a Hadoop cluster. Whereas on Hadoop you run "MapReduce jobs", on Storm you run "topologies". "Jobs" and "topologies" themselves are very different -- one key difference is that a MapReduce job eventually finishes, whereas a topology processes messages forever (or until you kill it).
+
+There are two kinds of nodes on a Storm cluster: the master node and the worker nodes. The master node runs a daemon called "Nimbus" that is similar to Hadoop's "JobTracker". Nimbus is responsible for distributing code around the cluster, assigning tasks to machines, and monitoring for failures.
+
+Each worker node runs a daemon called the "Supervisor". The supervisor listens for work assigned to its machine and starts and stops worker processes as necessary based on what Nimbus has assigned to it. Each worker process executes a subset of a topology; a running topology consists of many worker processes spread across many machines.
+
+![Storm cluster](images/storm-cluster.png)
+
+All coordination between Nimbus and the Supervisors is done through a [Zookeeper](http://zookeeper.apache.org/) cluster. Additionally, the Nimbus daemon and Supervisor daemons are fail-fast and stateless; all state is kept in Zookeeper or on local disk. This means you can kill -9 Nimbus or the Supervisors and they'll start back up like nothing happened. This design leads to Storm clusters being incredibly stable.
+
+## Topologies
+
+To do realtime computation on Storm, you create what are called "topologies". A topology is a graph of computation. Each node in a topology contains processing logic, and links between nodes indicate how data should be passed around between nodes.
+
+Running a topology is straightforward. First, you package all your code and dependencies into a single jar. Then, you run a command like the following:
+
+```
+storm jar all-my-code.jar backtype.storm.MyTopology arg1 arg2
+```
+
+This runs the class `backtype.storm.MyTopology` with the arguments `arg1` and `arg2`. The main function of the class defines the topology and submits it to Nimbus. The `storm jar` part takes care of connecting to Nimbus and uploading the jar.
+
+Since topology definitions are just Thrift structs, and Nimbus is a Thrift service, you can create and submit topologies using any programming language. The above example is the easiest way to do it from a JVM-based language. See [Running topologies on a production cluster](Running-topologies-on-a-production-cluster.html) for more information on starting and stopping topologies.
+
+## Streams
+
+The core abstraction in Storm is the "stream". A stream is an unbounded sequence of tuples. Storm provides the primitives for transforming a stream into a new stream in a distributed and reliable way. For example, you may transform a stream of tweets into a stream of trending topics.
+
+The basic primitives Storm provides for doing stream transformations are "spouts" and "bolts". Spouts and bolts have interfaces that you implement to run your application-specific logic.
+
+A spout is a source of streams. For example, a spout may read tuples off of a [Kestrel](http://github.com/nathanmarz/storm-kestrel) queue and emit them as a stream. Or a spout may connect to the Twitter API and emit a stream of tweets.
+
+A bolt consumes any number of input streams, does some processing, and possibly emits new streams. Complex stream transformations, like computing a stream of trending topics from a stream of tweets, require multiple steps and thus multiple bolts. Bolts can do anything from running functions and filtering tuples to doing streaming aggregations, streaming joins, talking to databases, and more.
+
+Networks of spouts and bolts are packaged into a "topology" which is the top-level abstraction that you submit to Storm clusters for execution. A topology is a graph of stream transformations where each node is a spout or bolt. Edges in the graph indicate which bolts are subscribing to which streams. When a spout or bolt emits a tuple to a stream, it sends the tuple to every bolt that subscribed to that stream.
+
+![A Storm topology](images/topology.png)
+
+Links between nodes in your topology indicate how tuples should be passed around. For example, if there is a link between Spout A and Bolt B, a link from Spout A to Bolt C, and a link from Bolt B to Bolt C, then every time Spout A emits a tuple, it will send the tuple to both Bolt B and Bolt C. All of Bolt B's output tuples will go to Bolt C as well.
+
+Each node in a Storm topology executes in parallel. In your topology, you can specify how much parallelism you want for each node, and then Storm will spawn that number of threads across the cluster to do the execution.
+
+A topology runs forever, or until you kill it. Storm will automatically reassign any failed tasks. Additionally, Storm guarantees that there will be no data loss, even if machines go down and messages are dropped.
+
+## Data model
+
+Storm uses tuples as its data model. A tuple is a named list of values, and a field in a tuple can be an object of any type. Out of the box, Storm supports all the primitive types, strings, and byte arrays as tuple field values. To use an object of another type, you just need to implement [a serializer](Serialization.html) for the type.
+
+Every node in a topology must declare the output fields for the tuples it emits. For example, this bolt declares that it emits 2-tuples with the fields "double" and "triple":
+
+```java
+public class DoubleAndTripleBolt extends BaseRichBolt {
+    private OutputCollectorBase _collector;
+
+    @Override
+    public void prepare(Map conf, TopologyContext context, OutputCollectorBase collector) {
+        _collector = collector;
+    }
+
+    @Override
+    public void execute(Tuple input) {
+        int val = input.getInteger(0);        
+        _collector.emit(input, new Values(val*2, val*3));
+        _collector.ack(input);
+    }
+
+    @Override
+    public void declareOutputFields(OutputFieldsDeclarer declarer) {
+        declarer.declare(new Fields("double", "triple"));
+    }    
+}
+```
+
+The `declareOutputFields` function declares the output fields `["double", "triple"]` for the component. The rest of the bolt will be explained in the upcoming sections.
+
+## A simple topology
+
+Let's take a look at a simple topology to explore the concepts more and see how the code shapes up. Let's look at the `ExclamationTopology` definition from storm-starter:
+
+```java
+TopologyBuilder builder = new TopologyBuilder();        
+builder.setSpout("words", new TestWordSpout(), 10);        
+builder.setBolt("exclaim1", new ExclamationBolt(), 3)
+        .shuffleGrouping("words");
+builder.setBolt("exclaim2", new ExclamationBolt(), 2)
+        .shuffleGrouping("exclaim1");
+```
+
+This topology contains a spout and two bolts. The spout emits words, and each bolt appends the string "!!!" to its input. The nodes are arranged in a line: the spout emits to the first bolt which then emits to the second bolt. If the spout emits the tuples ["bob"] and ["john"], then the second bolt will emit the words ["bob!!!!!!"] and ["john!!!!!!"].
+
+This code defines the nodes using the `setSpout` and `setBolt` methods. These methods take as input a user-specified id, an object containing the processing logic, and the amount of parallelism you want for the node. In this example, the spout is given id "words" and the bolts are given ids "exclaim1" and "exclaim2". 
+
+The object containing the processing logic implements the [IRichSpout](/javadoc/apidocs/backtype/storm/topology/IRichSpout.html) interface for spouts and the [IRichBolt](/javadoc/apidocs/backtype/storm/topology/IRichBolt.html) interface for bolts.
+
+The last parameter, how much parallelism you want for the node, is optional. It indicates how many threads should execute that component across the cluster. If you omit it, Storm will only allocate one thread for that node.
+
+`setBolt` returns an [InputDeclarer](/javadoc/apidocs/backtype/storm/topology/InputDeclarer.html) object that is used to define the inputs to the Bolt. Here, component "exclaim1" declares that it wants to read all the tuples emitted by component "words" using a shuffle grouping, and component "exclaim2" declares that it wants to read all the tuples emitted by component "exclaim1" using a shuffle grouping. "shuffle grouping" means that tuples should be randomly distributed from the input tasks to the bolt's tasks. There are many ways to group data between components. These will be explained in a few sections.
+
+If you wanted component "exclaim2" to read all the tuples emitted by both component "words" and component "exclaim1", you would write component "exclaim2"'s definition like this:
+
+```java
+builder.setBolt("exclaim2", new ExclamationBolt(), 5)
+            .shuffleGrouping("words")
+            .shuffleGrouping("exclaim1");
+```
+
+As you can see, input declarations can be chained to specify multiple sources for the Bolt.
+
+Let's dig into the implementations of the spouts and bolts in this topology. Spouts are responsible for emitting new messages into the topology. `TestWordSpout` in this topology emits a random word from the list ["nathan", "mike", "jackson", "golda", "bertels"] as a 1-tuple every 100ms. The implementation of `nextTuple()` in TestWordSpout looks like this:
+
+```java
+public void nextTuple() {
+    Utils.sleep(100);
+    final String[] words = new String[] {"nathan", "mike", "jackson", "golda", "bertels"};
+    final Random rand = new Random();
+    final String word = words[rand.nextInt(words.length)];
+    _collector.emit(new Values(word));
+}
+```
+
+As you can see, the implementation is very straightforward.
+
+`ExclamationBolt` appends the string "!!!" to its input. Let's take a look at the full implementation for `ExclamationBolt`:
+
+```java
+public static class ExclamationBolt implements IRichBolt {
+    OutputCollector _collector;
+
+    @Override
+    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
+        _collector = collector;
+    }
+
+    @Override
+    public void execute(Tuple tuple) {
+        _collector.emit(tuple, new Values(tuple.getString(0) + "!!!"));
+        _collector.ack(tuple);
+    }
+
+    @Override
+    public void cleanup() {
+    }
+
+    @Override
+    public void declareOutputFields(OutputFieldsDeclarer declarer) {
+        declarer.declare(new Fields("word"));
+    }
+    
+    @Override
+    public Map<String, Object> getComponentConfiguration() {
+        return null;
+    }
+}
+```
+
+The `prepare` method provides the bolt with an `OutputCollector` that is used for emitting tuples from this bolt. Tuples can be emitted at any time from the bolt -- in the `prepare`, `execute`, or `cleanup` methods, or even asynchronously in another thread. This `prepare` implementation simply saves the `OutputCollector` as an instance variable to be used later on in the `execute` method.
+
+The `execute` method receives a tuple from one of the bolt's inputs. The `ExclamationBolt` grabs the first field from the tuple and emits a new tuple with the string "!!!" appended to it. If you implement a bolt that subscribes to multiple input sources, you can find out which component the [Tuple](/javadoc/apidocs/backtype/storm/tuple/Tuple.html) came from by using the `Tuple#getSourceComponent` method.
+
+There are a few other things going on in the `execute` method, namely that the input tuple is passed as the first argument to `emit` and the input tuple is acked on the final line. These are part of Storm's reliability API for guaranteeing no data loss and will be explained later in this tutorial.
+
+The `cleanup` method is called when a Bolt is being shut down and should clean up any resources that were opened. There's no guarantee that this method will be called on the cluster: for example, if the machine the task is running on blows up, there's no way to invoke the method. The `cleanup` method is intended for when you run topologies in [local mode](Local-mode.html) (where a Storm cluster is simulated in process), and you want to be able to run and kill many topologies without suffering any resource leaks.
+
+The `declareOutputFields` method declares that the `ExclamationBolt` emits 1-tuples with one field called "word".
+
+The `getComponentConfiguration` method allows you to configure various aspects of how this component runs. This is a more advanced topic that is explained further on [Configuration](Configuration.html).
+
+Methods like `cleanup` and `getComponentConfiguration` are often not needed in a bolt implementation. You can define bolts more succinctly by using a base class that provides default implementations where appropriate. `ExclamationBolt` can be written more succinctly by extending `BaseRichBolt`, like so:
+
+```java
+public static class ExclamationBolt extends BaseRichBolt {
+    OutputCollector _collector;
+
+    @Override
+    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
+        _collector = collector;
+    }
+
+    @Override
+    public void execute(Tuple tuple) {
+        _collector.emit(tuple, new Values(tuple.getString(0) + "!!!"));
+        _collector.ack(tuple);
+    }
+
+    @Override
+    public void declareOutputFields(OutputFieldsDeclarer declarer) {
+        declarer.declare(new Fields("word"));
+    }    
+}
+```
+
+## Running ExclamationTopology in local mode
+
+Let's see how to run the `ExclamationTopology` in local mode and see that it's working.
+
+Storm has two modes of operation: local mode and distributed mode. In local mode, Storm executes completely in process by simulating worker nodes with threads. Local mode is useful for testing and development of topologies. When you run the topologies in storm-starter, they'll run in local mode and you'll be able to see what messages each component is emitting. You can read more about running topologies in local mode on [Local mode](Local-mode.html).
+
+In distributed mode, Storm operates as a cluster of machines. When you submit a topology to the master, you also submit all the code necessary to run the topology. The master will take care of distributing your code and allocating workers to run your topology. If workers go down, the master will reassign them somewhere else. You can read more about running topologies on a cluster on [Running topologies on a production cluster](Running-topologies-on-a-production-cluster.html).
+
+Here's the code that runs `ExclamationTopology` in local mode:
+
+```java
+Config conf = new Config();
+conf.setDebug(true);
+conf.setNumWorkers(2);
+
+LocalCluster cluster = new LocalCluster();
+cluster.submitTopology("test", conf, builder.createTopology());
+Utils.sleep(10000);
+cluster.killTopology("test");
+cluster.shutdown();
+```
+
+First, the code defines an in-process cluster by creating a `LocalCluster` object. Submitting topologies to this virtual cluster is identical to submitting topologies to distributed clusters. It submits a topology to the `LocalCluster` by calling `submitTopology`, which takes as arguments a name for the running topology, a configuration for the topology, and then the topology itself.
+
+The name is used to identify the topology so that you can kill it later on. A topology will run indefinitely until you kill it.
+
+The configuration is used to tune various aspects of the running topology. The two configurations specified here are very common:
+
+1. **TOPOLOGY_WORKERS** (set with `setNumWorkers`) specifies how many _processes_ you want allocated around the cluster to execute the topology. Each component in the topology executes as some number of _threads_. The number of threads allocated to a given component is configured through the `setBolt` and `setSpout` methods. Those _threads_ exist within worker _processes_. Each worker _process_ contains within it some number of _threads_ for some number of components. For instance, you may have 300 threads specified across all your components and 50 worker processes specified in your config. Each worker process will then execute 6 threads, each of which could belong to a different component. You tune the performance of Storm topologies by tweaking the parallelism for each component and the number of worker processes those threads should run within.
+2. **TOPOLOGY_DEBUG** (set with `setDebug`), when set to true, tells Storm to log every message emitted by a component. This is useful in local mode when testing topologies, but you probably want to keep this turned off when running topologies on the cluster.
+
+There are many other configurations you can set for the topology. The various configurations are detailed on [the Javadoc for Config](/javadoc/apidocs/backtype/storm/Config.html).
+
+To learn about how to set up your development environment so that you can run topologies in local mode (such as in Eclipse), see [Creating a new Storm project](Creating-a-new-Storm-project.html).
+
+## Stream groupings
+
+A stream grouping tells a topology how to send tuples between two components. Remember, spouts and bolts execute in parallel as many tasks across the cluster. If you look at how a topology is executing at the task level, it looks something like this:
+
+![Tasks in a topology](images/topology-tasks.png)
+
+When a task for Bolt A emits a tuple to Bolt B, which task should it send the tuple to?
+
+A "stream grouping" answers this question by telling Storm how to send tuples between sets of tasks. Before we dig into the different kinds of stream groupings, let's take a look at another topology from [storm-starter](http://github.com/apache/storm/blob/master/examples/storm-starter). This [WordCountTopology](https://github.com/apache/storm/blob/master/examples/storm-starter/src/jvm/storm/starter/WordCountTopology.java) reads sentences off of a spout and streams out of `WordCountBolt` the total number of times it has seen that word before:
+
+```java
+TopologyBuilder builder = new TopologyBuilder();
+        
+builder.setSpout("sentences", new RandomSentenceSpout(), 5);        
+builder.setBolt("split", new SplitSentence(), 8)
+        .shuffleGrouping("sentences");
+builder.setBolt("count", new WordCount(), 12)
+        .fieldsGrouping("split", new Fields("word"));
+```
+
+`SplitSentence` emits a tuple for each word in each sentence it receives, and `WordCount` keeps a map in memory from word to count. Each time `WordCount` receives a word, it updates its state and emits the new word count.
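
The state kept by `WordCount` is just an in-memory map from word to count. The update step can be sketched in plain Java (a simplified stand-in for the bolt's `execute` method, without the Storm classes):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for the WordCount bolt's state update: each incoming
// word bumps an in-memory counter, and the new count is what the bolt
// would emit downstream.
public class WordCountSketch {
    private final Map<String, Integer> counts = new HashMap<>();

    // Returns the updated count, i.e. the value the bolt would emit.
    public int process(String word) {
        int count = counts.getOrDefault(word, 0) + 1;
        counts.put(word, count);
        return count;
    }

    public static void main(String[] args) {
        WordCountSketch bolt = new WordCountSketch();
        System.out.println(bolt.process("the")); // 1
        System.out.println(bolt.process("cow")); // 1
        System.out.println(bolt.process("the")); // 2
    }
}
```

As the next section explains, this only produces correct counts if the same word is always routed to the same task.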
+
+There are a few different kinds of stream groupings.
+
+The simplest kind of grouping is called a "shuffle grouping" which sends the tuple to a random task. A shuffle grouping is used in the `WordCountTopology` to send tuples from `RandomSentenceSpout` to the `SplitSentence` bolt. It has the effect of evenly distributing the work of processing the tuples across all of `SplitSentence` bolt's tasks.
+
+A more interesting kind of grouping is the "fields grouping". A fields grouping is used between the `SplitSentence` bolt and the `WordCount` bolt. It is critical for the functioning of the `WordCount` bolt that the same word always go to the same task. Otherwise, more than one task will see the same word, and they'll each emit incorrect values for the count since each has incomplete information. A fields grouping lets you group a stream by a subset of its fields. This causes equal values for that subset of fields to go to the same task. Since `WordCount` subscribes to `SplitSentence`'s output stream using a fields grouping on the "word" field, the same word always goes to the same task and the bolt produces the correct output.
+
+Fields groupings are the basis of implementing streaming joins and streaming aggregations as well as a plethora of other use cases. Underneath the hood, fields groupings are implemented using mod hashing.
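
The mod hashing idea can be sketched in a few lines of plain Java (illustrative only, not Storm's actual task-assignment code):

```java
import java.util.Arrays;
import java.util.List;

// Sketch of mod hashing: hash the grouping fields and take the result
// modulo the number of target tasks, so equal field values always map
// to the same task index.
public class FieldsGroupingSketch {
    static int chooseTask(List<?> groupingFields, int numTasks) {
        // Math.floorMod keeps the index non-negative even for negative hashes.
        return Math.floorMod(groupingFields.hashCode(), numTasks);
    }

    public static void main(String[] args) {
        // The same word always lands on the same task index.
        int a = chooseTask(Arrays.asList("storm"), 12);
        int b = chooseTask(Arrays.asList("storm"), 12);
        System.out.println(a == b); // true
    }
}
```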
+
+There are a few other kinds of stream groupings. You can read more about them on [Concepts](Concepts.html).
+
+## Defining Bolts in other languages
+
+Bolts can be defined in any language. Bolts written in another language are executed as subprocesses, and Storm communicates with those subprocesses with JSON messages over stdin/stdout. The communication protocol just requires an ~100 line adapter library, and Storm ships with adapter libraries for Ruby, Python, and Fancy. 
+
+Here's the definition of the `SplitSentence` bolt from `WordCountTopology`:
+
+```java
+public static class SplitSentence extends ShellBolt implements IRichBolt {
+    public SplitSentence() {
+        super("python", "splitsentence.py");
+    }
+
+    public void declareOutputFields(OutputFieldsDeclarer declarer) {
+        declarer.declare(new Fields("word"));
+    }
+}
+```
+
+`SplitSentence` extends `ShellBolt` and declares that it runs using `python` with the argument `splitsentence.py`. Here's the implementation of `splitsentence.py`:
+
+```python
+import storm
+
+class SplitSentenceBolt(storm.BasicBolt):
+    def process(self, tup):
+        words = tup.values[0].split(" ")
+        for word in words:
+          storm.emit([word])
+
+SplitSentenceBolt().run()
+```
+
+For more information on writing spouts and bolts in other languages, and to learn about how to create topologies in other languages (and avoid the JVM completely), see [Using non-JVM languages with Storm](Using-non-JVM-languages-with-Storm.html).
+
+## Guaranteeing message processing
+
+Earlier on in this tutorial, we skipped over a few aspects of how tuples are emitted. Those aspects were part of Storm's reliability API: how Storm guarantees that every message coming off a spout will be fully processed. See [Guaranteeing message processing](Guaranteeing-message-processing.html) for information on how this works and what you have to do as a user to take advantage of Storm's reliability capabilities.
+
+## Transactional topologies
+
+Storm guarantees that every message will be played through the topology at least once. A common question asked is "how do you do things like counting on top of Storm? Won't you overcount?" Storm has a feature called transactional topologies that let you achieve exactly-once messaging semantics for most computations. Read more about transactional topologies [here](Transactional-topologies.html). 
+
+## Distributed RPC
+
+This tutorial showed how to do basic stream processing on top of Storm. There's lots more things you can do with Storm's primitives. One of the most interesting applications of Storm is Distributed RPC, where you parallelize the computation of intense functions on the fly. Read more about Distributed RPC [here](Distributed-RPC.html). 
+
+## Conclusion
+
+This tutorial gave a broad overview of developing, testing, and deploying Storm topologies. The rest of the documentation dives deeper into all the aspects of using Storm.
diff --git a/documentation/Understanding-the-parallelism-of-a-Storm-topology.md b/documentation/Understanding-the-parallelism-of-a-Storm-topology.md
index 455b229..9b1e006 100644
--- a/documentation/Understanding-the-parallelism-of-a-Storm-topology.md
+++ b/documentation/Understanding-the-parallelism-of-a-Storm-topology.md
@@ -116,7 +116,7 @@
 
 * [Concepts](Concepts.html)
 * [Configuration](Configuration.html)
-* [Running topologies on a production cluster](Running-topologies-on-a-production-cluster.html)
+* [Running topologies on a production cluster](Running-topologies-on-a-production-cluster.html)
 * [Local mode](Local-mode.html)
-* [Tutorial](/tutorial.html)
+* [Tutorial](Tutorial.html)
 * [Storm API documentation](/javadoc/apidocs/), most notably the class ``Config``
diff --git a/documentation/Windowing.md b/documentation/Windowing.md
new file mode 100644
index 0000000..44512f7
--- /dev/null
+++ b/documentation/Windowing.md
@@ -0,0 +1,239 @@
+---
+title: Windowing Support in Core Storm
+layout: documentation
+documentation: true
+---
+
+Storm core has support for processing a group of tuples that falls within a window. Windows are specified with the
+following two parameters:
+
+1. Window length - the length or duration of the window
+2. Sliding interval - the interval at which the windowing slides
+
+## Sliding Window
+
+Tuples are grouped in windows and the window slides at every sliding interval. A tuple can belong to more than one window.
+
+For example, consider a time-based sliding window with a length of 10 seconds and a sliding interval of 5 seconds:
+
+```
+| e1 e2 | e3 e4 e5 e6 | e7 e8 e9 |...
+0       5             10         15    -> time
+
+|<------- w1 -------->|
+        |------------ w2 ------->|
+```
+
+The window is evaluated every 5 seconds and some of the tuples in the first window overlap with the second one.
+
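Which windows a given tuple falls into follows from the window length and sliding interval. A small sketch of that arithmetic (illustrative only, not Storm internals; timestamps in seconds):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: with window length L and sliding interval S, a tuple at time t
// falls into every window whose end time e is a multiple of S and
// satisfies t <= e < t + L.
public class SlidingWindowsSketch {
    static List<Long> windowEndsFor(long t, long length, long slide) {
        List<Long> ends = new ArrayList<>();
        long firstEnd = ((t + slide - 1) / slide) * slide; // ceil to the next slide boundary
        for (long e = firstEnd; e < t + length; e += slide) {
            ends.add(e);
        }
        return ends;
    }

    public static void main(String[] args) {
        // With L=10s and S=5s, a tuple at t=7 belongs to the windows ending at 10 and 15.
        System.out.println(windowEndsFor(7, 10, 5)); // [10, 15]
    }
}
```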
+
+## Tumbling Window
+
+Tuples are grouped in a single window based on time or count. Any tuple belongs to only one of the windows.
+
+For example, consider a time-based tumbling window with a length of 5 seconds:
+
+```
+| e1 e2 | e3 e4 e5 e6 | e7 e8 e9 |...
+0       5             10         15    -> time
+   w1         w2            w3
+```
+
+The window is evaluated every five seconds and none of the windows overlap.
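
Since tumbling windows never overlap, assigning a tuple to its window is a single integer division (a sketch of the idea, not Storm internals):

```java
// Sketch: in a tumbling window of length L, a tuple at time t belongs to
// exactly one window, identified by t / L (integer division).
public class TumblingWindowSketch {
    static long windowIndex(long t, long length) {
        return t / length;
    }

    public static void main(String[] args) {
        // With L=5s: t=3 -> window 0, t=7 -> window 1, t=12 -> window 2.
        System.out.println(windowIndex(3, 5));  // 0
        System.out.println(windowIndex(7, 5));  // 1
        System.out.println(windowIndex(12, 5)); // 2
    }
}
```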
+
+Storm supports specifying the window length and sliding intervals as a count of the number of tuples or as a time duration.
+
+The bolt interface `IWindowedBolt` is implemented by bolts that need windowing support.
+
+```java
+public interface IWindowedBolt extends IComponent {
+    void prepare(Map stormConf, TopologyContext context, OutputCollector collector);
+    /**
+     * Process tuples falling within the window and optionally emit 
+     * new tuples based on the tuples in the input window.
+     */
+    void execute(TupleWindow inputWindow);
+    void cleanup();
+}
+```
+
+Every time the window activates, the `execute` method is invoked. The `TupleWindow` parameter gives access to the current tuples
+in the window, the tuples that expired, and the new tuples that were added since the last window was computed, which is useful
+for efficient windowing computations.
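
The expired/new tuple accessors make incremental aggregations possible. For instance, a windowed sum can be maintained without rescanning the whole window each time (a plain-Java sketch, not the actual `TupleWindow` API):

```java
// Sketch of an incremental windowed sum: instead of rescanning every value
// in the window, add the values that just entered the window and subtract
// the ones that just expired from it.
public class IncrementalSumSketch {
    private long sum = 0;

    long onWindow(long[] newValues, long[] expiredValues) {
        for (long v : newValues) {
            sum += v;
        }
        for (long v : expiredValues) {
            sum -= v;
        }
        return sum;
    }

    public static void main(String[] args) {
        IncrementalSumSketch s = new IncrementalSumSketch();
        System.out.println(s.onWindow(new long[]{1, 2, 3}, new long[]{}));  // 6
        System.out.println(s.onWindow(new long[]{4, 5}, new long[]{1, 2})); // 12
    }
}
```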
+
+Bolts that need windowing support typically extend `BaseWindowedBolt`, which provides the APIs for specifying the
+window length and sliding interval. For example:
+
+```java
+public class SlidingWindowBolt extends BaseWindowedBolt {
+    private OutputCollector collector;
+
+    @Override
+    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
+        this.collector = collector;
+    }
+
+    @Override
+    public void execute(TupleWindow inputWindow) {
+        // do the windowing computation; here we simply count the tuples in the window
+        int count = 0;
+        for (Tuple tuple : inputWindow.get()) {
+            count++;
+        }
+        // emit the results
+        collector.emit(new Values(count));
+    }
+}
+
+public static void main(String[] args) throws Exception {
+    TopologyBuilder builder = new TopologyBuilder();
+    builder.setSpout("spout", new RandomSentenceSpout(), 1);
+    builder.setBolt("slidingwindowbolt",
+                    new SlidingWindowBolt().withWindow(new Count(30), new Count(10)),
+                    1).shuffleGrouping("spout");
+    Config conf = new Config();
+    conf.setDebug(true);
+    conf.setNumWorkers(1);
+
+    StormSubmitter.submitTopologyWithProgressBar(args[0], conf, builder.createTopology());
+}
+```
+
+The following window configurations are supported:
+
+* `withWindow(Count windowLength, Count slidingInterval)`: tuple count based sliding window that slides after `slidingInterval` number of tuples.
+* `withWindow(Count windowLength)`: tuple count based window that slides with every incoming tuple.
+* `withWindow(Count windowLength, Duration slidingInterval)`: tuple count based sliding window that slides after `slidingInterval` time duration.
+* `withWindow(Duration windowLength, Duration slidingInterval)`: time duration based sliding window that slides after `slidingInterval` time duration.
+* `withWindow(Duration windowLength)`: time duration based window that slides with every incoming tuple.
+* `withWindow(Duration windowLength, Count slidingInterval)`: time duration based sliding window that slides after `slidingInterval` number of tuples.
+* `withTumblingWindow(BaseWindowedBolt.Count count)`: count based tumbling window that tumbles after the specified count of tuples.
+* `withTumblingWindow(BaseWindowedBolt.Duration duration)`: time duration based tumbling window that tumbles after the specified time duration.
+
+## Tuple timestamp and out of order tuples
+By default the timestamp tracked in the window is the time when the tuple is processed by the bolt. The window calculations
+are performed based on the processing timestamp. Storm has support for tracking windows based on the source generated timestamp.
+
+```java
+/**
+* Specify a field in the tuple that represents the timestamp as a long value. If this
+* field is not present in the incoming tuple, an {@link IllegalArgumentException} will be thrown.
+*
+* @param fieldName the name of the field that contains the timestamp
+*/
+public BaseWindowedBolt withTimestampField(String fieldName)
+```
+
+The value of the `fieldName` field will be looked up from the incoming tuple and considered for windowing calculations.
+If the field is not present in the tuple, an exception will be thrown. Along with the timestamp field name, a time lag parameter
+can also be specified, which indicates the maximum time limit for tuples with out-of-order timestamps.
+
+For example, if the lag is 5 secs and a tuple `t1` arrives with timestamp `06:00:05`, no tuples may arrive with a timestamp earlier than `06:00:00`. If a tuple
+arrives with timestamp `05:59:59` after `t1` and the window has moved past `t1`, it will be treated as a late tuple and not processed. Currently, late
+tuples are just logged in the worker log files at INFO level.
+
+```java
+/**
+* Specify the maximum time lag of the tuple timestamp in milliseconds. It means that the tuple timestamps
+* cannot be out of order by more than this amount.
+*
+* @param duration the max lag duration
+*/
+public BaseWindowedBolt withLag(Duration duration)
+```
+
+### Watermarks
+For processing tuples with a timestamp field, Storm internally computes watermarks based on the incoming tuple timestamps. A watermark is
+the minimum of the latest tuple timestamps (minus the lag) across all the input streams. At a higher level this is similar to the watermark concept
+used by Flink and Google's MillWheel for tracking event based timestamps.
+
+Periodically (by default, every second), the watermark timestamps are emitted, and these act as the clock ticks for the window calculation when
+tuple based timestamps are in use. The interval at which watermarks are emitted can be changed with the API below.
+ 
+```java
+/**
+* Specify the watermark event generation interval. For tuple based timestamps, watermark events
+* are used to track the progress of time
+*
+* @param interval the interval at which watermark events are generated
+*/
+public BaseWindowedBolt withWatermarkInterval(Duration interval)
+```
+
+
+When a watermark is received, all windows up to that timestamp will be evaluated.
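
The watermark rule described above can be sketched as follows (illustrative only, not Storm's internal code):

```java
// Sketch: the watermark is the minimum, across all input streams, of the
// latest tuple timestamp seen on that stream, minus the allowed lag.
public class WatermarkSketch {
    static long watermark(long[] latestTsPerStream, long lagMillis) {
        long min = Long.MAX_VALUE;
        for (long ts : latestTsPerStream) {
            min = Math.min(min, ts);
        }
        return min - lagMillis;
    }

    public static void main(String[] args) {
        // One stream whose latest tuple timestamp is 36,000 ms past the hour
        // (i.e. 6:00:36), with a 5 s lag: the watermark is 31,000 ms (6:00:31).
        System.out.println(watermark(new long[]{36_000}, 5_000)); // 31000
    }
}
```

This matches the worked example below, where the latest tuple `e6(6:00:36)` with a 5 s lag yields the watermark `6:00:31`.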
+
+For example, consider tuple timestamp based processing with following window parameters,
+
+`Window length = 20s, sliding interval = 10s, watermark emit frequency = 1s, max lag = 5s`
+
+```
+|-----|-----|-----|-----|-----|-----|-----|
+0     10    20    30    40    50    60    70
+```
+
+Current ts = `09:00:00`
+
+Tuples `e1(6:00:03), e2(6:00:05), e3(6:00:07), e4(6:00:18), e5(6:00:26), e6(6:00:36)` are received between `9:00:00` and `9:00:01`
+
+At time t = `09:00:01`, watermark w1 = `6:00:31` is emitted since no tuples earlier than `6:00:31` can arrive.
+
+Three windows will be evaluated. The first window end ts (06:00:10) is computed by taking the earliest event timestamp (06:00:03) 
+and computing the ceiling based on the sliding interval (10s).
+
+1. `5:59:50 - 06:00:10` with tuples e1, e2, e3
+2. `6:00:00 - 06:00:20` with tuples e1, e2, e3, e4
+3. `6:00:10 - 06:00:30` with tuples e4, e5
+
+e6 is not evaluated since watermark timestamp `6:00:31` is older than the tuple ts `6:00:36`.
+
+Tuples `e7(8:00:25), e8(8:00:26), e9(8:00:27), e10(8:00:39)` are received between `9:00:01` and `9:00:02`
+
+At time t = `09:00:02` another watermark w2 = `08:00:34` is emitted since no tuples earlier than `8:00:34` can arrive now.
+
+Three windows will be evaluated,
+
+1. `6:00:20 - 06:00:40` with tuples e5, e6 (from earlier batch)
+2. `6:00:30 - 06:00:50` with tuple e6 (from earlier batch)
+3. `8:00:10 - 08:00:30` with tuples e7, e8, e9
+
+e10 is not evaluated since the tuple ts `8:00:39` is beyond the watermark time `8:00:34`.
+
+The window calculation considers the time gaps and computes the windows based on the tuple timestamp.
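
The "ceiling" step used above to derive the first window end timestamp can be sketched as (timestamps in seconds past the hour):

```java
// Sketch: round the earliest event timestamp up to the next multiple of
// the sliding interval to get the end timestamp of the first window.
public class WindowEndSketch {
    static long firstWindowEnd(long earliestTs, long slideSecs) {
        return ((earliestTs + slideSecs - 1) / slideSecs) * slideSecs;
    }

    public static void main(String[] args) {
        // Earliest event at 06:00:03 (3 s past the hour), slide 10 s -> 06:00:10.
        System.out.println(firstWindowEnd(3, 10)); // 10
    }
}
```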
+
+## Guarantees
+The windowing functionality in Storm core currently provides at-least-once guarantees. The values emitted from the bolt's
+`execute(TupleWindow inputWindow)` method are automatically anchored to all the tuples in the `inputWindow`. The downstream
+bolts are expected to ack the received tuple (i.e. the tuple emitted from the windowed bolt) to complete the tuple tree.
+If not, the tuples will be replayed and the windowing computation will be re-evaluated.
+
+The tuples in the window are automatically acked when they expire, i.e. when they fall out of the window after
+`windowLength + slidingInterval`. Note that the configuration `topology.message.timeout.secs` should be sufficiently larger
+than `windowLength + slidingInterval` for time based windows; otherwise the tuples will time out, get replayed, and can result
+in duplicate evaluations. For count based windows, the configuration should be adjusted such that `windowLength + slidingInterval`
+tuples can be received within the timeout period.
+
+## Example topology
+An example topology, `SlidingWindowTopology`, shows how to use the APIs to compute a sliding window sum and a tumbling window
+average.
+
diff --git a/documentation/cgroups_in_storm.md b/documentation/cgroups_in_storm.md
new file mode 100644
index 0000000..bf61bba
--- /dev/null
+++ b/documentation/cgroups_in_storm.md
@@ -0,0 +1,65 @@
+# CGroups in Storm
+
+CGroups are used by Storm to limit the resource usage of workers to guarantee fairness and QOS.  
+
+**Please note: CGroups is currently supported only on Linux platforms (kernel version 2.6.24 and above)** 
+
+## Setup
+
+To use CGroups, make sure cgroups is installed and configured correctly. For more information about setting up and configuring cgroups, please visit:
+
+https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Resource_Management_Guide/ch-Using_Control_Groups.html
+
+A sample/default cgconfig.conf file is supplied in the `<stormroot>/conf` directory.  The contents are as follows:
+
+```
+mount {
+	cpuset	= /cgroup/cpuset;
+	cpu	= /cgroup/storm_resources;
+	cpuacct	= /cgroup/cpuacct;
+	memory	= /cgroup/storm_resources;
+	devices	= /cgroup/devices;
+	freezer	= /cgroup/freezer;
+	net_cls	= /cgroup/net_cls;
+	blkio	= /cgroup/blkio;
+}
+
+group storm {
+       perm {
+               task {
+                      uid = 500;
+                      gid = 500;
+               }
+               admin {
+                      uid = 500;
+                      gid = 500;
+               }
+       }
+       cpu {
+       }
+}
+```
+
+For a more detailed explanation of the format and configs for the cgconfig.conf file, please visit:
+
+https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Resource_Management_Guide/ch-Using_Control_Groups.html#The_cgconfig.conf_File
+
+# Settings Related To CGroups in Storm
+
+| Setting                       | Function                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            |
+|-------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| storm.cgroup.enable                | This config is used to set whether or not cgroups will be used.  Set "true" to enable use of cgroups.  Set "false" to not use cgroups. When this config is set to false, unit tests related to cgroups will be skipped. Default set to "false"                                                                                                                                                                                                                                                                                         |
+| storm.cgroup.hierarchy.dir   | The path to the cgroup hierarchy that storm will use.  Default set to "/cgroup/storm_resources"                                                                                                                                                                                                                                                                                                                                                                                                                                     |
+| storm.cgroup.resources       | A list of subsystems that will be regulated by CGroups. Default set to cpu and memory.  Currently only cpu and memory are supported                                                                                                                                                                                                                                                                                                                                                                                                                                               |
+| storm.supervisor.cgroup.rootdir     | The root cgroup used by the supervisor.  The path to the cgroup will be \<storm.cgroup.hierarchy.dir>/\<storm.supervisor.cgroup.rootdir>.  Default set to "storm"                                                                                                                                                                                                                                                                                                                                                                           |
+| storm.cgroup.cgexec.cmd            | Absolute path to the cgexec command used to launch workers within a cgroup. Default set to "/bin/cgexec"                                                                                                                                                                                                                                                                                                                                                                                                                            |
+| storm.worker.cgroup.memory.mb.limit | The memory limit in MB for each worker.  This can be set on a per supervisor node basis.  This config is used to set the cgroup config memory.limit_in_bytes.  For more details about memory.limit_in_bytes, please visit:  https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Resource_Management_Guide/sec-memory.html.    Please note, if you are using the Resource Aware Scheduler, please do NOT set this config as this config will override the values calculated by the Resource Aware Scheduler |
+| storm.worker.cgroup.cpu.limit       | The cpu share for each worker. This can be set on a per supervisor node basis.  This config is used to set the cgroup config cpu.share. For more details about cpu.share, please visit:   https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Resource_Management_Guide/sec-cpu.html. Please note, if you are using the Resource Aware Scheduler, please do NOT set this config as this config will override the values calculated by the Resource Aware Scheduler.                                       |
+
+Since limiting CPU usage via cpu.shares only limits the proportional CPU usage of a process, to cap the total CPU usage of all the worker processes on a supervisor node, set the config supervisor.cpu.capacity. Each increment represents 1% of a core, so a user who sets supervisor.cpu.capacity: 200 is allowing the use of 2 cores.
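The capacity arithmetic above is a straightforward percent-to-cores conversion; a minimal sketch (the class and method names are illustrative, not Storm's):

```java
// Sketch of the supervisor.cpu.capacity arithmetic: each increment is 1% of a
// core, so a capacity of 200 corresponds to 2 full cores.
public class CpuCapacity {
    public static double capacityToCores(int capacityPercent) {
        return capacityPercent / 100.0;
    }
}
```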
+
+## Integration with Resource Aware Scheduler
+
+CGroups can be used in conjunction with the Resource Aware Scheduler.  CGroups will then enforce the resource usage of workers as allocated by the Resource Aware Scheduler.  To use cgroups with the Resource Aware Scheduler, simply enable cgroups and be sure NOT to set storm.worker.cgroup.memory.mb.limit and storm.worker.cgroup.cpu.limit configs.
+
+
diff --git a/documentation/distcache-blobstore.md b/documentation/distcache-blobstore.md
new file mode 100644
index 0000000..66cbf38
--- /dev/null
+++ b/documentation/distcache-blobstore.md
@@ -0,0 +1,740 @@
+---
+title: Storm Distributed Cache API
+layout: documentation
+documentation: true
+---
+# Storm Distributed Cache API
+
+The distributed cache feature in storm is used to efficiently distribute files
+(or blobs, which is the equivalent terminology for a file in the distributed
+cache and is used interchangeably in this document) that are large and can
+change during the lifetime of a topology, such as geo-location data,
+dictionaries, etc. Typical use cases include phrase recognition, entity
+extraction, document classification, URL re-writing, location/address detection
+and so forth. Such files may be several KB to several GB in size. For small
+datasets that don't need dynamic updates, including them in the topology jar
+could be fine. But for large files, the startup times could become very large.
+In these cases, the distributed cache feature can provide fast topology startup,
+especially if the files were previously downloaded for the same submitter and
+are still in the cache. This is useful with frequent deployments, sometimes a few
+times a day with updated jars, because the large cached blobs that do not change
+frequently remain available without being re-downloaded.
+
+At the starting time of a topology, the user specifies the set of files the
+topology needs. Once a topology is running, the user can at any time request that
+any file in the distributed cache be updated with a newer version. The
+updating of blobs happens in an eventual consistency model. If the topology
+needs to know what version of a file it has access to, it is the responsibility
+of the user to find this information out. The files are stored in a cache with a
+Least-Recently-Used (LRU) eviction policy, where the supervisor decides which
+cached files are no longer needed and can delete them to free disk space. The
+blobs can be compressed, and the user can request that the blobs be uncompressed
+before they are accessed.
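The LRU eviction policy mentioned above can be sketched with a size-bounded, access-ordered `LinkedHashMap`. This is a toy model that evicts by entry count; the real supervisor tracks on-disk blob sizes against a configurable target:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of LRU eviction: the least-recently-accessed entry is
// dropped once the cache exceeds its bound.
public class BlobLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public BlobLruCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true gives LRU iteration order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict the least-recently-used blob
    }
}
```

Accessing a blob (via `get`) refreshes it, so a recently used blob survives eviction while a stale one does not.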
+
+## Motivation for Distributed Cache
+* Allows sharing blobs among topologies.
+* Allows updating the blobs from the command line.
+
+## Distributed Cache Implementations
+The current BlobStore interface has the following two implementations:
+* LocalFsBlobStore
+* HdfsBlobStore
+
+Appendix A contains the interface for blobstore implementation.
+
+## LocalFsBlobStore
+![LocalFsBlobStore](images/local_blobstore.png)
+
+The above timeline diagram depicts the local file system implementation of the blobstore.
+
+There are several stages from blob creation to blob download and the corresponding execution of a topology.
+The main stages are as follows.
+
+### Blob Creation Command
+Blobs in the blobstore can be created through command line using the following command.
+
+```
+storm blobstore create --file README.txt --acl o::rwa --repl-fctr 4 key1
+```
+
+The above command creates a blob with the key name “key1” from the file README.txt.
+All users are given read, write and admin access, and the replication factor is set to 4.
+
+### Topology Submission and Blob Mapping
+Users can submit their topology with the following command. The command includes the 
+topology map configuration. The configuration holds two keys “key1” and “key2” with the 
+key “key1” having a local file name mapping named “blob_file” and it is not compressed.
+
+```
+storm jar /home/y/lib/storm-starter/current/storm-starter-jar-with-dependencies.jar \
+storm.starter.clj.word_count test_topo -c topology.blobstore.map='{"key1":{"localname":"blob_file", "uncompress":"false"},"key2":{}}'
+```
+
+### Blob Creation Process
+The creation of the blob takes place through the interface “ClientBlobStore”. Appendix B contains the “ClientBlobStore” interface. 
+The concrete implementation of this interface is the “NimbusBlobStore”. In the case of the local file system, the client makes a 
+call to nimbus to create the blobs within the local file system. Nimbus uses the local file system implementation to create these blobs. 
+When a user submits a topology, the jar, configuration and code files are uploaded as blobs with the help of the blobstore. 
+All the other blobs specified by the topology are mapped to it with the help of the topology.blobstore.map configuration.
+
+### Blob Download by the Supervisor
+Finally, the blobs corresponding to a topology are downloaded by the supervisor once it receives the assignments from the nimbus through 
+the same “NimbusBlobStore” thrift client that uploaded the blobs. The supervisor downloads the code, jar and conf blobs by calling the 
+“NimbusBlobStore” client directly while the blobs specified in the topology.blobstore.map are downloaded and mapped locally with the help 
+of the Localizer. The Localizer talks to the “NimbusBlobStore” thrift client to download the blobs and adds the blob compression and local 
+blob name mapping logic to suit the implementation of a topology. Once all the blobs have been downloaded the workers are launched to run 
+the topologies.
+
+## HdfsBlobStore
+![HdfsBlobStore](images/hdfs_blobstore.png)
+
+The HdfsBlobStore functionality has a similar implementation and blob creation and download procedure, except for how replication 
+is handled in the two blobstore implementations. Replication in the HDFS blobstore is straightforward, as HDFS is equipped to handle replication, 
+and it requires no state to be stored inside zookeeper. On the other hand, the local file system blobstore requires state to be 
+stored in zookeeper in order for it to work with nimbus HA. Nimbus HA allows the local filesystem to implement the replication feature 
+seamlessly by storing in zookeeper the state about the running topologies and syncing the blobs across the various nimbuses. On the supervisor’s 
+end, the supervisor and localizer talk to the HdfsBlobStore through the “HdfsClientBlobStore” implementation.
+
+## Additional Features and Documentation
+```
+storm jar /home/y/lib/storm-starter/current/storm-starter-jar-with-dependencies.jar storm.starter.clj.word_count test_topo \
+-c topology.blobstore.map='{"key1":{"localname":"blob_file", "uncompress":"false"},"key2":{}}'
+```
+ 
+### Compression
+The blobstore allows the user to set the “uncompress” configuration to true or false. This configuration can be specified 
+in the topology.blobstore.map mentioned in the above command. This allows the user to upload a compressed file like a tarball/zip. 
+In local file system blobstore, the compressed blobs are stored on the nimbus node. The localizer code takes the responsibility to 
+uncompress the blob and store it on the supervisor node. Symbolic links to the blobs on the supervisor node are created within the worker 
+before the execution starts.
+
+### Local File Name Mapping
+Apart from compression the blobstore helps to give the blob a name that can be used by the workers. The localizer takes 
+the responsibility of mapping the blob to a local name on the supervisor node.
+
+## Additional Blobstore Implementation Details
+Blobstore uses a hashing function to create the blobs based on the key. The blobs are generally stored inside the directory specified by
+the blobstore.dir configuration. By default, it is stored under “storm.local.dir/nimbus/blobs” for local file system and a similar path on 
+hdfs file system.
+
+Once a file is submitted, the blobstore reads the configs and creates metadata for the blob with all the access control details. The metadata 
+is generally used for authorization while accessing the blobs. The blob key and version contribute to the hash code and thereby the directory 
+under “storm.local.dir/nimbus/blobs/data” where the data is placed. The blobs are generally placed in a positive-number directory like 193,822 etc.
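The key-to-directory mapping described above can be sketched as follows. The actual hashing is an implementation detail of Storm's blobstore; this only illustrates the "key hashes to a positive-number directory" idea, and the method names are hypothetical:

```java
// Hypothetical sketch of mapping a blob key to a numbered data directory.
public class BlobDirMapper {
    public static String dataDirFor(String key, String baseDir) {
        int bucket = key.hashCode() & 0x7fffffff; // force a non-negative directory name
        return baseDir + "/" + bucket;
    }
}
```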
+
+Once the topology is launched and the relevant blobs have been created, the supervisor downloads blobs related to the storm.conf, storm.ser 
+and storm.code first and all the blobs uploaded by the command line separately using the localizer to uncompress and map them to a local name 
+specified in the topology.blobstore.map configuration. The supervisor periodically updates blobs by checking for a change of version. 
+This allows the blobs to be updated on the fly, which makes it a very useful feature.
+
+For a local file system, the distributed cache on the supervisor node is set to 10240 MB as a soft limit and the clean up code attempts 
+to clean anything over the soft limit every 600 seconds based on LRU policy.
+
+The HDFS blobstore implementation handles load better by removing the burden on the nimbus to store the blobs, which avoids it becoming a bottleneck. Moreover, it provides seamless replication of blobs. On the other hand, the local file system blobstore is not very efficient in 
+replicating the blobs and is limited by the number of nimbuses. Moreover, the supervisor talks to the HDFS blobstore directly without the 
+involvement of the nimbus and thereby reduces the load and dependency on nimbus.
+
+## Highly Available Nimbus
+### Problem Statement:
+Currently the storm master, aka nimbus, is a process that runs on a single machine under supervision. In most cases the 
+nimbus failure is transient and it is restarted by the supervising process. However, sometimes when disks fail or network 
+partitions occur, nimbus goes down. Under these circumstances, the topologies run normally but no new topologies can be 
+submitted, no existing topologies can be killed/deactivated/activated, and if a supervisor node fails the 
+reassignments are not performed, resulting in performance degradation or topology failures. With this project we intend 
+to resolve this problem by running nimbus in a primary/backup mode to guarantee that even if a nimbus server fails, one 
+of the backups will take over. 
+
+### Requirements for Highly Available Nimbus:
+* Increase overall availability of nimbus.
+* Allow nimbus hosts to leave and join the cluster at any time. A newly joined host should automatically catch up and join 
+the list of potential leaders. 
+* No topology resubmissions required in case of nimbus fail overs.
+* No active topology should ever be lost. 
+
+#### Leader Election:
+The nimbus server will use the following interface:
+
+```java
+public interface ILeaderElector {
+    /**
+     * queue up for leadership lock. The call returns immediately and the caller                     
+     * must check isLeader() to perform any leadership action.
+     */
+    void addToLeaderLockQueue();
+
+    /**
+     * Removes the caller from the leader lock queue. If the caller is leader
+     * also releases the lock.
+     */
+    void removeFromLeaderLockQueue();
+
+    /**
+     *
+     * @return true if the caller currently has the leader lock.
+     */
+    boolean isLeader();
+
+    /**
+     *
+     * @return the current leader's address; throws an exception if no one has the lock.
+     */
+    InetSocketAddress getLeaderAddress();
+
+    /**
+     * 
+     * @return list of current nimbus addresses, includes leader.
+     */
+    List<InetSocketAddress> getAllNimbusAddresses();
+}
+```
+Once a nimbus comes up, it calls the addToLeaderLockQueue() function. The leader election code selects a leader from the queue.
+If the topology code, jar or config blobs are missing, the new leader downloads them from any other nimbus which is up and running.
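A minimal in-memory model of the queue semantics behind `ILeaderElector` can clarify the contract: the first host in the queue holds the lock, and leaving the queue as leader releases it to the next in line. This is a toy sketch for illustration only; the real implementation is ZooKeeper-based:

```java
import java.net.InetSocketAddress;
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

// Toy, single-process model of the leader lock queue. Each method takes the
// calling host explicitly, where the real elector knows its own identity.
public class InMemoryElector {
    private final LinkedHashSet<InetSocketAddress> queue = new LinkedHashSet<>();

    public void addToLeaderLockQueue(InetSocketAddress host) {
        queue.add(host); // join the back of the queue
    }

    public void removeFromLeaderLockQueue(InetSocketAddress host) {
        queue.remove(host); // releases the lock if host was the leader
    }

    public boolean isLeader(InetSocketAddress host) {
        return !queue.isEmpty() && queue.iterator().next().equals(host);
    }

    public InetSocketAddress getLeaderAddress() {
        if (queue.isEmpty()) throw new IllegalStateException("no one holds the lock");
        return queue.iterator().next();
    }

    public List<InetSocketAddress> getAllNimbusAddresses() {
        return new ArrayList<>(queue);
    }
}
```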
+
+The first implementation will be Zookeeper based. If the zookeeper connection is lost/reset, resulting in loss of the lock
+or the spot in the queue, the implementation will take care of updating the state such that isLeader() reflects the 
+current status. Leader-like actions must finish in less than minimumOf(connectionTimeout, sessionTimeout) to ensure
+the lock was held by nimbus for the entire duration of the action. (It is not yet decided whether to just state this expectation 
+and ensure that the zk configurations are set high enough, which results in a higher failover time, or to actually 
+create some sort of rollback mechanism for all actions; the second option needs a lot of code.) If a nimbus that is not the 
+leader receives a request that only a leader can perform, it will throw a RuntimeException.
+
+### Nimbus state store:
+
+To achieve failover from primary to backup servers, nimbus state/data needs to be replicated across all nimbus hosts or 
+stored in a distributed storage. Replicating the data correctly involves state management and consistency checks, 
+and it is hard to test for correctness. However, many storm users do not want to take an extra dependency on another replicated
+storage system like HDFS and still need high availability. The blobstore implementation, along with the state storage, helps
+to overcome failover scenarios when a leader nimbus goes down.
+
+To support replication we will allow the user to define a code replication factor, which reflects the number of nimbus 
+hosts to which the code must be replicated before starting the topology. With replication comes the issue of consistency. 
+The topology is launched once the code, jar and conf blob files are replicated based on the "topology.min.replication.count" config.
+Maintaining state for failover scenarios is important for the local file system. The current implementation makes sure one of the
+available nimbuses is elected as a leader in the case of a failure. If topology-specific blobs are missing, the leader nimbus
+tries to download them as and when they are needed. With this architecture, we do not have to download all the blobs 
+required for a topology before a nimbus can accept leadership. This helps in case the blobs are very large, and avoids 
+inadvertent delays in electing a leader.
+
+The state for every blob is relevant for the local blobstore implementation. For HDFS blobstore the replication
+is taken care by the HDFS. For handling the fail over scenarios for a local blobstore we need to store the state of the leader and
+non-leader nimbuses within the zookeeper.
+
+To make nimbus highly available, the blobstore state is stored under /storm/blobstore/key/nimbusHostPort:SequenceNumber in zookeeper. 
+This state is used in the local file system blobstore to support replication. The HDFS blobstore does not have to store state inside 
+zookeeper.
+
+* NimbusHostPort: This piece of information generally contains the parsed string holding the hostname and port of the nimbus. 
+  It uses the same class “NimbusHostPortInfo” used earlier by the code-distributor interface to store the state and parse the data.
+
+* SequenceNumber: This is the blob sequence number information. The SequenceNumber information is implemented by a KeySequenceNumber class. 
+The sequence numbers are generated for every key. For every update, the sequence numbers are assigned based on a global sequence number 
+stored under /storm/blobstoremaxsequencenumber/key. For more details about how the numbers are generated, see the javadocs for KeySequenceNumber.
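The per-key numbering scheme above can be sketched as a global counter per key. This is a hypothetical, in-memory illustration; the real KeySequenceNumber reads and updates the counter in zookeeper:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of per-key sequence numbering: each update to a blob key draws
// the next value from that key's global max-sequence counter.
public class SequenceNumbers {
    private final Map<String, Integer> maxSequence = new HashMap<>();

    // Assign the next sequence number for an update to this key.
    public int nextFor(String key) {
        int next = maxSequence.getOrDefault(key, 0) + 1;
        maxSequence.put(key, next);
        return next;
    }
}
```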
+
+![Nimbus High Availability - BlobStore](images/nimbus_ha_blobstore.png)
+
+The sequence diagram above shows how the blobstore works and how the state storage inside zookeeper makes nimbus highly available.
+Currently, the thread that syncs the blobs on a non-leader runs within nimbus. In the future, it would be nice to move this thread
+into the blobstore, so that the blobstore coordinates the state change and blob download as per the sequence diagram.
+
+## Thrift and Rest API 
+In order to avoid workers/supervisors/ui talking to zookeeper to get the master nimbus address, we are going to modify the 
+`getClusterInfo` API so it also returns nimbus information. getClusterInfo currently returns a `ClusterSummary` instance,
+which has a list of `SupervisorSummary` and a list of `TopologySummary` instances. We will add a list of `NimbusSummary` 
+to the `ClusterSummary`. See the structures below:
+
+```
+struct ClusterSummary {
+  1: required list<SupervisorSummary> supervisors;
+  3: required list<TopologySummary> topologies;
+  4: required list<NimbusSummary> nimbuses;
+}
+
+struct NimbusSummary {
+  1: required string host;
+  2: required i32 port;
+  3: required i32 uptime_secs;
+  4: required bool isLeader;
+  5: required string version;
+}
+```
+
+This will be used by the StormSubmitter, nimbus clients, supervisors and the UI to discover the current leader and participating 
+nimbus hosts. Any nimbus host will be able to respond to these requests. The nimbus hosts can read this information once 
+from zookeeper, cache it, and keep updating the cache when watchers fire to indicate changes, which should 
+be rare in the general case.
+
+Note: All nimbus hosts have watchers on zookeeper so they are notified as soon as a new blob is available for download; the callback may or may not download
+the code. Therefore, a background thread is triggered to download the respective blobs to run the topologies. Replication is achieved when the blobs are downloaded
+onto the non-leader nimbuses. So you should expect your topology submission time to be somewhere between 0 and (2 * nimbus.code.sync.freq.secs) for any 
+topology.min.replication.count > 1.
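The worst-case bound stated above follows from the background sync period; a trivial sketch of the arithmetic (illustrative names, not Storm's API):

```java
// With a background sync running every nimbus.code.sync.freq.secs, a blob may
// just miss one sync cycle and be picked up at the end of the next, so the
// extra submission latency ranges from 0 up to two sync periods.
public class SyncBound {
    public static int worstCaseSecs(int syncFreqSecs) {
        return 2 * syncFreqSecs;
    }
}
```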
+
+## Configuration
+
+```
+blobstore.dir: The directory where all blobs are stored. For local file system it represents the directory on the nimbus
+node and for HDFS file system it represents the hdfs file system path.
+
+supervisor.blobstore.class: This configuration sets the client the supervisor uses to talk to the blobstore. 
+For a local file system blobstore it is set to “backtype.storm.blobstore.NimbusBlobStore” and for the HDFS blobstore it is set 
+to “backtype.storm.blobstore.HdfsClientBlobStore”.
+
+supervisor.blobstore.download.thread.count: The number of threads the supervisor spawns in order to download 
+blobs concurrently. The default is set to 5.
+
+supervisor.blobstore.download.max_retries: The number of times the supervisor retries a blob download. 
+By default it is set to 3.
+
+supervisor.localizer.cache.target.size.mb: The distributed cache target size in MB. This is a soft limit on the size 
+of the distributed cache contents. By default it is set to 10240 MB.
+
+supervisor.localizer.cleanup.interval.ms: The distributed cache cleanup interval. Controls how often it scans to attempt to 
+cleanup anything over the cache target size. By default it is set to 600000 milliseconds.
+
+nimbus.blobstore.class:  Sets the blobstore implementation nimbus uses. It is set to "backtype.storm.blobstore.LocalFsBlobStore"
+
+nimbus.blobstore.expiration.secs: During operations with the blobstore via the master, how long a connection can be idle before nimbus 
+considers it dead and drops the session and any associated connections. The default is set to 600.
+
+storm.blobstore.inputstream.buffer.size.bytes: The buffer size it uses for blobstore upload. It is set to 65536 bytes.
+
+client.blobstore.class: The blobstore implementation the storm client uses. The current implementation uses the default 
+config "backtype.storm.blobstore.NimbusBlobStore".
+
+blobstore.replication.factor: Sets the replication for each blob within the blobstore. The “topology.min.replication.count” 
+ensures the minimum replication of the topology-specific blobs before launching the topology. You should set 
+“topology.min.replication.count” <= “blobstore.replication.factor”. The default is set to 3.
+
+topology.min.replication.count : Minimum number of nimbus hosts where the code must be replicated before leader nimbus
+can mark the topology as active and create assignments. Default is 1.
+
+topology.max.replication.wait.time.sec: Maximum wait time for the nimbus host replication to achieve topology.min.replication.count.
+Once this time has elapsed, nimbus will go ahead and perform topology activation tasks even if the required topology.min.replication.count is not achieved. 
+The default is 60 seconds; a value of -1 indicates to wait forever.
+
+nimbus.code.sync.freq.secs: Frequency at which the background thread on nimbus syncs code for locally missing blobs. Default is 2 minutes.
+```
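As a hedged illustration of how these settings fit together, a storm.yaml fragment might look like the following. The keys are the ones documented above; the values are examples, not recommendations:

```
blobstore.dir: "/var/storm/blobstore"
supervisor.blobstore.class: "backtype.storm.blobstore.NimbusBlobStore"
supervisor.blobstore.download.thread.count: 5
supervisor.blobstore.download.max_retries: 3
supervisor.localizer.cache.target.size.mb: 10240
supervisor.localizer.cleanup.interval.ms: 600000
nimbus.blobstore.class: "backtype.storm.blobstore.LocalFsBlobStore"
blobstore.replication.factor: 3
topology.min.replication.count: 2
topology.max.replication.wait.time.sec: 60
nimbus.code.sync.freq.secs: 120
```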
+
+## Using the Distributed Cache API, Command Line Interface (CLI)
+
+### Creating blobs 
+
+To use the distributed cache feature, the user first has to "introduce" files
+that need to be cached and bind them to key strings. To achieve this, the user
+uses the "blobstore create" command of the storm executable, as follows:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+storm blobstore create [-f|--file FILE] [-a|--acl ACL1,ACL2,...] [--repl-fctr NUMBER] [keyname]
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The contents come from a FILE, if provided via the -f or --file option, otherwise
+from STDIN.  
+The ACL, which can also be a comma-separated list of ACLs, is of the
+following format:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+> [u|o]:[username]:[r-|w-|a-|_]
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+where:  
+
+* u = user  
+* o = other  
+* username = user for this particular ACL  
+* r = read access  
+* w = write access  
+* a = admin access  
+* _ = ignored  
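The ACL format above is easy to split into its three fields; a hypothetical helper, illustrative only and not Storm's own parser:

```java
// Splits an ACL entry of the form [u|o]:[username]:[rwa] into its parts.
public class AclEntry {
    public final String type;      // "u" for user, "o" for other
    public final String username;  // may be empty, e.g. for "o::rwa"
    public final String perms;     // some combination of r, w, a

    public AclEntry(String acl) {
        String[] parts = acl.split(":", -1); // limit -1 keeps empty fields
        if (parts.length != 3) throw new IllegalArgumentException("bad ACL: " + acl);
        this.type = parts[0];
        this.username = parts[1];
        this.perms = parts[2];
    }
}
```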
+
+The replication factor can be set to a value greater than 1 using --repl-fctr.
+
+Note: The replication factor is currently configurable for an hdfs blobstore, but for a
+local blobstore the replication always stays at 1. For an hdfs blobstore
+the default replication is set to 3.
+
+###### Example:  
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+storm blobstore create --file README.txt --acl o::rwa --repl-fctr 4 key1
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In the above example, the *README.txt* file is added to the distributed cache.
+It can be accessed using the key string "*key1*" by any topology that needs
+it. The file is set to have read/write/admin access for others (i.e., the world),
+and the replication factor is set to 4.
+
+###### Example:  
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+storm blobstore create mytopo:data.tgz -f data.tgz -a u:alice:rwa,u:bob:rw,o::r  
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The above example creates the key *mytopo:data.tgz* using the data stored in
+data.tgz.  User alice would have full access, bob would have read/write access
+and everyone else would have read access.
+
+### Making dist. cache files accessible to topologies
+
+Once a blob is created, we can use it in topologies. This is generally achieved
+by including the key string among the configurations of a topology, with the
+following format. A shortcut is to add the configuration item on the command
+line when starting a topology by using the **-c** option:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-c topology.blobstore.map='{"[KEY]":{"localname":"[VALUE]", "uncompress":"[true|false]"}}'
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Note: Take care with the quoting; the map must reach Storm as valid JSON.
+
+The cache file would then be accessible to the topology as a local file with the
+name [VALUE].  
+The localname parameter is optional, if omitted the local cached file will have
+the same name as [KEY].  
+The uncompress parameter is optional, if omitted the local cached file will not
+be uncompressed.  Note that the key string needs to have the appropriate
+file-name-like format and extension, so it can be uncompressed correctly.
+
+###### Example:  
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+storm jar /home/y/lib/storm-starter/current/storm-starter-jar-with-dependencies.jar storm.starter.clj.word_count test_topo -c topology.blobstore.map='{"key1":{"localname":"blob_file", "uncompress":"false"},"key2":{}}'
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Note: Take care with the quoting; the map must reach Storm as valid JSON.
+
+In the above example, we start the *word_count* topology (stored in the
+*storm-starter-jar-with-dependencies.jar* file), and ask it to have access
+to the cached file stored with key string = *key1*. This file would then be
+accessible to the topology as a local file called *blob_file*, and the
+supervisor will not try to uncompress the file. Note that in our example, the
+file's content originally came from *README.txt*. We also ask for the file
+stored with the key string = *key2* to be accessible to the topology. Since
+both the optional parameters are omitted, this file will get the local name =
+*key2*, and will not be uncompressed.
+
+### Updating a cached file
+
+It is possible for the cached files to be updated while topologies are running.
+The update happens in an eventual consistency model, where the supervisors poll
+Nimbus every 30 seconds, and update their local copies. In the current version,
+it is the user's responsibility to check whether a new file is available.
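The polling model described above (supervisors periodically compare their local copy against the latest version and refresh on change) can be sketched as follows. The names are hypothetical, not Storm's internals:

```java
// Toy model of eventual-consistency blob updates: poll the remote version and
// refresh the local copy when it has changed.
public class BlobPoller {
    private int localVersion;

    public BlobPoller(int initialVersion) {
        this.localVersion = initialVersion;
    }

    // One poll cycle; returns true when the local copy was refreshed.
    public boolean pollOnce(int remoteVersion) {
        if (remoteVersion != localVersion) {
            localVersion = remoteVersion; // in the real system, download the newer blob here
            return true;
        }
        return false;
    }

    public int version() {
        return localVersion;
    }
}
```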
+
+To update a cached file, use the following command. Contents come from a FILE or
+STDIN. Write access is required to be able to update a cached file.
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+storm blobstore update [-f|--file NEW_FILE] [KEYSTRING]
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+###### Example:  
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+storm blobstore update -f updates.txt key1
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In the above example, the topologies will be presented with the contents of the
+file *updates.txt* instead of *README.txt* (from the previous example), even
+though their access by the topology is still through a file called
+*blob_file*.
+
+### Removing a cached file
+
+To remove a file from the distributed cache, use the following command. Removing
+a file requires write access.
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+storm blobstore delete [KEYSTRING]
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### Listing Blobs currently in the distributed cache blobstore
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+storm blobstore list [KEY...]
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Lists the blobs currently in the blobstore.
+
+### Reading the contents of a blob
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+storm blobstore cat [-f|--file FILE] KEY
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Reads a blob and then writes it either to a file or to STDOUT. Reading a blob
+requires read access.
+
+### Setting the access control for a blob
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+storm blobstore set-acl [-s ACL] KEY
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ACL is in the form `[uo]:[username]:[r-][w-][a-]` and can be a
+comma-separated list. Setting the ACL requires admin access.
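+###### Example:  
+
+Assuming the key *key1* from the earlier examples, the following grants the user *alice* (an illustrative name) full access, and everyone else read-only access:
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+storm blobstore set-acl -s u:alice:rwa,o::r-- key1
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~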
+
+### Update the replication factor for a blob
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+storm blobstore replication --update --repl-fctr 5 key1
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### Read the replication factor of a blob
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+storm blobstore replication --read key1
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+### Command line help
+
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+storm help blobstore
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+## Using the Distributed Cache API from Java
+
+We start by getting a ClientBlobStore object:
+
+``` java
+Config theconf = new Config();
+theconf.putAll(Utils.readStormConfig());
+ClientBlobStore clientBlobStore = Utils.getClientBlobStore(theconf);
+```
+
+The required Utils class can be imported with:
+
+```java
+import backtype.storm.utils.Utils;
+```
+
+ClientBlobStore and other blob-related classes can be imported by:
+
+```java
+import backtype.storm.blobstore.ClientBlobStore;
+import backtype.storm.blobstore.AtomicOutputStream;
+import backtype.storm.blobstore.InputStreamWithMeta;
+import backtype.storm.blobstore.BlobStoreAclHandler;
+import backtype.storm.generated.*;
+```
+
+### Creating ACLs to be used for blobs
+
+```java
+String stringBlobACL = "u:username:rwa";
+AccessControl blobACL = BlobStoreAclHandler.parseAccessControl(stringBlobACL);
+List<AccessControl> acls = new LinkedList<AccessControl>();
+acls.add(blobACL); // more ACLs can be added here
+SettableBlobMeta settableBlobMeta = new SettableBlobMeta(acls);
+settableBlobMeta.set_replication_factor(4); // Here we can set the replication factor
+```
+
+The settableBlobMeta object is what we need to create a blob in the next step. 
+
+### Creating a blob
+
+```java
+AtomicOutputStream blobStream = clientBlobStore.createBlob("some_key", settableBlobMeta);
+blobStream.write("Some String or input data".getBytes());
+blobStream.close();
+```
+
+Note that the settableBlobMeta object here comes from the previous step, creating ACLs.
+For very large files, it is recommended to write the bytes in smaller chunks (for example, 64 KB up to 1 MB per chunk).
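+A minimal sketch of such chunked writing, reusing the `clientBlobStore` and `settableBlobMeta` objects from the earlier steps (the file path and key below are illustrative):
+
+```java
+// Stream a large local file into a blob in 64 KB chunks rather than one large array.
+FileInputStream in = new FileInputStream("/path/to/large_file"); // illustrative path
+AtomicOutputStream blobStream = clientBlobStore.createBlob("large_key", settableBlobMeta);
+byte[] buffer = new byte[64 * 1024]; // 64 KB per write
+int bytesRead;
+while ((bytesRead = in.read(buffer)) != -1) {
+    blobStream.write(buffer, 0, bytesRead);
+}
+in.close();
+blobStream.close();
+```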
+
+### Updating a blob
+
+Updating is similar to creating a blob, but we get the AtomicOutputStream in a different way:
+
+```java
+String blobKey = "some_key";
+AtomicOutputStream blobStream = clientBlobStore.updateBlob(blobKey);
+blobStream.write("Some new content".getBytes()); // write the replacement contents
+blobStream.close();
+```
+
+As when creating a blob, write the bytes to the returned AtomicOutputStream and close it.
+
+### Updating the ACLs of a blob
+
+```java
+String blobKey = "some_key";
+AccessControl updateAcl = BlobStoreAclHandler.parseAccessControl("u:USER:--a");
+List<AccessControl> updateAcls = new LinkedList<AccessControl>();
+updateAcls.add(updateAcl);
+SettableBlobMeta modifiedSettableBlobMeta = new SettableBlobMeta(updateAcls);
+clientBlobStore.setBlobMeta(blobKey, modifiedSettableBlobMeta);
+
+//Now set write only
+updateAcl = BlobStoreAclHandler.parseAccessControl("u:USER:-w-");
+updateAcls = new LinkedList<AccessControl>();
+updateAcls.add(updateAcl);
+modifiedSettableBlobMeta = new SettableBlobMeta(updateAcls);
+clientBlobStore.setBlobMeta(blobKey, modifiedSettableBlobMeta);
+```
+
+### Updating and Reading the replication of a blob
+
+```java
+String blobKey = "some_key";
+BlobReplication replication = clientBlobStore.updateBlobReplication(blobKey, 5);
+int replication_factor = replication.get_replication();
+```
+
+Note: The replication factor is updated and reflected only for the HDFS blobstore.
+
+### Reading a blob
+
+```java
+String blobKey = "some_key";
+InputStreamWithMeta blobInputStream = clientBlobStore.getBlob(blobKey);
+BufferedReader r = new BufferedReader(new InputStreamReader(blobInputStream));
+StringBuilder blobContents = new StringBuilder();
+String line;
+while ((line = r.readLine()) != null) { // read the blob line by line
+    blobContents.append(line).append('\n');
+}
+r.close();
+```
+
+### Deleting a blob
+
+```java
+String blobKey = "some_key";
+clientBlobStore.deleteBlob(blobKey);
+```
+
+### Getting a list of blob keys already in the blobstore
+
+```java
+Iterator <String> stringIterator = clientBlobStore.listKeys();
+```
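+The returned iterator can then be consumed as usual; for example, a small sketch that prints every key using the `stringIterator` from above:
+
+```java
+while (stringIterator.hasNext()) {
+    System.out.println(stringIterator.next()); // one blob key per line
+}
+```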
+
+## Appendix A
+
+```java
+public abstract void prepare(Map conf, String baseDir);
+
+public abstract AtomicOutputStream createBlob(String key, SettableBlobMeta meta, Subject who) throws AuthorizationException, KeyAlreadyExistsException;
+
+public abstract AtomicOutputStream updateBlob(String key, Subject who) throws AuthorizationException, KeyNotFoundException;
+
+public abstract ReadableBlobMeta getBlobMeta(String key, Subject who) throws AuthorizationException, KeyNotFoundException;
+
+public abstract void setBlobMeta(String key, SettableBlobMeta meta, Subject who) throws AuthorizationException, KeyNotFoundException;
+
+public abstract void deleteBlob(String key, Subject who) throws AuthorizationException, KeyNotFoundException;
+
+public abstract InputStreamWithMeta getBlob(String key, Subject who) throws AuthorizationException, KeyNotFoundException;
+
+public abstract Iterator<String> listKeys(Subject who);
+
+public abstract BlobReplication getBlobReplication(String key, Subject who) throws Exception;
+
+public abstract BlobReplication updateBlobReplication(String key, int replication, Subject who) throws AuthorizationException, KeyNotFoundException, IOException
+```
+
+## Appendix B
+
+```java
+public abstract void prepare(Map conf);
+
+protected abstract AtomicOutputStream createBlobToExtend(String key, SettableBlobMeta meta) throws AuthorizationException, KeyAlreadyExistsException;
+
+public abstract AtomicOutputStream updateBlob(String key) throws AuthorizationException, KeyNotFoundException;
+
+public abstract ReadableBlobMeta getBlobMeta(String key) throws AuthorizationException, KeyNotFoundException;
+
+protected abstract void setBlobMetaToExtend(String key, SettableBlobMeta meta) throws AuthorizationException, KeyNotFoundException;
+
+public abstract void deleteBlob(String key) throws AuthorizationException, KeyNotFoundException;
+
+public abstract InputStreamWithMeta getBlob(String key) throws AuthorizationException, KeyNotFoundException;
+
+public abstract Iterator<String> listKeys();
+
+public abstract void watchBlob(String key, IBlobWatcher watcher) throws AuthorizationException;
+
+public abstract void stopWatchingBlob(String key) throws AuthorizationException;
+
+public abstract BlobReplication getBlobReplication(String Key) throws AuthorizationException, KeyNotFoundException;
+
+public abstract BlobReplication updateBlobReplication(String Key, int replication) throws AuthorizationException, KeyNotFoundException
+```
+
+## Appendix C
+
+```
+service Nimbus {
+...
+string beginCreateBlob(1: string key, 2: SettableBlobMeta meta) throws (1: AuthorizationException aze, 2: KeyAlreadyExistsException kae);
+
+string beginUpdateBlob(1: string key) throws (1: AuthorizationException aze, 2: KeyNotFoundException knf);
+
+void uploadBlobChunk(1: string session, 2: binary chunk) throws (1: AuthorizationException aze);
+
+void finishBlobUpload(1: string session) throws (1: AuthorizationException aze);
+
+void cancelBlobUpload(1: string session) throws (1: AuthorizationException aze);
+
+ReadableBlobMeta getBlobMeta(1: string key) throws (1: AuthorizationException aze, 2: KeyNotFoundException knf);
+
+void setBlobMeta(1: string key, 2: SettableBlobMeta meta) throws (1: AuthorizationException aze, 2: KeyNotFoundException knf);
+
+BeginDownloadResult beginBlobDownload(1: string key) throws (1: AuthorizationException aze, 2: KeyNotFoundException knf);
+
+binary downloadBlobChunk(1: string session) throws (1: AuthorizationException aze);
+
+void deleteBlob(1: string key) throws (1: AuthorizationException aze, 2: KeyNotFoundException knf);
+
+ListBlobsResult listBlobs(1: string session);
+
+BlobReplication getBlobReplication(1: string key) throws (1: AuthorizationException aze, 2: KeyNotFoundException knf);
+
+BlobReplication updateBlobReplication(1: string key, 2: i32 replication) throws (1: AuthorizationException aze, 2: KeyNotFoundException knf);
+...
+}
+
+struct BlobReplication {
+1: required i32 replication;
+}
+
+exception AuthorizationException {
+ 1: required string msg;
+}
+
+exception KeyNotFoundException {
+ 1: required string msg;
+}
+
+exception KeyAlreadyExistsException {
+ 1: required string msg;
+}
+
+enum AccessControlType {
+ OTHER = 1,
+ USER = 2
+ //eventually ,GROUP=3
+}
+
+struct AccessControl {
+ 1: required AccessControlType type;
+ 2: optional string name; //Name of user or group in ACL
+ 3: required i32 access; //bitmasks READ=0x1, WRITE=0x2, ADMIN=0x4
+}
+
+struct SettableBlobMeta {
+ 1: required list<AccessControl> acl;
+ 2: optional i32 replication_factor
+}
+
+struct ReadableBlobMeta {
+ 1: required SettableBlobMeta settable;
+ //This is some indication of a version of a BLOB.  The only guarantee is
+ // if the data changed in the blob the version will be different.
+ 2: required i64 version;
+}
+
+struct ListBlobsResult {
+ 1: required list<string> keys;
+ 2: required string session;
+}
+
+struct BeginDownloadResult {
+ //Same version as in ReadableBlobMeta
+ 1: required i64 version;
+ 2: required string session;
+ 3: optional i64 data_size;
+}
+```
diff --git a/documentation/dynamic-log-level-settings.md b/documentation/dynamic-log-level-settings.md
new file mode 100644
index 0000000..65b2d0a
--- /dev/null
+++ b/documentation/dynamic-log-level-settings.md
@@ -0,0 +1,45 @@
+---
+title: Dynamic Log Level Settings
+layout: documentation
+documentation: true
+---
+
+
+We have added the ability to set log level settings for a running topology using the Storm UI and the Storm CLI. 
+
+The log level settings apply the same way as you'd expect from log4j, as all we are doing is telling log4j to set the level of the logger you provide. If you set the log level of a parent logger, the child loggers start using that level (unless the children already have a more restrictive level). A timeout can optionally be provided (except for DEBUG mode, where it’s required in the UI) so that workers reset the log levels automatically after it expires.
+
+The revert action is triggered by a polling mechanism (every 30 seconds, but this is configurable), so you should expect the actual reset to happen anywhere between the timeout you provided and the timeout plus the polling interval. For example, with a 30-second timeout and 30-second polling, the reset occurs between 30 and 60 seconds after the level is set.
+
+Using the Storm UI
+-------------
+
+In order to set a level, click on a running topology, and then click on “Change Log Level” in the Topology Actions section.
+
+![Change Log Level dialog](images/dynamic_log_level_settings_1.png "Change Log Level dialog")
+
+Next, provide the logger name, select the level you expect (e.g. WARN), and a timeout in seconds (or 0 if not needed). Then click on “Add”.
+
+![After adding a log level setting](images/dynamic_log_level_settings_2.png "After adding a log level setting")
+
+To clear the log level, click on the “Clear” button. This reverts the log level back to what it was before you added the setting. The log level line will disappear from the UI.
+
+While there is a delay resetting log levels back, setting the log level in the first place is immediate (or as quickly as the message can travel from the UI/CLI to the workers by way of Nimbus and ZooKeeper).
+
+Using the CLI
+-------------
+
+Using the CLI, issue the command:
+
+`./bin/storm set_log_level [topology name] -l [logger name]=[LEVEL]:[TIMEOUT]`
+
+For example:
+
+`./bin/storm set_log_level my_topology -l ROOT=DEBUG:30`
+
+Sets the ROOT logger to DEBUG for 30 seconds.
+
+`./bin/storm set_log_level my_topology -r ROOT`
+
+Clears the ROOT logger dynamic log level, resetting it to its original value.
+
diff --git a/documentation/dynamic-worker-profiling.md b/documentation/dynamic-worker-profiling.md
new file mode 100644
index 0000000..f1b83e9
--- /dev/null
+++ b/documentation/dynamic-worker-profiling.md
@@ -0,0 +1,37 @@
+---
+title: Dynamic Worker Profiling
+layout: documentation
+documentation: true
+---
+
+
+In multi-tenant mode, Storm launches long-running JVMs across the cluster without giving users sudo access to the machines. Self-service Java heap dumps, jstacks, and Java profiling of these JVMs improve users' ability to analyze and debug issues while actively monitoring them.
+
+The Storm dynamic profiler lets you dynamically take heap dumps, jprofile snapshots, or jstacks for a worker JVM running on a stock cluster. It lets users download these dumps from the browser and analyze them with their favorite tools. The UI component page lists the workers for the component along with action buttons. The logviewer lets you download the dumps generated by these actions. Please see the screenshots for more information.
+
+Using the Storm UI
+-------------
+
+To request a heap dump or jstack, to start/stop/dump jprofile, or to restart a worker, click on a running topology, then click on the specific component. You can then select workers by checking the box next to any of the worker's executors in the Executors table, and click on “Start”, “Heap”, “Jstack”, or “Restart Worker” in the “Profiling and Debugging” section.
+
+![Selecting Workers](images/dynamic_profiling_debugging_4.png "Selecting Workers")
+
+In the Executors table, click the checkbox in the Actions column next to any executor, and any other executors belonging to the same worker are automatically selected. When the action has completed, any output files created will be available at the link in the Actions column.
+
+![Profiling and Debugging](images/dynamic_profiling_debugging_1.png "Profiling and Debugging")
+
+To start jprofile, provide a timeout in minutes (or 10 if not needed), then click on “Start”.
+
+![After starting jprofile for worker](images/dynamic_profiling_debugging_2.png "After jprofile for worker ")
+
+To stop the jprofile logging, click on the “Stop” button. This dumps the jprofile stats and stops the profiling. Refresh the page for the line to disappear from the UI.
+
+Click on "My Dump Files" to go to the logviewer UI for a list of worker-specific dump files.
+
+![Dump Files Links for worker](images/dynamic_profiling_debugging_3.png "Dump Files Links for worker")
+
+Configuration
+-------------
+
+The "worker.profiler.command" setting can be configured to point to a specific pluggable profiler or heap-dump command. The "worker.profiler.enabled" setting can be set to false if the plugin is not available or the JDK does not support Java Flight Recorder, in which case the worker JVM options will not include "worker.profiler.childopts". To use a different profiler plugin, change these configuration settings.
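+For example, a sketch of the relevant entries in conf/storm.yaml (the command value is illustrative; point it at whatever profiler script your cluster provides):
+
+    worker.profiler.enabled: true
+    worker.profiler.command: "flight.bash"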
+
diff --git a/documentation/flux.md b/documentation/flux.md
index 6f678d5..8f2b264 100644
--- a/documentation/flux.md
+++ b/documentation/flux.md
@@ -216,10 +216,10 @@
 switches to pass through to the `storm` command.
 
 For example, you can use the `storm` command switch `-c` to override a topology configuration property. The following
-example command will run Flux and override the `nimus.host` configuration:
+example command will run Flux and override the `nimbus.seeds` configuration:
 
 ```bash
-storm jar myTopology-0.1.0-SNAPSHOT.jar org.apache.storm.flux.Flux --remote my_config.yaml -c nimbus.host=localhost
+storm jar myTopology-0.1.0-SNAPSHOT.jar org.apache.storm.flux.Flux --remote my_config.yaml -c 'nimbus.seeds=["localhost"]'
 ```
 
 ### Sample output
diff --git a/documentation/images/dynamic_log_level_settings_1.png b/documentation/images/dynamic_log_level_settings_1.png
new file mode 100644
index 0000000..71d42e7
--- /dev/null
+++ b/documentation/images/dynamic_log_level_settings_1.png
Binary files differ
diff --git a/documentation/images/dynamic_log_level_settings_2.png b/documentation/images/dynamic_log_level_settings_2.png
new file mode 100644
index 0000000..d0e61a7
--- /dev/null
+++ b/documentation/images/dynamic_log_level_settings_2.png
Binary files differ
diff --git a/documentation/images/dynamic_profiling_debugging_1.png b/documentation/images/dynamic_profiling_debugging_1.png
new file mode 100644
index 0000000..6be1f86
--- /dev/null
+++ b/documentation/images/dynamic_profiling_debugging_1.png
Binary files differ
diff --git a/documentation/images/dynamic_profiling_debugging_2.png b/documentation/images/dynamic_profiling_debugging_2.png
new file mode 100644
index 0000000..342ad94
--- /dev/null
+++ b/documentation/images/dynamic_profiling_debugging_2.png
Binary files differ
diff --git a/documentation/images/dynamic_profiling_debugging_3.png b/documentation/images/dynamic_profiling_debugging_3.png
new file mode 100644
index 0000000..5706d7e
--- /dev/null
+++ b/documentation/images/dynamic_profiling_debugging_3.png
Binary files differ
diff --git a/documentation/images/dynamic_profiling_debugging_4.png b/documentation/images/dynamic_profiling_debugging_4.png
new file mode 100644
index 0000000..0afe9f4
--- /dev/null
+++ b/documentation/images/dynamic_profiling_debugging_4.png
Binary files differ
diff --git a/documentation/images/hdfs_blobstore.png b/documentation/images/hdfs_blobstore.png
new file mode 100644
index 0000000..11c5c10
--- /dev/null
+++ b/documentation/images/hdfs_blobstore.png
Binary files differ
diff --git a/documentation/images/local_blobstore.png b/documentation/images/local_blobstore.png
new file mode 100644
index 0000000..ff8001e
--- /dev/null
+++ b/documentation/images/local_blobstore.png
Binary files differ
diff --git a/documentation/images/nimbus_ha_blobstore.png b/documentation/images/nimbus_ha_blobstore.png
new file mode 100644
index 0000000..26e8c2a
--- /dev/null
+++ b/documentation/images/nimbus_ha_blobstore.png
Binary files differ
diff --git a/documentation/images/search-a-topology.png b/documentation/images/search-a-topology.png
new file mode 100644
index 0000000..8d6153c
--- /dev/null
+++ b/documentation/images/search-a-topology.png
Binary files differ
diff --git a/documentation/images/search-for-a-single-worker-log.png b/documentation/images/search-for-a-single-worker-log.png
new file mode 100644
index 0000000..8c6f423
--- /dev/null
+++ b/documentation/images/search-for-a-single-worker-log.png
Binary files differ
diff --git a/documentation/images/storm-sql-internal-example.png b/documentation/images/storm-sql-internal-example.png
new file mode 100644
index 0000000..74828d5
--- /dev/null
+++ b/documentation/images/storm-sql-internal-example.png
Binary files differ
diff --git a/documentation/images/storm-sql-internal-workflow.png b/documentation/images/storm-sql-internal-workflow.png
new file mode 100644
index 0000000..655c1c4
--- /dev/null
+++ b/documentation/images/storm-sql-internal-workflow.png
Binary files differ
diff --git a/documentation/nimbus-ha-design.md b/documentation/nimbus-ha-design.md
index 672eece..d0d6fd2 100644
--- a/documentation/nimbus-ha-design.md
+++ b/documentation/nimbus-ha-design.md
@@ -1,4 +1,9 @@
-#Highly Available Nimbus design proposal
+---
+title: Highly Available Nimbus Design
+layout: documentation
+documentation: true
+---
+
 ##Problem Statement:
 Currently the storm master aka nimbus, is a process that runs on a single machine under supervision. In most cases the 
 nimbus failure is transient and it is restarted by the supervisor. However sometimes when disks fail and networks 
diff --git a/documentation/storm-metrics-profiling-internal-actions.md b/documentation/storm-metrics-profiling-internal-actions.md
new file mode 100644
index 0000000..e549c0c
--- /dev/null
+++ b/documentation/storm-metrics-profiling-internal-actions.md
@@ -0,0 +1,70 @@
+# Storm Metrics for Profiling Various Storm Internal Actions
+
+With the addition of these metrics, Storm users can collect, view, and analyze the performance of various internal actions.  The actions that are profiled include Thrift RPC calls and HTTP requests within the Storm daemons. For instance, in the Storm Nimbus daemon, the following Thrift calls defined in the Nimbus$Iface are profiled:
+
+- submitTopology
+- submitTopologyWithOpts
+- killTopology
+- killTopologyWithOpts
+- activate
+- deactivate
+- rebalance
+- setLogConfig
+- getLogConfig
+
+Various HTTP GET and POST requests are marked for profiling as well, such as the GET and POST requests for the Storm UI daemon (ui/core.clj).
+To implement these metrics, the following packages are used: 
+- io.dropwizard.metrics
+- metrics-clojure
+
+## How it works
+
+By using the packages io.dropwizard.metrics and metrics-clojure (a Clojure wrapper for the metrics Java API), we can mark functions to profile by declaring (defmeter num-some-func-calls) and then adding (mark! num-some-func-calls) where the function is invoked. For example:
+
+    (defmeter num-some-func-calls)
+    (defn some-func [args]
+        (mark! num-some-func-calls)
+        (body))
+        
+Essentially, the mark! API call increments a counter that represents how many times a certain action occurred.  For instantaneous measurements, users can use gauges.  For example: 
+
+    (defgauge nimbus:num-supervisors
+         (fn [] (.size (.supervisors (:storm-cluster-state nimbus) nil))))
+         
+The above example will get the number of supervisors in the cluster.  This metric is not cumulative like the one previously discussed.
+
+A metrics reporting server also needs to be activated to collect the metrics. You can do this by calling the following function:
+
+    (defn start-metrics-reporters []
+        (jmx/start (jmx/reporter {})))
+
+## How to collect the metrics
+
+Metrics can be reported via JMX or HTTP.  A user can use JConsole or VisualVM to connect to the JVM process and view the stats.
+
+To view the metrics in a GUI, use VisualVM or JConsole.  Screenshot of using VisualVM for metrics: 
+
+![Viewing metrics with VisualVM](images/viewing_metrics_with_VisualVM.png)
+
+For detailed information regarding how to collect the metrics, please refer to: 
+
+https://dropwizard.github.io/metrics/3.1.0/getting-started/
+
+If you want to use JMX and view metrics through JConsole or VisualVM, remember to launch the JVM processes you want to profile with the correct JMX configuration.  For example, in Storm you would add the following to conf/storm.yaml:
+
+    nimbus.childopts: "-Xmx1024m -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=3333  -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
+    
+    ui.childopts: "-Xmx768m -Dcom.sun.management.jmxremote.port=3334 -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
+    
+    logviewer.childopts: "-Xmx128m -Dcom.sun.management.jmxremote.port=3335 -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
+    
+    drpc.childopts: "-Xmx768m -Dcom.sun.management.jmxremote.port=3336 -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
+   
+    supervisor.childopts: "-Xmx256m -Dcom.sun.management.jmxremote.port=3337 -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
+
+### Please Note:
+Since we shade all of the packages we use, additional plugins for collecting metrics might not work at this time.  Currently collecting the metrics via JMX is supported.
+   
+For more information about io.dropwizard.metrics and metrics-clojure packages please reference their original documentation:
+- https://dropwizard.github.io/metrics/3.1.0/
+- http://metrics-clojure.readthedocs.org/en/latest/
\ No newline at end of file
diff --git a/documentation/storm-sql-internal.md b/documentation/storm-sql-internal.md
new file mode 100644
index 0000000..08969c6
--- /dev/null
+++ b/documentation/storm-sql-internal.md
@@ -0,0 +1,55 @@
+---
+title: The Internals of Storm SQL
+layout: documentation
+documentation: true
+---
+
+This page describes the design and the implementation of the Storm SQL integration.
+
+## Overview
+
+SQL is a well-adopted yet complicated standard. Several projects, including Drill, Hive, Phoenix, and Spark, have invested significantly in their SQL layers. One of the main design goals of StormSQL is to leverage the existing investments of these projects. StormSQL leverages [Apache Calcite](https://calcite.apache.org) to implement the SQL standard. StormSQL focuses on compiling the SQL statements to Storm / Trident topologies so that they can be executed in Storm clusters.
+
+Figure 1 describes the workflow of executing a SQL query in StormSQL. First, users provide a sequence of SQL statements. StormSQL parses the SQL statements and translates them to a Calcite logical plan. A logical plan consists of a sequence of SQL logical operators that describe how the query should be executed, irrespective of the underlying execution engine. Some examples of logical operators include `TableScan`, `Filter`, `Projection` and `GroupBy`.
+
+<div align="center">
+<img title="Workflow of StormSQL" src="images/storm-sql-internal-workflow.png" style="max-width: 80rem"/>
+
+<p>Figure 1: Workflow of StormSQL.</p>
+</div>
+
+The next step is to compile the logical execution plan down to a physical execution plan. A physical plan consists of physical operators that describe how to execute the SQL query in *StormSQL*. Physical operators such as `Filter`, `Projection`, and `GroupBy` are directly mapped to operations in Trident topologies. StormSQL also compiles expressions in the SQL statements into Java bytecode and plugs it into the Trident topologies.
+
+Finally, StormSQL packages both the Java bytecode and the topology into a JAR and submits it to the Storm cluster. Storm schedules and executes the JAR in the same way it executes other Storm topologies.
+
+The following code blocks show an example query that filters and projects results from a Kafka stream.
+
+```
+CREATE EXTERNAL TABLE ORDERS (ID INT PRIMARY KEY, UNIT_PRICE INT, QUANTITY INT) LOCATION 'kafka://localhost:2181/brokers?topic=orders' ...
+
+CREATE EXTERNAL TABLE LARGE_ORDERS (ID INT PRIMARY KEY, TOTAL INT) LOCATION 'kafka://localhost:2181/brokers?topic=large_orders' ...
+
+INSERT INTO LARGE_ORDERS SELECT ID, UNIT_PRICE * QUANTITY AS TOTAL FROM ORDERS WHERE UNIT_PRICE * QUANTITY > 50
+```
+
+The first two SQL statements define the inputs and outputs of external data. Figure 2 describes the process by which StormSQL takes the last `SELECT` query and compiles it down to a Trident topology.
+
+<div align="center">
+<img title="Compiling the example query to Trident topology" src="images/storm-sql-internal-example.png" style="max-width: 80rem"/>
+
+<p>Figure 2: Compiling the example query to Trident topology.</p>
+</div>
+
+
+## Constraints of querying streaming tables
+
+There are several constraints when querying tables that represent a real-time data stream:
+
+* The `ORDER BY` clause cannot be applied to a stream.
+* There must be at least one monotonic field in the `GROUP BY` clause to allow StormSQL to bound the size of the buffer.
+
+For more information please refer to http://calcite.apache.org/docs/stream.html.
+
+## Dependency
+
+StormSQL does not ship the dependencies of external data sources in the packaged JAR. Users have to provide the dependencies in the `extlib` directory of the worker nodes.
diff --git a/documentation/storm-sql.md b/documentation/storm-sql.md
new file mode 100644
index 0000000..fd28cb2
--- /dev/null
+++ b/documentation/storm-sql.md
@@ -0,0 +1,97 @@
+---
+title: Storm SQL integration
+layout: documentation
+documentation: true
+---
+
+The Storm SQL integration allows users to run SQL queries over streaming data in Storm. Not only does the SQL interface allow faster development cycles for streaming analytics, but it also opens up opportunities to unify batch data processing, as in [Apache Hive](https://hive.apache.org), with real-time streaming data analytics.
+
+At a very high level, StormSQL compiles the SQL queries to [Trident](Trident-API-Overview.html) topologies and executes them in Storm clusters. This document describes how to use StormSQL as an end user. For more details on the design and implementation of StormSQL, please refer to [this page](storm-sql-internal.html).
+
+## Usage
+
+Run the ``storm sql`` command to compile the SQL statements into a Trident topology and submit it to the Storm cluster:
+
+```
+$ bin/storm sql <sql-file> <topo-name>
+```
+
+Here `sql-file` contains a list of SQL statements to be executed and `topo-name` is the name of the topology.
+
+
+## Supported Features
+
+The following features are supported in the current repository:
+
+* Streaming from and to external data sources
+* Filtering tuples
+* Projections
+
+## Specifying External Data Sources
+
+In StormSQL data is represented by external tables. Users can specify data sources using the `CREATE EXTERNAL TABLE` statement. The syntax of `CREATE EXTERNAL TABLE` closely follows the one defined in [Hive Data Definition Language](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL):
+
+```
+CREATE EXTERNAL TABLE table_name field_list
+    [ STORED AS
+      INPUTFORMAT input_format_classname
+      OUTPUTFORMAT output_format_classname
+    ]
+    LOCATION location
+    [ TBLPROPERTIES tbl_properties ]
+    [ AS select_stmt ]
+```
+
+You can find detailed explanations of the properties in [Hive Data Definition Language](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL). For example, the following statement specifies a Kafka spout and sink:
+
+```
+CREATE EXTERNAL TABLE FOO (ID INT PRIMARY KEY) LOCATION 'kafka://localhost:2181/brokers?topic=test' TBLPROPERTIES '{"producer":{"bootstrap.servers":"localhost:9092","acks":"1","key.serializer":"org.apache.storm.kafka.IntSerializer","value.serializer":"org.apache.storm.kafka.ByteBufferSerializer"}}'
+```
+
+## Plugging in External Data Sources
+
+Users plug in external data sources by implementing the `ISqlTridentDataSource` interface and registering the implementation using Java's service loader mechanism. The external data source is chosen based on the scheme of the table's URI. Please refer to the implementation of `storm-sql-kafka` for more details.
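+Registration through Java's service loader amounts to shipping a provider-configuration file under `META-INF/services/` in the data source JAR, named after the fully qualified name of the service interface and containing the implementation class name. A hypothetical sketch (both names below are placeholders, not actual StormSQL class names):
+
+```
+# file: META-INF/services/<fully.qualified.ServiceInterfaceName>
+com.example.sql.MyDataSourceProvider
+```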
+
+## Example: Filtering Kafka Stream
+
+Let's say there is a Kafka stream that represents transactions of orders. Each message in the stream contains the id of the order, the unit price of the product and the quantity ordered. The goal is to find the orders whose transactions are significant and to insert them into another Kafka stream for further analysis.
+
+The user can specify the following SQL statements in the SQL file:
+
+```
+CREATE EXTERNAL TABLE ORDERS (ID INT PRIMARY KEY, UNIT_PRICE INT, QUANTITY INT) LOCATION 'kafka://localhost:2181/brokers?topic=orders' TBLPROPERTIES '{"producer":{"bootstrap.servers":"localhost:9092","acks":"1","key.serializer":"org.apache.storm.kafka.IntSerializer","value.serializer":"org.apache.storm.kafka.ByteBufferSerializer"}}'
+CREATE EXTERNAL TABLE LARGE_ORDERS (ID INT PRIMARY KEY, TOTAL INT) LOCATION 'kafka://localhost:2181/brokers?topic=large_orders' TBLPROPERTIES '{"producer":{"bootstrap.servers":"localhost:9092","acks":"1","key.serializer":"org.apache.storm.kafka.IntSerializer","value.serializer":"org.apache.storm.kafka.ByteBufferSerializer"}}'
+INSERT INTO LARGE_ORDERS SELECT ID, UNIT_PRICE * QUANTITY AS TOTAL FROM ORDERS WHERE UNIT_PRICE * QUANTITY > 50
+```
+
+The first statement defines the table `ORDERS`, which represents the input stream. The `LOCATION` clause specifies the ZkHost (`localhost:2181`), the path of the brokers in ZooKeeper (`/brokers`) and the topic (`orders`). The `TBLPROPERTIES` clause specifies the configuration of the [KafkaProducer](http://kafka.apache.org/documentation.html#producerconfigs).
+The current implementation of `storm-sql-kafka` requires specifying both the `LOCATION` and the `TBLPROPERTIES` clauses even if the table is only read from or only written to.
+
+Similarly, the second statement defines the table `LARGE_ORDERS`, which represents the output stream. The third statement is a `SELECT` statement that defines the topology: it instructs StormSQL to filter all orders in the external table `ORDERS`, calculate the total price and insert the matching records into the Kafka stream specified by `LARGE_ORDERS`.
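+
+The filtering and projection expressed by the `SELECT` statement can be sketched in plain Python (a conceptual model of the semantics only, not how Trident executes it):
+
+```python
+# Conceptual model of INSERT INTO LARGE_ORDERS SELECT ... WHERE ... above:
+# keep orders whose total exceeds the threshold and project (ID, TOTAL).
+def filter_large_orders(orders, threshold=50):
+    for order in orders:
+        total = order["UNIT_PRICE"] * order["QUANTITY"]
+        if total > threshold:
+            yield {"ID": order["ID"], "TOTAL": total}
+
+orders = [
+    {"ID": 1, "UNIT_PRICE": 10, "QUANTITY": 2},   # total 20, filtered out
+    {"ID": 2, "UNIT_PRICE": 30, "QUANTITY": 3},   # total 90, kept
+]
+large = list(filter_large_orders(orders))
+```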
+
+To run this example, users need to include the data source (`storm-sql-kafka` in this case) and its dependencies in the class path. One approach is to put the required jars into the `extlib` directory:
+
+```
+$ cp curator-client-2.5.0.jar curator-framework-2.5.0.jar zookeeper-3.4.6.jar extlib/
+$ cp scala-library-2.10.4.jar kafka-clients-0.8.2.1.jar kafka_2.10-0.8.2.1.jar metrics-core-2.2.0.jar extlib/
+$ cp json-simple-1.1.1.jar extlib/
+$ cp jackson-annotations-2.6.0.jar extlib/
+$ cp storm-kafka-*.jar storm-sql-kafka-*.jar storm-sql-runtime-*.jar extlib/
+```
+
+The next step is to submit the SQL statements to StormSQL:
+
+```
+$ bin/storm sql order_filtering.sql order_filtering
+```
+
+You should now be able to see the `order_filtering` topology in the Storm UI.
+
+## Current Limitations
+
+Aggregation, windowing and joining tables are not yet implemented. Specifying parallelism hints in the topology is not yet supported; all processors have a parallelism hint of 1.
+
+Users also need to provide the dependencies of the external data sources in the `extlib` directory, otherwise the topology will fail to run because of a `ClassNotFoundException`.
+
+The current implementation of the Kafka connector in StormSQL assumes both the input and the output are in JSON format. The connector does not yet recognize the `INPUTFORMAT` and `OUTPUTFORMAT` clauses.
diff --git a/documentation/ui-rest-api.md b/documentation/ui-rest-api.md
new file mode 100644
index 0000000..d40a9ba
--- /dev/null
+++ b/documentation/ui-rest-api.md
@@ -0,0 +1,1017 @@
+---
+title: Storm UI REST API
+layout: documentation
+documentation: true
+---
+
+
+The Storm UI daemon provides a REST API that allows you to interact with a Storm cluster, which includes retrieving
+metrics data and configuration information as well as management operations such as starting or stopping topologies.
+
+
+# Data format
+
+The REST API returns JSON responses and supports JSONP.
+Clients can pass a `callback` query parameter to wrap the JSON in a callback function.
+
+
+# Using the UI REST API
+
+_Note: It is recommended to ignore undocumented elements in the JSON response because future versions of Storm may not_
+_support those elements anymore._
+
+
+## REST API Base URL
+
+The REST API is part of the UI daemon of Storm (started by `storm ui`) and thus runs on the same host and port as the
+Storm UI (the UI daemon is often run on the same host as the Nimbus daemon).  The port is configured by `ui.port`,
+which is set to `8080` by default (see [defaults.yaml](conf/defaults.yaml)).
+
+The API base URL would thus be:
+
+    http://<ui-host>:<ui-port>/api/v1/...
+
+You can use a tool such as `curl` to talk to the REST API:
+
+    # Request the cluster configuration.
+    # Note: We assume ui.port is configured to the default value of 8080.
+    $ curl http://<ui-host>:8080/api/v1/cluster/configuration
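+
+Since every endpoint hangs off the same base URL, a tiny helper is enough to build request URLs programmatically (a sketch; the host name is a placeholder):
+
+```python
+# Build Storm UI REST API URLs from host, port and endpoint path.
+def api_url(host, path, port=8080):
+    return "http://{}:{}/api/v1/{}".format(host, port, path.lstrip("/"))
+
+url = api_url("ui-host", "cluster/configuration")
+```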
+
+## Impersonating a user in a secure environment
+
+In a secure environment an authenticated user can impersonate another user. To impersonate a user the caller must pass
+the `doAsUser` parameter or header with its value set to the user the request should be performed as. Please see SECURITY.md
+to learn how to set up impersonation ACLs and authorization. The REST API uses the same configs and ACLs that
+are used by Nimbus.
+
+Examples:
+
+```no-highlight
+ 1. http://ui-daemon-host-name:8080/api/v1/topology/wordcount-1-1425844354\?doAsUser=testUSer1
+ 2. curl 'http://localhost:8080/api/v1/topology/wordcount-1-1425844354/activate' -X POST -H 'doAsUser:testUSer1'
+```
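+
+With an HTTP client, impersonation is just an extra query parameter or header on an otherwise normal request. A minimal sketch using Python's standard library (the request is constructed but not sent; host and user are the example values above):
+
+```python
+# Prepare (but do not send) an impersonated POST request: the doAsUser
+# header carries the user the request should be performed as.
+from urllib.request import Request
+
+req = Request(
+    "http://localhost:8080/api/v1/topology/wordcount-1-1425844354/activate",
+    method="POST",
+    headers={"doAsUser": "testUSer1"},
+)
+```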
+
+## GET Operations
+
+### /api/v1/cluster/configuration (GET)
+
+Returns the cluster configuration.
+
+Sample response (does not include all the data fields):
+
+```json
+  {
+    "dev.zookeeper.path": "/tmp/dev-storm-zookeeper",
+    "topology.tick.tuple.freq.secs": null,
+    "topology.builtin.metrics.bucket.size.secs": 60,
+    "topology.fall.back.on.java.serialization": true,
+    "topology.max.error.report.per.interval": 5,
+    "zmq.linger.millis": 5000,
+    "topology.skip.missing.kryo.registrations": false,
+    "storm.messaging.netty.client_worker_threads": 1,
+    "ui.childopts": "-Xmx768m",
+    "storm.zookeeper.session.timeout": 20000,
+    "nimbus.reassign": true,
+    "topology.trident.batch.emit.interval.millis": 500,
+    "storm.messaging.netty.flush.check.interval.ms": 10,
+    "nimbus.monitor.freq.secs": 10,
+    "logviewer.childopts": "-Xmx128m",
+    "java.library.path": "/usr/local/lib:/opt/local/lib:/usr/lib",
+    "topology.executor.send.buffer.size": 1024,
+    }
+```
+
+### /api/v1/cluster/summary (GET)
+
+Returns cluster summary information such as nimbus uptime or number of supervisors.
+
+Response fields:
+
+|Field  |Value|Description
+|---	|---	|---
+|stormVersion|String| Storm version|
+|supervisors|Integer| Number of supervisors running|
+|topologies| Integer| Number of topologies running| 
+|slotsTotal| Integer|Total number of available worker slots|
+|slotsUsed| Integer| Number of worker slots used|
+|slotsFree| Integer |Number of worker slots available|
+|executorsTotal| Integer |Total number of executors|
+|tasksTotal| Integer |Total tasks|
+
+Sample response:
+
+```json
+   {
+    "stormVersion": "0.9.2-incubating-SNAPSHOT",
+    "supervisors": 1,
+    "slotsTotal": 4,
+    "slotsUsed": 3,
+    "slotsFree": 1,
+    "executorsTotal": 28,
+    "tasksTotal": 28
+    }
+```
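+
+The slot counters in the summary are related by simple arithmetic (`slotsTotal = slotsUsed + slotsFree`), which makes this endpoint handy for quick capacity checks. For example, computing slot utilization from a response parsed as JSON (using the sample values above):
+
+```python
+import json
+
+# Abridged /api/v1/cluster/summary response, as in the sample above.
+summary = json.loads("""
+{"supervisors": 1, "slotsTotal": 4, "slotsUsed": 3,
+ "slotsFree": 1, "executorsTotal": 28, "tasksTotal": 28}
+""")
+
+# slotsTotal is always the sum of used and free slots.
+assert summary["slotsUsed"] + summary["slotsFree"] == summary["slotsTotal"]
+slot_utilization = summary["slotsUsed"] / summary["slotsTotal"]  # 0.75
+```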
+
+### /api/v1/supervisor/summary (GET)
+
+Returns summary information for all supervisors.
+
+Response fields:
+
+|Field  |Value|Description|
+|---	|---	|---
+|id| String | Supervisor's id|
+|host| String| Supervisor's host name|
+|uptime| String| Shows how long the supervisor has been running|
+|uptimeSeconds| Integer| Shows how long the supervisor has been running, in seconds|
+|slotsTotal| Integer| Total number of available worker slots for this supervisor|
+|slotsUsed| Integer| Number of worker slots used on this supervisor|
+|totalMem| Double| Total memory capacity on this supervisor|
+|totalCpu| Double| Total CPU capacity on this supervisor|
+|usedMem| Double| Used memory capacity on this supervisor|
+|usedCpu| Double| Used CPU capacity on this supervisor|
+
+Sample response:
+
+```json
+{
+    "supervisors": [
+        {
+            "id": "0b879808-2a26-442b-8f7d-23101e0c3696",
+            "host": "10.11.1.7",
+            "uptime": "5m 58s",
+            "uptimeSeconds": 358,
+            "slotsTotal": 4,
+            "slotsUsed": 3,
+            "totalMem": 3000,
+            "totalCpu": 400,
+            "usedMem": 1280,
+            "usedCpu": 160
+        }
+    ],
+    "schedulerDisplayResource": true
+}
+```
+
+### /api/v1/nimbus/summary (GET)
+
+Returns summary information for all nimbus hosts.
+
+Response fields:
+
+|Field  |Value|Description|
+|---	|---	|---
+|host| String | Nimbus' host name|
+|port| int| Nimbus' port number|
+|status| String| Possible values are Leader, Not a Leader, Dead|
+|nimbusUpTime| String| Shows how long the nimbus has been running|
+|nimbusUpTimeSeconds| String| Shows how long the nimbus has been running, in seconds|
+|nimbusLogLink| String| Logviewer url to view the nimbus.log|
+|version| String| Version of storm this nimbus host is running|
+
+Sample response:
+
+```json
+{
+    "nimbuses":[
+        {
+            "host":"192.168.202.1",
+            "port":6627,
+            "nimbusLogLink":"http:\/\/192.168.202.1:8000\/log?file=nimbus.log",
+            "status":"Leader",
+            "version":"0.10.0-SNAPSHOT",
+            "nimbusUpTime":"3m 33s",
+            "nimbusUpTimeSeconds":"213"
+        }
+    ]
+}
+```
+
+### /api/v1/history/summary (GET)
+
+Returns a list of all running topologies' IDs submitted by the current user.
+
+Response fields:
+
+|Field  |Value | Description|
+|---	|---	|---
+|topo-history| List| List of Topologies' IDs|
+
+Sample response:
+
+```json
+{
+    "topo-history":[
+        "wc6-1-1446571009",
+        "wc8-2-1446587178"
+     ]
+}
+```
+
+### /api/v1/topology/summary (GET)
+
+Returns summary information for all topologies.
+
+Response fields:
+
+|Field  |Value | Description|
+|---	|---	|---
+|id| String| Topology Id|
+|name| String| Topology Name|
+|status| String| Topology Status|
+|uptime| String|  Shows how long the topology has been running|
+|uptimeSeconds| Integer|  Shows how long the topology has been running, in seconds|
+|tasksTotal| Integer |Total number of tasks for this topology|
+|workersTotal| Integer |Number of workers used for this topology|
+|executorsTotal| Integer |Number of executors used for this topology|
+|replicationCount| Integer |Number of nimbus hosts on which this topology code is replicated|
+|requestedMemOnHeap| Double|Requested On-Heap Memory by User (MB)
+|requestedMemOffHeap| Double|Requested Off-Heap Memory by User (MB)|
+|requestedTotalMem| Double|Requested Total Memory by User (MB)|
+|requestedCpu| Double|Requested CPU by User (%)|
+|assignedMemOnHeap| Double|Assigned On-Heap Memory by Scheduler (MB)|
+|assignedMemOffHeap| Double|Assigned Off-Heap Memory by Scheduler (MB)|
+|assignedTotalMem| Double|Assigned Total Memory by Scheduler (MB)|
+|assignedCpu| Double|Assigned CPU by Scheduler (%)|
+
+Sample response:
+
+```json
+{
+    "topologies": [
+        {
+            "id": "WordCount3-1-1402960825",
+            "name": "WordCount3",
+            "status": "ACTIVE",
+            "uptime": "6m 5s",
+            "uptimeSeconds": 365,
+            "tasksTotal": 28,
+            "workersTotal": 3,
+            "executorsTotal": 28,
+            "replicationCount": 1,
+            "requestedMemOnHeap": 640,
+            "requestedMemOffHeap": 128,
+            "requestedTotalMem": 768,
+            "requestedCpu": 80,
+            "assignedMemOnHeap": 640,
+            "assignedMemOffHeap": 128,
+            "assignedTotalMem": 768,
+            "assignedCpu": 80
+        }
+    ],
+    "schedulerDisplayResource": true
+}
+```
+
+### /api/v1/topology-workers/:id (GET)
+
+Returns the workers' information (host and port) for a topology.
+
+Response fields:
+
+|Field  |Value | Description|
+|---	|---	|---
+|hostPortList| List| Workers' host and port information for the topology|
+|logviewerPort| Integer| Logviewer port|
+
+Sample response:
+
+```json
+{
+    "hostPortList":[
+            {
+                "host":"192.168.202.2",
+                "port":6701
+            },
+            {
+                "host":"192.168.202.2",
+                "port":6702
+            },
+            {
+                "host":"192.168.202.3",
+                "port":6700
+            }
+        ],
+    "logviewerPort":8000
+}
+```
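+
+Combining `hostPortList` with `logviewerPort` is enough to link directly to each worker's log. A sketch, assuming the logviewer's `log?file=worker-<port>.log` URL format (the same format that appears in the `workerLogLink` fields elsewhere in this API):
+
+```python
+import json
+
+# Abridged /api/v1/topology-workers/:id response, as in the sample above.
+resp = json.loads("""
+{"hostPortList": [{"host": "192.168.202.2", "port": 6701},
+                  {"host": "192.168.202.3", "port": 6700}],
+ "logviewerPort": 8000}
+""")
+
+# Build one log URL per worker from its host, the logviewer port and
+# the worker's own port.
+log_links = [
+    "http://{}:{}/log?file=worker-{}.log".format(
+        w["host"], resp["logviewerPort"], w["port"])
+    for w in resp["hostPortList"]
+]
+```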
+
+### /api/v1/topology/:id (GET)
+
+Returns topology information and statistics.  Substitute id with topology id.
+
+Request parameters:
+
+|Parameter |Value   |Description  |
+|----------|--------|-------------|
+|id   	   |String (required)| Topology Id  |
+|window    |String. Default value :all-time| Window duration for metrics in seconds|
+|sys       |String. Values 1 or 0. Default value 0| Controls including sys stats part of the response|
+
+
+Response fields:
+
+|Field  |Value |Description|
+|---	|---	|---
+|id| String| Topology Id|
+|name| String |Topology Name|
+|uptime| String |How long the topology has been running|
+|uptimeSeconds| Integer |How long the topology has been running in seconds|
+|status| String |Current status of the topology, e.g. "ACTIVE"|
+|tasksTotal| Integer |Total number of tasks for this topology|
+|workersTotal| Integer |Number of workers used for this topology|
+|executorsTotal| Integer |Number of executors used for this topology|
+|msgTimeout| Integer | Number of seconds a tuple has before the spout considers it failed |
+|windowHint| String | window param value in "hh mm ss" format. Default value is "All Time"|
+|schedulerDisplayResource| Boolean | Whether to display scheduler resource information|
+|topologyStats| Array | Array of all the topology related stats per time window|
+|topologyStats.windowPretty| String |Duration passed in HH:MM:SS format|
+|topologyStats.window| String |User requested time window for metrics|
+|topologyStats.emitted| Long |Number of messages emitted in given window|
+|topologyStats.transferred| Long |Number of messages transferred in given window|
+|topologyStats.completeLatency| String (double value returned in String format) |Total latency for processing the message|
+|topologyStats.acked| Long |Number of messages acked in given window|
+|topologyStats.failed| Long |Number of messages failed in given window|
+|spouts| Array | Array of all the spout components in the topology|
+|spouts.spoutId| String |Spout id|
+|spouts.executors| Integer |Number of executors for the spout|
+|spouts.emitted| Long |Number of messages emitted in given window |
+|spouts.completeLatency| String (double value returned in String format) |Total latency for processing the message|
+|spouts.transferred| Long |Total number of messages transferred in given window|
+|spouts.tasks| Integer |Total number of tasks for the spout|
+|spouts.lastError| String |Shows the last error that occurred in the spout|
+|spouts.errorLapsedSecs| Integer | Number of seconds elapsed since the last error occurred in the spout|
+|spouts.errorWorkerLogLink| String | Link to the worker log that reported the exception |
+|spouts.acked| Long |Number of messages acked|
+|spouts.failed| Long |Number of messages failed|
+|bolts| Array | Array of bolt components in the topology|
+|bolts.boltId| String |Bolt id|
+|bolts.capacity| String (double value returned in String format) |This value indicates number of messages executed * average execute latency / time window|
+|bolts.processLatency| String (double value returned in String format)  |Average time of the bolt to ack a message after it was received|
+|bolts.executeLatency| String (double value returned in String format) |Average time to run the execute method of the bolt|
+|bolts.executors| Integer |Number of executor tasks in the bolt component|
+|bolts.tasks| Integer |Number of instances of the bolt|
+|bolts.acked| Long |Number of tuples acked by the bolt|
+|bolts.failed| Long |Number of tuples failed by the bolt|
+|bolts.lastError| String |Shows the last error that occurred in the bolt|
+|bolts.errorLapsedSecs| Integer |Number of seconds elapsed since the last error occurred in the bolt|
+|bolts.errorWorkerLogLink| String | Link to the worker log that reported the exception |
+|bolts.emitted| Long |Number of tuples emitted|
+|replicationCount| Integer |Number of nimbus hosts on which this topology code is replicated|
+
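+The `bolts.capacity` value above approximates the fraction of the time window the bolt spent executing: messages executed times average execute latency, divided by the window length. A sketch of that calculation (the unit conversion is an assumption: latency in milliseconds, window in seconds, as elsewhere in this API):
+
+```python
+# capacity ~= executed * average execute latency / window length,
+# i.e. the fraction of the window the bolt was busy executing.
+def capacity(executed, execute_latency_ms, window_secs):
+    return (executed * execute_latency_ms) / (window_secs * 1000.0)
+
+# A bolt that executed 6000 tuples at 10 ms each over a 600 s window
+# was busy 60 s out of 600 s:
+c = capacity(6000, 10.0, 600)
+```
+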
+Examples:
+
+```no-highlight
+ 1. http://ui-daemon-host-name:8080/api/v1/topology/WordCount3-1-1402960825
+ 2. http://ui-daemon-host-name:8080/api/v1/topology/WordCount3-1-1402960825?sys=1
+ 3. http://ui-daemon-host-name:8080/api/v1/topology/WordCount3-1-1402960825?window=600
+```
+
+Sample response:
+
+```json
+ {
+    "name": "WordCount3",
+    "id": "WordCount3-1-1402960825",
+    "workersTotal": 3,
+    "window": "600",
+    "status": "ACTIVE",
+    "tasksTotal": 28,
+    "executorsTotal": 28,
+    "uptime": "29m 19s",
+    "uptimeSeconds": 1759,
+    "msgTimeout": 30,
+    "windowHint": "10m 0s",
+    "schedulerDisplayResource": true,
+    "topologyStats": [
+        {
+            "windowPretty": "10m 0s",
+            "window": "600",
+            "emitted": 397960,
+            "transferred": 213380,
+            "completeLatency": "0.000",
+            "acked": 213460,
+            "failed": 0
+        },
+        {
+            "windowPretty": "3h 0m 0s",
+            "window": "10800",
+            "emitted": 1190260,
+            "transferred": 638260,
+            "completeLatency": "0.000",
+            "acked": 638280,
+            "failed": 0
+        },
+        {
+            "windowPretty": "1d 0h 0m 0s",
+            "window": "86400",
+            "emitted": 1190260,
+            "transferred": 638260,
+            "completeLatency": "0.000",
+            "acked": 638280,
+            "failed": 0
+        },
+        {
+            "windowPretty": "All time",
+            "window": ":all-time",
+            "emitted": 1190260,
+            "transferred": 638260,
+            "completeLatency": "0.000",
+            "acked": 638280,
+            "failed": 0
+        }
+    ],
+    "spouts": [
+        {
+            "executors": 5,
+            "emitted": 28880,
+            "completeLatency": "0.000",
+            "transferred": 28880,
+            "acked": 0,
+            "spoutId": "spout",
+            "tasks": 5,
+            "lastError": "",
+            "errorLapsedSecs": null,
+            "failed": 0
+        }
+    ],
+        "bolts": [
+        {
+            "executors": 12,
+            "emitted": 184580,
+            "transferred": 0,
+            "acked": 184640,
+            "executeLatency": "0.048",
+            "tasks": 12,
+            "executed": 184620,
+            "processLatency": "0.043",
+            "boltId": "count",
+            "lastError": "",
+            "errorLapsedSecs": null,
+            "capacity": "0.003",
+            "failed": 0
+        },
+        {
+            "executors": 8,
+            "emitted": 184500,
+            "transferred": 184500,
+            "acked": 28820,
+            "executeLatency": "0.024",
+            "tasks": 8,
+            "executed": 28780,
+            "processLatency": "2.112",
+            "boltId": "split",
+            "lastError": "",
+            "errorLapsedSecs": null,
+            "capacity": "0.000",
+            "failed": 0
+        }
+    ],
+    "configuration": {
+        "storm.id": "WordCount3-1-1402960825",
+        "dev.zookeeper.path": "/tmp/dev-storm-zookeeper",
+        "topology.tick.tuple.freq.secs": null,
+        "topology.builtin.metrics.bucket.size.secs": 60,
+        "topology.fall.back.on.java.serialization": true,
+        "topology.max.error.report.per.interval": 5,
+        "zmq.linger.millis": 5000,
+        "topology.skip.missing.kryo.registrations": false,
+        "storm.messaging.netty.client_worker_threads": 1,
+        "ui.childopts": "-Xmx768m",
+        "storm.zookeeper.session.timeout": 20000,
+        "nimbus.reassign": true,
+        "topology.trident.batch.emit.interval.millis": 500,
+        "storm.messaging.netty.flush.check.interval.ms": 10,
+        "nimbus.monitor.freq.secs": 10,
+        "logviewer.childopts": "-Xmx128m",
+        "java.library.path": "/usr/local/lib:/opt/local/lib:/usr/lib",
+        "topology.executor.send.buffer.size": 1024,
+        "storm.local.dir": "storm-local",
+        "storm.messaging.netty.buffer_size": 5242880,
+        "supervisor.worker.start.timeout.secs": 120,
+        "topology.enable.message.timeouts": true,
+        "nimbus.cleanup.inbox.freq.secs": 600,
+        "nimbus.inbox.jar.expiration.secs": 3600,
+        "drpc.worker.threads": 64,
+        "topology.worker.shared.thread.pool.size": 4,
+        "nimbus.seeds": [
+            "hw10843.local"
+        ],
+        "storm.messaging.netty.min_wait_ms": 100,
+        "storm.zookeeper.port": 2181,
+        "transactional.zookeeper.port": null,
+        "topology.executor.receive.buffer.size": 1024,
+        "transactional.zookeeper.servers": null,
+        "storm.zookeeper.root": "/storm",
+        "storm.zookeeper.retry.intervalceiling.millis": 30000,
+        "supervisor.enable": true,
+        "storm.messaging.netty.server_worker_threads": 1
+    },
+    "replicationCount": 1
+}
+```
+
+
+### /api/v1/topology/:id/component/:component (GET)
+
+Returns detailed metrics and executor information.
+
+Request parameters:
+
+|Parameter |Value   |Description  |
+|----------|--------|-------------|
+|id   	   |String (required)| Topology Id  |
+|component |String (required)| Component Id |
+|window    |String. Default value :all-time| window duration for metrics in seconds|
+|sys       |String. Values 1 or 0. Default value 0| controls including sys stats part of the response|
+
+Response fields:
+
+|Field  |Value |Description|
+|---	|---	|---
+|id   | String | Component id|
+|name | String | Topology name|
+|componentType | String | component type: SPOUT or BOLT|
+|windowHint| String | window param value in "hh mm ss" format. Default value is "All Time"|
+|executors| Integer |Number of executor tasks in the component|
+|componentErrors| Array of Errors | List of component errors|
+|componentErrors.errorTime| Long | Timestamp when the exception occurred (Prior to 0.11.0, this field was named 'time'.)|
+|componentErrors.errorHost| String | host name for the error|
+|componentErrors.errorPort| String | port for the error|
+|componentErrors.error| String |Shows the error that occurred in the component|
+|componentErrors.errorLapsedSecs| Integer | Number of seconds elapsed since the error occurred in the component |
+|componentErrors.errorWorkerLogLink| String | Link to the worker log that reported the exception |
+|topologyId| String | Topology id|
+|tasks| Integer |Number of instances of component|
+|window    |String. Default value "All Time" | window duration for metrics in seconds|
+|spoutSummary or boltStats| Array |Array of component stats. **Please note this element tag can be spoutSummary or boltStats depending on the componentType**|
+|spoutSummary.windowPretty| String |Duration passed in HH:MM:SS format|
+|spoutSummary.window| String | window duration for metrics in seconds|
+|spoutSummary.emitted| Long |Number of messages emitted in given window |
+|spoutSummary.completeLatency| String (double value returned in String format) |Total latency for processing the message|
+|spoutSummary.transferred| Long |Total number of messages transferred in given window|
+|spoutSummary.acked| Long |Number of messages acked|
+|spoutSummary.failed| Long |Number of messages failed|
+|boltStats.windowPretty| String |Duration passed in HH:MM:SS format|
+|boltStats.window| String | window duration for metrics in seconds|
+|boltStats.transferred| Long |Total number of messages transferred in given window|
+|boltStats.processLatency| String (double value returned in String format)  |Average time of the bolt to ack a message after it was received|
+|boltStats.acked| Long |Number of messages acked|
+|boltStats.failed| Long |Number of messages failed|
+|profilingAndDebuggingCapable| Boolean |true if there is support for Profiling and Debugging Actions|
+|profileActionEnabled| Boolean |true if worker profiling (Java Flight Recorder) is enabled|
+|profilerActive| Array |Array of currently active Profiler Actions|
+
+
+Examples:
+
+```no-highlight
+1. http://ui-daemon-host-name:8080/api/v1/topology/WordCount3-1-1402960825/component/spout
+2. http://ui-daemon-host-name:8080/api/v1/topology/WordCount3-1-1402960825/component/spout?sys=1
+3. http://ui-daemon-host-name:8080/api/v1/topology/WordCount3-1-1402960825/component/spout?window=600
+```
+
+Sample response:
+
+```json
+{
+    "name": "WordCount3",
+    "id": "spout",
+    "componentType": "spout",
+    "windowHint": "10m 0s",
+    "executors": 5,
+    "componentErrors":[{"errorTime": 1406006074000,
+                        "errorHost": "10.11.1.70",
+                        "errorPort": 6701,
+                        "errorWorkerLogLink": "http://10.11.1.7:8000/log?file=worker-6701.log",
+                        "errorLapsedSecs": 16,
+                        "error": "java.lang.RuntimeException: java.lang.StringIndexOutOfBoundsException: Some Error\n\tat backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:128)\n\tat backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:99)\n\tat backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80)\n\tat backtype...more.."
+    }],
+    "topologyId": "WordCount3-1-1402960825",
+    "tasks": 5,
+    "window": "600",
+    "profilerActive": [
+        {
+            "host": "10.11.1.70",
+            "port": "6701",
+            "dumplink":"http:\/\/10.11.1.70:8000\/dumps\/ex-1-1452718803\/10.11.1.70%3A6701",
+            "timestamp":"576328"
+        }
+    ],
+    "profilingAndDebuggingCapable": true,
+    "profileActionEnabled": true,
+    "spoutSummary": [
+        {
+            "windowPretty": "10m 0s",
+            "window": "600",
+            "emitted": 28500,
+            "transferred": 28460,
+            "completeLatency": "0.000",
+            "acked": 0,
+            "failed": 0
+        },
+        {
+            "windowPretty": "3h 0m 0s",
+            "window": "10800",
+            "emitted": 127640,
+            "transferred": 127440,
+            "completeLatency": "0.000",
+            "acked": 0,
+            "failed": 0
+        },
+        {
+            "windowPretty": "1d 0h 0m 0s",
+            "window": "86400",
+            "emitted": 127640,
+            "transferred": 127440,
+            "completeLatency": "0.000",
+            "acked": 0,
+            "failed": 0
+        },
+        {
+            "windowPretty": "All time",
+            "window": ":all-time",
+            "emitted": 127640,
+            "transferred": 127440,
+            "completeLatency": "0.000",
+            "acked": 0,
+            "failed": 0
+        }
+    ],
+    "outputStats": [
+        {
+            "stream": "__metrics",
+            "emitted": 40,
+            "transferred": 0,
+            "completeLatency": "0",
+            "acked": 0,
+            "failed": 0
+        },
+        {
+            "stream": "default",
+            "emitted": 28460,
+            "transferred": 28460,
+            "completeLatency": "0",
+            "acked": 0,
+            "failed": 0
+        }
+    ],
+    "executorStats": [
+        {
+            "workerLogLink": "http://10.11.1.7:8000/log?file=worker-6701.log",
+            "emitted": 5720,
+            "port": 6701,
+            "completeLatency": "0.000",
+            "transferred": 5720,
+            "host": "10.11.1.7",
+            "acked": 0,
+            "uptime": "43m 4s",
+            "uptimeSeconds": 2584,
+            "id": "[24-24]",
+            "failed": 0
+        },
+        {
+            "workerLogLink": "http://10.11.1.7:8000/log?file=worker-6703.log",
+            "emitted": 5700,
+            "port": 6703,
+            "completeLatency": "0.000",
+            "transferred": 5700,
+            "host": "10.11.1.7",
+            "acked": 0,
+            "uptime": "42m 57s",
+            "uptimeSeconds": 2577,
+            "id": "[25-25]",
+            "failed": 0
+        },
+        {
+            "workerLogLink": "http://10.11.1.7:8000/log?file=worker-6702.log",
+            "emitted": 5700,
+            "port": 6702,
+            "completeLatency": "0.000",
+            "transferred": 5680,
+            "host": "10.11.1.7",
+            "acked": 0,
+            "uptime": "42m 57s",
+            "uptimeSeconds": 2577,
+            "id": "[26-26]",
+            "failed": 0
+        },
+        {
+            "workerLogLink": "http://10.11.1.7:8000/log?file=worker-6701.log",
+            "emitted": 5700,
+            "port": 6701,
+            "completeLatency": "0.000",
+            "transferred": 5680,
+            "host": "10.11.1.7",
+            "acked": 0,
+            "uptime": "43m 4s",
+            "uptimeSeconds": 2584,
+            "id": "[27-27]",
+            "failed": 0
+        },
+        {
+            "workerLogLink": "http://10.11.1.7:8000/log?file=worker-6703.log",
+            "emitted": 5680,
+            "port": 6703,
+            "completeLatency": "0.000",
+            "transferred": 5680,
+            "host": "10.11.1.7",
+            "acked": 0,
+            "uptime": "42m 57s",
+            "uptimeSeconds": 2577,
+            "id": "[28-28]",
+            "failed": 0
+        }
+    ]
+}
+```
+
+## Profiling and Debugging GET Operations
+
+###  /api/v1/topology/:id/profiling/start/:host-port/:timeout (GET)
+
+Request to start profiler on worker with timeout. Returns status and link to profiler artifacts for worker.
+
+|Parameter |Value   |Description  |
+|----------|--------|-------------|
+|id   	   |String (required)| Topology Id  |
+|host-port |String (required)| Worker Id |
+|timeout |String (required)| Time out for profiler to stop in minutes |
+
+Response fields:
+
+|Field  |Value |Description|
+|-----	|----- |-----------|
+|id   | String | Worker id|
+|status | String | Response Status |
+|timeout | String | Requested timeout
+|dumplink | String | Link to logviewer URL for worker profiler documents.|
+
+Examples:
+
+```no-highlight
+1. http://ui-daemon-host-name:8080/api/v1/topology/wordcount-1-1446614150/profiling/start/10.11.1.7:6701/10
+2. http://ui-daemon-host-name:8080/api/v1/topology/wordcount-1-1446614150/profiling/start/10.11.1.7:6701/5
+3. http://ui-daemon-host-name:8080/api/v1/topology/wordcount-1-1446614150/profiling/start/10.11.1.7:6701/20
+```
+
+Sample response:
+
+```json
+{
+   "status": "ok",
+   "id": "10.11.1.7:6701",
+   "timeout": "10",
+   "dumplink": "http:\/\/10.11.1.7:8000\/dumps\/wordcount-1-1446614150\/10.11.1.7%3A6701"
+}
+```
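+
+The profiling and debugging endpoints all share the `/topology/:id/profiling/<action>/:host-port` shape (with an extra timeout segment for `start`), so request URLs can be generated uniformly. A sketch using the example topology and worker from above:
+
+```python
+# Build profiling action URLs: /topology/:id/profiling/<action>/:host-port
+def profiling_url(base, topo_id, action, host_port, timeout=None):
+    url = "{}/api/v1/topology/{}/profiling/{}/{}".format(
+        base, topo_id, action, host_port)
+    if timeout is not None:       # only the "start" action takes a timeout
+        url += "/{}".format(timeout)
+    return url
+
+start = profiling_url("http://ui-daemon-host-name:8080",
+                      "wordcount-1-1446614150",
+                      "start", "10.11.1.7:6701", timeout=10)
+```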
+
+###  /api/v1/topology/:id/profiling/dumpprofile/:host-port (GET)
+
+Request to dump profiler recording on worker. Returns status and worker id for the request.
+
+|Parameter |Value   |Description  |
+|----------|--------|-------------|
+|id   	   |String (required)| Topology Id  |
+|host-port |String (required)| Worker Id |
+
+Response fields:
+
+|Field  |Value |Description|
+|-----	|----- |-----------|
+|id   | String | Worker id|
+|status | String | Response Status |
+
+Examples:
+
+```no-highlight
+1. http://ui-daemon-host-name:8080/api/v1/topology/wordcount-1-1446614150/profiling/dumpprofile/10.11.1.7:6701
+```
+
+Sample response:
+
+```json
+{
+   "status": "ok",
+   "id": "10.11.1.7:6701"
+}
+```
+
+###  /api/v1/topology/:id/profiling/stop/:host-port (GET)
+
+Request to stop profiler on worker. Returns status and worker id for the request.
+
+|Parameter |Value   |Description  |
+|----------|--------|-------------|
+|id   	   |String (required)| Topology Id  |
+|host-port |String (required)| Worker Id |
+
+Response fields:
+
+|Field  |Value |Description|
+|-----	|----- |-----------|
+|id   | String | Worker id|
+|status | String | Response Status |
+
+Examples:
+
+```no-highlight
+1. http://ui-daemon-host-name:8080/api/v1/topology/wordcount-1-1446614150/profiling/stop/10.11.1.7:6701
+```
+
+Sample response:
+
+```json
+{
+   "status": "ok",
+   "id": "10.11.1.7:6701",
+}
+```
+
+###  /api/v1/topology/:id/profiling/dumpjstack/:host-port (GET)
+
+Request to dump jstack on worker. Returns status and worker id for the request.
+
+|Parameter |Value   |Description  |
+|----------|--------|-------------|
+|id   	   |String (required)| Topology Id  |
+|host-port |String (required)| Worker Id |
+
+Response fields:
+
+|Field  |Value |Description|
+|-----	|----- |-----------|
+|id   | String | Worker id|
+|status | String | Response Status |
+
+Examples:
+
+```no-highlight
+1. http://ui-daemon-host-name:8080/api/v1/topology/wordcount-1-1446614150/profiling/dumpjstack/10.11.1.7:6701
+```
+
+Sample response:
+
+```json
+{
+   "status": "ok",
+   "id": "10.11.1.7:6701",
+}
+```
+
+###  /api/v1/topology/:id/profiling/dumpheap/:host-port (GET)
+
+Request to dump heap (jmap) on worker. Returns status and worker id for the request.
+
+|Parameter |Value   |Description  |
+|----------|--------|-------------|
+|id   	   |String (required)| Topology Id  |
+|host-port |String (required)| Worker Id |
+
+Response fields:
+
+|Field  |Value |Description|
+|-----	|----- |-----------|
+|id   | String | Worker id|
+|status | String | Response Status |
+
+Examples:
+
+```no-highlight
+1. http://ui-daemon-host-name:8080/api/v1/topology/wordcount-1-1446614150/profiling/dumpheap/10.11.1.7:6701
+```
+
+Sample response:
+
+```json
+{
+   "status": "ok",
+   "id": "10.11.1.7:6701",
+}
+```
+
+###  /api/v1/topology/:id/profiling/restartworker/:host-port (GET)
+
+Request to restart the worker. Returns status and worker id for the request.
+
+|Parameter |Value   |Description  |
+|----------|--------|-------------|
+|id   	   |String (required)| Topology Id  |
+|host-port |String (required)| Worker Id |
+
+Response fields:
+
+|Field  |Value |Description|
+|-----	|----- |-----------|
+|id   | String | Worker id|
+|status | String | Response Status |
+
+Examples:
+
+```no-highlight
+1. http://ui-daemon-host-name:8080/api/v1/topology/wordcount-1-1446614150/profiling/restartworker/10.11.1.7:6701
+```
+
+Sample response:
+
+```json
+{
+   "status": "ok",
+   "id": "10.11.1.7:6701",
+}
+```
+
+## POST Operations
+
+### /api/v1/topology/:id/activate (POST)
+
+Activates a topology.
+
+|Parameter |Value   |Description  |
+|----------|--------|-------------|
+|id   	   |String (required)| Topology Id  |
+
+Sample Response:
+
+```json
+{"topologyOperation":"activate","topologyId":"wordcount-1-1420308665","status":"success"}
+```
+
+
+### /api/v1/topology/:id/deactivate (POST)
+
+Deactivates a topology.
+
+|Parameter |Value   |Description  |
+|----------|--------|-------------|
+|id   	   |String (required)| Topology Id  |
+
+Sample Response:
+
+```json
+{"topologyOperation":"deactivate","topologyId":"wordcount-1-1420308665","status":"success"}
+```
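Since activate and deactivate differ only in the final path segment and carry no request body, both can be driven by one small helper. A minimal standard-library sketch (the helper name and the localhost address are illustrative, not part of Storm):

```python
from urllib import request

def topology_operation(base, topology_id, operation):
    # operation is "activate" or "deactivate"; both endpoints take no body.
    url = f"{base}/api/v1/topology/{topology_id}/{operation}"
    return request.Request(url, method="POST")

req = topology_operation("http://localhost:8080",
                         "wordcount-1-1420308665", "deactivate")
# Sending it (request.urlopen(req)) requires a running UI daemon, so the
# request object is only constructed here.
```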
+
+
+### /api/v1/topology/:id/rebalance/:wait-time (POST)
+
+Rebalances a topology.
+
+|Parameter |Value   |Description  |
+|----------|--------|-------------|
+|id   	   |String (required)| Topology Id  |
+|wait-time |String (required)| Wait time before the rebalance happens |
+|rebalanceOptions| Json (optional) | topology rebalance options |
+
+
+Sample rebalanceOptions json:
+
+```json
+{"rebalanceOptions" : {"numWorkers" : 2, "executors" : {"spout" :4, "count" : 10}}, "callback" : "foo"}
+```
+
+Examples:
+
+```no-highlight
+curl -i -b ~/cookiejar.txt -c ~/cookiejar.txt -X POST \
+-H "Content-Type: application/json" \
+-d '{"rebalanceOptions": {"numWorkers": 2, "executors": {"spout": 5, "split": 7, "count": 5}}, "callback": "foo"}' \
+http://localhost:8080/api/v1/topology/wordcount-1-1420308665/rebalance/0
+```
+
+Sample Response:
+
+```json
+{"topologyOperation":"rebalance","topologyId":"wordcount-1-1420308665","status":"success"}
+```
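The curl example above can equally be expressed in code: the rebalanceOptions document is ordinary JSON sent as the POST body. A sketch under the same assumptions (localhost UI daemon; `rebalance_request` is a hypothetical helper):

```python
import json
from urllib import request

def rebalance_request(base, topology_id, wait_time, num_workers, executors):
    # executors maps component id -> desired parallelism, as in the sample above.
    body = json.dumps({"rebalanceOptions": {"numWorkers": num_workers,
                                            "executors": executors}})
    url = f"{base}/api/v1/topology/{topology_id}/rebalance/{wait_time}"
    return request.Request(url, data=body.encode("utf-8"), method="POST",
                           headers={"Content-Type": "application/json"})

req = rebalance_request("http://localhost:8080", "wordcount-1-1420308665",
                        0, 2, {"spout": 4, "count": 10})
# As with the other POST operations, dispatching req needs a live cluster.
```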
+
+
+
+### /api/v1/topology/:id/kill/:wait-time (POST)
+
+Kills a topology.
+
+|Parameter |Value   |Description  |
+|----------|--------|-------------|
+|id   	   |String (required)| Topology Id  |
+|wait-time |String (required)| Wait time before the topology is killed |
+
+Caution: Small wait times (0-5 seconds) may increase the probability of triggering the bug reported in
+[STORM-112](https://issues.apache.org/jira/browse/STORM-112), which may result in broken Supervisor
+daemons.
+
+Sample Response:
+
+```json
+{"topologyOperation":"kill","topologyId":"wordcount-1-1420308665","status":"success"}
+```
+
+## API errors
+
+The API returns a 500 HTTP status code for any error.
+
+Sample response:
+
+```json
+{
+  "error": "Internal Server Error",
+  "errorMessage": "java.lang.NullPointerException\n\tat clojure.core$name.invoke(core.clj:1505)\n\tat backtype.storm.ui.core$component_page.invoke(core.clj:752)\n\tat backtype.storm.ui.core$fn__7766.invoke(core.clj:782)\n\tat compojure.core$make_route$fn__5755.invoke(core.clj:93)\n\tat compojure.core$if_route$fn__5743.invoke(core.clj:39)\n\tat compojure.core$if_method$fn__5736.invoke(core.clj:24)\n\tat compojure.core$routing$fn__5761.invoke(core.clj:106)\n\tat clojure.core$some.invoke(core.clj:2443)\n\tat compojure.core$routing.doInvoke(core.clj:106)\n\tat clojure.lang.RestFn.applyTo(RestFn.java:139)\n\tat clojure.core$apply.invoke(core.clj:619)\n\tat compojure.core$routes$fn__5765.invoke(core.clj:111)\n\tat ring.middleware.reload$wrap_reload$fn__6880.invoke(reload.clj:14)\n\tat backtype.storm.ui.core$catch_errors$fn__7800.invoke(core.clj:836)\n\tat ring.middleware.keyword_params$wrap_keyword_params$fn__6319.invoke(keyword_params.clj:27)\n\tat ring.middleware.nested_params$wrap_nested_params$fn__6358.invoke(nested_params.clj:65)\n\tat ring.middleware.params$wrap_params$fn__6291.invoke(params.clj:55)\n\tat ring.middleware.multipart_params$wrap_multipart_params$fn__6386.invoke(multipart_params.clj:103)\n\tat ring.middleware.flash$wrap_flash$fn__6675.invoke(flash.clj:14)\n\tat ring.middleware.session$wrap_session$fn__6664.invoke(session.clj:43)\n\tat ring.middleware.cookies$wrap_cookies$fn__6595.invoke(cookies.clj:160)\n\tat ring.adapter.jetty$proxy_handler$fn__6112.invoke(jetty.clj:16)\n\tat ring.adapter.jetty.proxy$org.mortbay.jetty.handler.AbstractHandler$0.handle(Unknown Source)\n\tat org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)\n\tat org.mortbay.jetty.Server.handle(Server.java:326)\n\tat org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)\n\tat org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)\n\tat org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)\n\tat 
org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)\n\tat org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)\n\tat org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)\n\tat org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)\n"
+}
+```
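Clients can key off the status code and surface the error fields shown above. A minimal sketch (the function name is illustrative, not part of Storm):

```python
import json

def raise_for_storm_error(status_code, body):
    # Errors come back as a 500 whose JSON body carries "error" and
    # "errorMessage" fields, as in the sample response above.
    if status_code == 500:
        payload = json.loads(body)
        raise RuntimeError(f"{payload['error']}: {payload['errorMessage']}")

try:
    raise_for_storm_error(500, '{"error": "Internal Server Error", '
                               '"errorMessage": "java.lang.NullPointerException"}')
except RuntimeError as exc:
    message = str(exc)
```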
diff --git a/downloads.html b/downloads.html
index 0e7f3e0..06092d1 100644
--- a/downloads.html
+++ b/downloads.html
@@ -23,16 +23,91 @@
     	<div class="row">
         	<div class="col-md-12">
 				  <p>
-				  Downloads for Storm are below. Instructions for how to set up a Storm cluster can be found <a href="/documentation/Setting-up-a-Storm-cluster.html">here</a>.
+				  Downloads for Apache Storm are below. Instructions for how to set up a Storm cluster can be found <a href="/documentation/Setting-up-a-Storm-cluster.html">here</a>.
 				  </p>
 
 				  <h3>Source Code</h3>
-				  Current source code is hosted on GitHub, <a href="https://github.com/apache/storm">apache/storm</a>
+				  Current source code is mirrored on GitHub: <a href="https://github.com/apache/storm">apache/storm</a>
 				  
-				  <h3>Current Beta Release</h3>
-				  The current beta release is 0.10.0-beta1. Source and binary distributions can be found below.
+				  <h3>Current 0.10.x Release</h3>
+				  The current 0.10.x release is 0.10.0. Source and binary distributions can be found below.
 				  
-				  The list of changes for this release can be found <a href="https://github.com/apache/storm/blob/v0.10.0-beta1/CHANGELOG.md">here.</a>
+				  The list of changes for this release can be found <a href="https://github.com/apache/storm/blob/v0.10.0/CHANGELOG.md">here.</a>
+
+				  <ul>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.10.0/apache-storm-0.10.0.tar.gz">apache-storm-0.10.0.tar.gz</a>
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0/apache-storm-0.10.0.tar.gz.asc">PGP</a>]
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0/apache-storm-0.10.0.tar.gz.sha">SHA512</a>] 
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0/apache-storm-0.10.0.tar.gz.md5">MD5</a>]
+					  </li>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.10.0/apache-storm-0.10.0.zip">apache-storm-0.10.0.zip</a>
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0/apache-storm-0.10.0.zip.asc">PGP</a>]
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0/apache-storm-0.10.0.zip.sha">SHA512</a>] 
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0/apache-storm-0.10.0.zip.md5">MD5</a>]
+					  </li>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.10.0/apache-storm-0.10.0-src.tar.gz">apache-storm-0.10.0-src.tar.gz</a>
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0/apache-storm-0.10.0-src.tar.gz.asc">PGP</a>]
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0/apache-storm-0.10.0-src.tar.gz.sha">SHA512</a>] 
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0/apache-storm-0.10.0-src.tar.gz.md5">MD5</a>]
+					  </li>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.10.0/apache-storm-0.10.0-src.zip">apache-storm-0.10.0-src.zip</a>
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0/apache-storm-0.10.0-src.zip.asc">PGP</a>]
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0/apache-storm-0.10.0-src.zip.sha">SHA512</a>] 
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0/apache-storm-0.10.0-src.zip.md5">MD5</a>]
+					  </li>
+				  </ul>
+				  Storm artifacts are hosted in <a href="http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.storm%22">Maven Central</a>. You can add Storm as a dependency with the following coordinates:
+
+				  <pre>
+groupId: <a href="http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.storm%22">org.apache.storm</a>
+artifactId: storm-core
+version: 0.10.0</pre>				  
+				  
+				  <h3>Current 0.9.x Release</h3>
+				  The current 0.9.x release is 0.9.6. Source and binary distributions can be found below.
+				  
+				  The list of changes for this release can be found <a href="https://github.com/apache/storm/blob/v0.9.6/CHANGELOG.md">here.</a>
+
+				  <ul>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.9.6/apache-storm-0.9.6.tar.gz">apache-storm-0.9.6.tar.gz</a>
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.6/apache-storm-0.9.6.tar.gz.asc">PGP</a>]
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.6/apache-storm-0.9.6.tar.gz.sha">SHA512</a>] 
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.6/apache-storm-0.9.6.tar.gz.md5">MD5</a>]
+					  </li>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.9.6/apache-storm-0.9.6.zip">apache-storm-0.9.6.zip</a>
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.6/apache-storm-0.9.6.zip.asc">PGP</a>]
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.6/apache-storm-0.9.6.zip.sha">SHA512</a>] 
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.6/apache-storm-0.9.6.zip.md5">MD5</a>]
+					  </li>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.9.6/apache-storm-0.9.6-src.tar.gz">apache-storm-0.9.6-src.tar.gz</a>
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.6/apache-storm-0.9.6-src.tar.gz.asc">PGP</a>]
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.6/apache-storm-0.9.6-src.tar.gz.sha">SHA512</a>] 
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.6/apache-storm-0.9.6-src.tar.gz.md5">MD5</a>]
+					  </li>
+					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.9.6/apache-storm-0.9.6-src.zip">apache-storm-0.9.6-src.zip</a>
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.6/apache-storm-0.9.6-src.zip.asc">PGP</a>]
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.6/apache-storm-0.9.6-src.zip.sha">SHA512</a>] 
+					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.9.6/apache-storm-0.9.6-src.zip.md5">MD5</a>]
+					  </li>
+				  </ul>
+
+				  Storm artifacts are hosted in <a href="http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.storm%22">Maven Central</a>. You can add Storm as a dependency with the following coordinates:
+
+
+				  <pre>
+groupId: <a href="http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.storm%22">org.apache.storm</a>
+artifactId: storm-core
+version: 0.9.6</pre>
+				  
+				  
+				  The signing keys for releases can be found <a href="http://www.apache.org/dist/storm/KEYS">here.</a>
+				  
+				  <p>
+					  
+				  </p>
+				  <h3>Previous Releases</h3>
+                  
+                  <b>0.10.0-beta1</b>
 
 				  <ul>
 					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1.tar.gz">apache-storm-0.10.0-beta1.tar.gz</a>
@@ -56,12 +131,9 @@
 					     [<a href="http://www.us.apache.org/dist/storm/apache-storm-0.10.0-beta1/apache-storm-0.10.0-beta1-src.zip.md5">MD5</a>]
 					  </li>
 				  </ul>
-				  
-				  
-				  <h3>Current Release</h3>
-				  The current release is 0.9.5. Source and binary distributions can be found below.
-				  
-				  The list of changes for this release can be found <a href="https://github.com/apache/storm/blob/v0.9.5/CHANGELOG.md">here.</a>
+
+                  
+                  <b>0.9.5</b>
 
 				  <ul>
 					  <li><a href="http://www.apache.org/dyn/closer.lua/storm/apache-storm-0.9.5/apache-storm-0.9.5.tar.gz">apache-storm-0.9.5.tar.gz</a>
@@ -86,20 +158,6 @@
 					  </li>
 				  </ul>
 
-				  Storm artifacts are hosted in <a href="http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.storm%22">Maven Central</a>. You can add Storm as a dependency with the following coordinates:
-				  
-				  <pre>
-groupId: <a href="http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22org.apache.storm%22">org.apache.storm</a>
-artifactId: storm-core
-version: 0.9.5</pre>
-				  
-				  
-				  The signing keys for releases can be found <a href="http://www.apache.org/dist/storm/KEYS">here.</a>
-				  
-				  <p>
-					  
-				  </p>
-				  <h3>Previous Releases</h3>
 				  
 				  <b>0.9.4</b>