Merge pull request #3127 from RuiLi8080/STORM-3507

[STORM-3507] add greylist for supervisors which are forced out of blacklist due to low resources
diff --git a/.travis.yml b/.travis.yml
index d35d5e3..1dcdc0e 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -43,9 +43,9 @@
   - sudo add-apt-repository ppa:deadsnakes/ppa -y
   - sudo apt-get update
   - sudo apt-get install python3.6
-  - wget http://mirrors.rackhosting.com/apache/maven/maven-3/3.6.1/binaries/apache-maven-3.6.1-bin.tar.gz -P $HOME
-  - tar xzvf $HOME/apache-maven-3.6.1-bin.tar.gz -C $HOME
-  - export PATH=$HOME/apache-maven-3.6.1/bin:$PATH
+  - export MVN_HOME=$HOME/apache-maven-3.6.1
+  - if [ ! -d $MVN_HOME/bin ]; then wget https://archive.apache.org/dist/maven/maven-3/3.6.1/binaries/apache-maven-3.6.1-bin.tar.gz -P $HOME; tar xzvf $HOME/apache-maven-3.6.1-bin.tar.gz -C $HOME; fi
+  - export PATH=$MVN_HOME/bin:$PATH
 install: /bin/bash ./dev-tools/travis/travis-install.sh `pwd`
 script:
   - /bin/bash ./dev-tools/travis/travis-script.sh `pwd` $MODULES
@@ -54,3 +54,4 @@
     - "$HOME/.m2/repository"
     - "$HOME/.rvm"
     - "$NVM_DIR"
+    - "$HOME/apache-maven-3.6.1"
diff --git a/DEPENDENCY-LICENSES b/DEPENDENCY-LICENSES
index e0344fc..3603fd5 100644
--- a/DEPENDENCY-LICENSES
+++ b/DEPENDENCY-LICENSES
@@ -124,7 +124,7 @@
         * Apache Parquet Hadoop Bundle (org.apache.parquet:parquet-hadoop-bundle:1.8.1 - https://parquet.apache.org)
         * Apache Solr Solrj (org.apache.solr:solr-solrj:5.5.5 - http://lucene.apache.org/solr-parent/solr-solrj)
         * Apache Thrift (org.apache.thrift:libfb303:0.9.3 - http://thrift.apache.org)
-        * Apache Thrift (org.apache.thrift:libthrift:0.12.0 - http://thrift.apache.org)
+        * Apache Thrift (org.apache.thrift:libthrift:0.13.0 - http://thrift.apache.org)
         * Apache Thrift (org.apache.thrift:libthrift:0.9.3 - http://thrift.apache.org)
         * Apache Twill API (org.apache.twill:twill-api:0.6.0-incubating - http://twill.incubator.apache.org/twill-api)
         * Apache Twill common library (org.apache.twill:twill-common:0.6.0-incubating - http://twill.incubator.apache.org/twill-common)
diff --git a/LICENSE-binary b/LICENSE-binary
index 9dfcd5f..7c38960 100644
--- a/LICENSE-binary
+++ b/LICENSE-binary
@@ -751,7 +751,7 @@
         * Apache Log4j Web (org.apache.logging.log4j:log4j-web:2.11.2 - https://logging.apache.org/log4j/2.x/log4j-web/)
         * Apache Parquet Hadoop Bundle (org.apache.parquet:parquet-hadoop-bundle:1.8.1 - https://parquet.apache.org)
         * Apache Thrift (org.apache.thrift:libfb303:0.9.3 - http://thrift.apache.org)
-        * Apache Thrift (org.apache.thrift:libthrift:0.12.0 - http://thrift.apache.org)
+        * Apache Thrift (org.apache.thrift:libthrift:0.13.0 - http://thrift.apache.org)
         * Plexus Interpolation API (org.codehaus.plexus:plexus-interpolation:1.25 - http://codehaus-plexus.github.io/plexus-interpolation/)
         * Apache Twill API (org.apache.twill:twill-api:0.6.0-incubating - http://twill.incubator.apache.org/twill-api)
         * Apache Twill common library (org.apache.twill:twill-common:0.6.0-incubating - http://twill.incubator.apache.org/twill-common)
diff --git a/RELEASING.md b/RELEASING.md
index 4e6d8b2..6affa3a 100644
--- a/RELEASING.md
+++ b/RELEASING.md
@@ -1,10 +1,16 @@
-# Committer documentation
+# Release
 
-This document summarizes information relevant to Storm committers.  It includes information about
-the Storm release process.
+This document includes information about the Storm release process.
 
 ---
 
+# Release Policy
+
+Apache Storm follows the basic idea of [Semantic Versioning](https://semver.org/). Given a version number MAJOR.MINOR.PATCH, increment the:
+ 1. MAJOR version when you make incompatible API changes,
+ 2. MINOR version when you add functionality in a backwards compatible manner, and
+ 3. PATCH version when you make backwards compatible bug fixes.
+ 
 # Release process
 
 ## Preparation
@@ -15,19 +21,39 @@
 
 Ensure you have a signed GPG key, and that the GPG key is listed in the Storm KEYS file at https://dist.apache.org/repos/dist/release/storm/KEYS. The key should be hooked into the Apache web of trust. You should read the [Apache release signing page](http://www.apache.org/dev/release-signing.html), the [release distribution page](http://www.apache.org/dev/release-distribution.html#sigs-and-sums), as well as the [release publishing](http://www.apache.org/dev/release-publishing) and [release policy](http://www.apache.org/legal/release-policy.html) pages.
 
+If you are setting up a new MINOR version release, create a new branch based on the `master` branch, e.g. `2.2.x-branch`. Then, on the `master` branch, set the version to the next MINOR version (with SNAPSHOT), e.g. `mvn versions:set -DnewVersion=2.3.0-SNAPSHOT -P dist,rat,externals,examples`.
+This creates a new release line from which you can cut PATCH version releases, e.g. `2.2.0`.
+
 ## Setting up a vote
 
-1. Run `mvn release:prepare` followed `mvn release:perform` on the branch to be released. This will create all the artifacts that will eventually be available in maven central. This step may seem simple, but a lot can go wrong (mainly flaky tests).
+0. Check out the branch to be released.
+
+1. Run `mvn release:prepare -P dist,rat,externals,examples` followed by `mvn release:perform -P dist,rat,externals,examples`. This will create all the artifacts that will eventually be available in Maven Central. This step may seem simple, but a lot can go wrong (mainly flaky tests).
+Note that this will create and push two commits with commit messages starting with "[maven-release-plugin]", and it will also create and publish a git tag, e.g. `v2.2.0`.
 
 2. Once you get a successful maven release, a “staging repository” will be created at http://repository.apache.org in the “open” state, meaning it is still writable. You will need to close it, making it read-only. You can find more information on this step [here](www.apache.org/dev/publishing-maven-artifacts.html).
 
-3. Run `mvn package` for `storm-dist/binary` and `storm-dist/source` to create the actual distributions.
+3. Check out the git tag published in Step 1, e.g. `git checkout tags/v2.2.0 -b v2.2.0`. Run `mvn package` for `storm-dist/binary` and `storm-dist/source` to create the actual distributions.
 
-4. Sign and generate checksums for the *.tar.gz and *.zip distribution files. 
+4. Generate checksums for the *.tar.gz and *.zip distribution files, e.g.
+```bash
+cd storm-dist/source/target
+gpg --print-md SHA512 apache-storm-2.2.0-src.zip > apache-storm-2.2.0-src.zip.sha512
+gpg --print-md SHA512 apache-storm-2.2.0-src.tar.gz > apache-storm-2.2.0-src.tar.gz.sha512
+
+cd ../../binary/final-package/target
+gpg --print-md SHA512 apache-storm-2.2.0.zip > apache-storm-2.2.0.zip.sha512
+gpg --print-md SHA512 apache-storm-2.2.0.tar.gz > apache-storm-2.2.0.tar.gz.sha512
+```
 
 5. Create a directory in the dist svn repo for the release candidate: https://dist.apache.org/repos/dist/dev/storm/apache-storm-x.x.x-rcx
 
-6. Run `dev-tools/release_notes.py` for the release version, piping the output to a RELEASE_NOTES.html file. Move that file to the svn release directory, sign it, and generate checksums.
+6. Run `dev-tools/release_notes.py` for the release version, piping the output to a RELEASE_NOTES.html file. Move that file to the svn release directory, sign it, and generate checksums, e.g.
+```bash
+python dev-tools/release_notes.py 2.2.0 > RELEASE_NOTES.html
+gpg --armor --output RELEASE_NOTES.html.asc --detach-sig RELEASE_NOTES.html
+gpg --print-md SHA512 RELEASE_NOTES.html > RELEASE_NOTES.html.sha512
+```
 
 7. Move the release files from Step 4 and 6 to the svn directory from Step 5. Add and commit the files. This makes them available in the Apache staging repo.
 
@@ -56,3 +82,26 @@
 1. Go to http://repository.apache.org and drop the staging repository.
 
 2. Delete the staged distribution files from https://dist.apache.org/repos/dist/dev/storm/
+
+3. Delete the git tag.
+
+# How to vote on a release candidate
+
+We encourage everyone to review and vote on a release candidate to make an Apache Storm release more reliable and trustworthy.
+
+Below is a checklist one can follow to review a release candidate.
+Please note this list is not exhaustive and only includes some of the common steps. Feel free to add your own tests.
+
+1. Verify files such as *.asc, *.sha512; some scripts are available under `dev-tools/rc` to help with it;
+2. Build Apache Storm source code and run unit tests, create an Apache Storm distribution;
+3. Set up a standalone cluster using, in turn, apache-storm-xxx.zip, apache-storm-xxx.tar.gz, and the Apache Storm distribution created in step 2;
+4. Launch the WordCountTopology and ThroughputVsLatency topologies and check logs, UI metrics, etc.;
+5. Test basic UI functionalities such as jstack, heap dump, deactivate, activate, rebalance, change log level, log search, kill topology;
+6. Test basic CLI commands such as kill, list, activate, deactivate, rebalance, etc.
+
+It's also preferable to set up a standalone secure Apache Storm cluster and test basic functionalities on it.
+
+Don't feel pressured to do everything listed above. After you finish your review, reply to the corresponding email thread with your vote, summarizing the work you performed and elaborating on any issues
+you found. Also, please feel free to update the checklist if you think anything important is missing.
+
+Your contribution is very much appreciated.
\ No newline at end of file
diff --git a/SECURITY.md b/SECURITY.md
index 668ea8a..5e321e5 100644
--- a/SECURITY.md
+++ b/SECURITY.md
@@ -28,6 +28,7 @@
 |--------------|--------------|------------------------|--------|
 | 2181 | `storm.zookeeper.port` | Nimbus, Supervisors, and Worker processes | ZooKeeper |
 | 6627 | `nimbus.thrift.port` | Storm clients, Supervisors, and UI | Nimbus |
+| 6628 | `supervisor.thrift.port` | Nimbus | Supervisors |
 | 8080 | `ui.port` | Client Web Browsers | UI |
 | 8000 | `logviewer.port` | Client Web Browsers | Logviewer |
 | 3772 | `drpc.port` | External DRPC Clients | DRPC |
diff --git a/bin/storm.py b/bin/storm.py
index e1b47f6..38c5ae2 100755
--- a/bin/storm.py
+++ b/bin/storm.py
@@ -81,9 +81,10 @@
         cmd = os.path.join(JAVA_HOME, 'bin', cmd)
     return cmd
 
-def confvalue(name, storm_config_opts, extrapaths, daemon=True):
+def confvalue(name, storm_config_opts, extrapaths, overriding_conf_file=None, daemon=True):
     command = [
-        JAVA_CMD, "-client", get_config_opts(storm_config_opts), "-Dstorm.conf.file=" + CONF_FILE,
+        JAVA_CMD, "-client", get_config_opts(storm_config_opts),
+        "-Dstorm.conf.file=" + (overriding_conf_file if overriding_conf_file else ""),
         "-cp", get_classpath(extrajars=extrapaths, daemon=daemon), "org.apache.storm.command.ConfigValue", name
     ]
     output = subprocess.Popen(command, stdout=subprocess.PIPE).communicate()[0]
@@ -229,9 +230,12 @@
         raise RuntimeError("dependency handler returns non-json response: sysout<%s>", output)
 
 
-def exec_storm_class(klass, storm_config_opts, jvmtype="-server", jvmopts=[], extrajars=[], args=[], fork=False, daemon=True, client=False, daemonName=""):
-    storm_log_dir = confvalue("storm.log.dir", storm_config_opts=storm_config_opts, extrapaths=[CLUSTER_CONF_DIR])
-    if(storm_log_dir == None or storm_log_dir == "null"):
+def exec_storm_class(klass, storm_config_opts, jvmtype="-server", jvmopts=[],
+                     extrajars=[], main_class_args=[], fork=False, daemon=True, client=False, daemonName="",
+                     overriding_conf_file=None):
+    storm_log_dir = confvalue("storm.log.dir", storm_config_opts=storm_config_opts,
+                              extrapaths=[CLUSTER_CONF_DIR], overriding_conf_file=overriding_conf_file)
+    if storm_log_dir is None or storm_log_dir in ["null", ""]:
         storm_log_dir = os.path.join(STORM_DIR, "logs")
     all_args = [
         JAVA_CMD, jvmtype,
@@ -240,9 +244,9 @@
        "-Dstorm.home=" + STORM_DIR,
        "-Dstorm.log.dir=" + storm_log_dir,
        "-Djava.library.path=" + confvalue("java.library.path", storm_config_opts, extrajars, daemon=daemon),
-       "-Dstorm.conf.file=" + CONF_FILE,
+       "-Dstorm.conf.file=" + (overriding_conf_file if overriding_conf_file else ""),
        "-cp", get_classpath(extrajars, daemon, client=client),
-    ] + jvmopts + [klass] + list(args)
+    ] + jvmopts + [klass] + list(main_class_args)
     print("Running: " + " ".join(all_args))
     sys.stdout.flush()
     exit_code = 0
@@ -278,24 +282,26 @@
         klass, args.storm_config_opts,
         jvmtype="-client",
         extrajars=extra_jars,
-        args=args.topology_main_args,
+        main_class_args=args.main_args,
         daemon=False,
         jvmopts=JAR_JVM_OPTS + extrajvmopts + ["-Dstorm.jar=" + jarfile] +
                 ["-Dstorm.dependency.jars=" + ",".join(local_jars)] +
-                ["-Dstorm.dependency.artifacts=" + json.dumps(artifact_to_file_jars)])
+                ["-Dstorm.dependency.artifacts=" + json.dumps(artifact_to_file_jars)],
+        overriding_conf_file=args.config)
 
 
 def print_localconfvalue(args):
-    print(args.conf_name + ": " + confvalue(args.conf_name, args.storm_config_opts, [USER_CONF_DIR]))
+    print(args.conf_name + ": " + confvalue(args.conf_name, args.storm_config_opts,
+                                            [USER_CONF_DIR], overriding_conf_file=args.config))
 
 
 def print_remoteconfvalue(args):
-    print(args.conf_name + ": " + confvalue(args.conf_name, args.storm_config_opts, [CLUSTER_CONF_DIR]))
+    print(args.conf_name + ": " + confvalue(args.conf_name, args.storm_config_opts,
+                                            [CLUSTER_CONF_DIR], overriding_conf_file=args.config))
 
 
 def initialize_main_command():
     main_parser = argparse.ArgumentParser(prog="storm", formatter_class=SortingHelpFormatter)
-    add_common_options(main_parser)
 
     subparsers = main_parser.add_subparsers(help="")
 
@@ -360,12 +366,25 @@
     add_common_options(sub_parser)
 
 
-def add_common_options(parser):
+def add_common_options(parser, main_args=True):
     parser.add_argument("--config", default=None, help="Override default storm conf file")
     parser.add_argument(
         "-storm_config_opts", "-c", action="append", default=[],
         help="Override storm conf properties , e.g. nimbus.ui.port=4443"
     )
+    if main_args:
+        parser.add_argument(
+            "main_args", metavar="main_args",
+            nargs='*', help="Runs the main method with the specified arguments."
+        )
+
+def remove_common_options(sys_args):
+    flags_to_filter = ["-c", "-storm_config_opts", "--config"]
+    # keep args that are neither a common flag nor the value following one
+    return [
+        arg for i, arg in enumerate(sys_args)
+        if arg not in flags_to_filter and (i == 0 or sys_args[i - 1] not in flags_to_filter)
+    ]
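For reference, the flag-filtering behavior introduced above can be exercised standalone. This is a sketch with `remove_common_options` re-declared outside the patch so it runs on its own; the argument lists are made-up examples:

```python
# Standalone sketch of the flag-filtering helper added in this patch.
# It drops each common flag and the value token that immediately follows it,
# leaving only the arguments meant for the main class.
def remove_common_options(sys_args):
    flags_to_filter = ["-c", "-storm_config_opts", "--config"]
    return [
        arg for i, arg in enumerate(sys_args)
        if arg not in flags_to_filter
        and (i == 0 or sys_args[i - 1] not in flags_to_filter)
    ]

# e.g. "storm kill myTopology -c nimbus.ui.port=4443 --config my.yaml"
print(remove_common_options(
    ["myTopology", "-c", "nimbus.ui.port=4443", "--config", "my.yaml"]))
# → ['myTopology']
```

Note this filters by exact token match, so a flag passed as `--config=my.yaml` (single token) would not be removed.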
 
 def add_topology_jar_options(parser):
     parser.add_argument(
@@ -376,10 +395,6 @@
         "topology_main_class", metavar="topology-main-class",
     help="main class of the topology jar being submitted"
     )
-    parser.add_argument(
-        "topology_main_args", metavar="topology_main_args",
-        nargs='*', help="Runs the main method with the specified arguments."
-    )
 
 
 def add_client_jar_options(parser):
@@ -510,12 +525,6 @@
         raise argparse.ArgumentTypeError("%s is not a positive integer" % value)
     return ivalue
 
-def check_even_list(cred_list):
-    if not (len(cred_list) % 2):
-        raise argparse.ArgumentTypeError("please provide a list of cred key and value pairs")
-    return cred_list
-
-
 def initialize_upload_credentials_subcommand(subparsers):
     command_help = """Uploads a new set of credentials to a running topology."""
     sub_parser = subparsers.add_parser("upload-credentials", help=command_help, formatter_class=SortingHelpFormatter)
@@ -533,8 +542,7 @@
     )
 
     sub_parser.add_argument(
-        "cred_list", nargs='*', help="List of credkeys and their values [credkey credvalue]*",
-        type=check_even_list
+        "cred_list", nargs='*', help="List of credkeys and their values [credkey credvalue]*"
     )
 
     sub_parser.set_defaults(func=upload_credentials)
@@ -563,7 +571,7 @@
     group.add_argument("--explain", action="store_true", help="activate explain mode")
 
     sub_parser.set_defaults(func=sql)
-    add_common_options(sub_parser)
+    add_common_options(sub_parser, main_args=False)
 
 
 def initialize_blobstore_subcommand(subparsers):
@@ -580,8 +588,8 @@
         "list", help="lists blobs currently in the blob store", formatter_class=SortingHelpFormatter
     )
     list_parser.add_argument(
-        "keys", nargs='+')
-    add_common_options(list_parser)
+        "keys", nargs='*')
+    add_common_options(list_parser, main_args=False)
 
     cat_parser = sub_sub_parsers.add_parser(
         "cat", help="read a blob and then either write it to a file, or STDOUT (requires read access).", formatter_class=SortingHelpFormatter
@@ -892,7 +900,7 @@
     sub_parser.add_argument("args", nargs='*', default=[])
 
     sub_parser.set_defaults(func=shell)
-    add_common_options(sub_parser)
+    add_common_options(sub_parser, main_args=False)
 
 
 def initialize_repl_subcommand(subparsers):
@@ -987,7 +995,7 @@
     sub_parser.add_argument("function_arguments", nargs='*', default=[])
 
     sub_parser.set_defaults(func=drpc_client)
-    add_common_options(sub_parser)
+    add_common_options(sub_parser, main_args=False)
 
 
 def initialize_drpc_subcommand(subparsers):
@@ -1061,7 +1069,7 @@
     extrajvmopts = ["-Dstorm.local.sleeptime=" + args.local_ttl]
     if args.java_debug:
         extrajvmopts += ["-agentlib:jdwp=" + args.java_debug]
-    args.topology_main_args = [args.topology_main_class] + args.topology_main_args
+    args.main_args = [args.topology_main_class] + args.main_args
     run_client_jar(
         "org.apache.storm.LocalCluster", args,
         client=False, daemon=False, extrajvmopts=extrajvmopts)
@@ -1098,26 +1106,31 @@
         "org.apache.storm.sql.StormSqlRunner", storm_config_opts=args.storm_config_opts,
         jvmtype="-client",
         extrajars=extra_jars,
-        args=sql_args,
+        main_class_args=sql_args,
         daemon=False,
         jvmopts=["-Dstorm.dependency.jars=" + ",".join(local_jars)] +
-                ["-Dstorm.dependency.artifacts=" + json.dumps(artifact_to_file_jars)])
+                ["-Dstorm.dependency.artifacts=" + json.dumps(artifact_to_file_jars)],
+        overriding_conf_file=args.config)
 
 
 def kill(args):
     exec_storm_class(
         "org.apache.storm.command.KillTopology",
-        args=sys.argv[2:], storm_config_opts=args.storm_config_opts,
+        main_class_args=remove_common_options(sys.argv[2:]), storm_config_opts=args.storm_config_opts,
         jvmtype="-client",
-        extrajars=[USER_CONF_DIR, STORM_BIN_DIR])
+        extrajars=[USER_CONF_DIR, STORM_BIN_DIR],
+        overriding_conf_file=args.config)
 
 
 def upload_credentials(args):
+    if len(args.cred_list) % 2 != 0:
+        raise argparse.ArgumentTypeError("please provide a list of cred key and value pairs: " + str(args.cred_list))
     exec_storm_class(
         "org.apache.storm.command.UploadCredentials",
-        args=sys.argv[2:], storm_config_opts=args.storm_config_opts,
+        main_class_args=remove_common_options(sys.argv[2:]), storm_config_opts=args.storm_config_opts,
         jvmtype="-client",
-        extrajars=[USER_CONF_DIR, STORM_BIN_DIR])
+        extrajars=[USER_CONF_DIR, STORM_BIN_DIR],
+        overriding_conf_file=args.config)
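The even-length check above replaces the removed `check_even_list` type validator: credentials are positional tokens that must pair up as key/value. A self-contained sketch of that validation, using a hypothetical helper name `validate_cred_list`:

```python
import argparse

# Sketch of the credential-list validation: the positional cred_list must
# contain an even number of tokens so they pair up as [key, value, ...].
def validate_cred_list(cred_list):
    if len(cred_list) % 2 != 0:
        raise argparse.ArgumentTypeError(
            "please provide a list of cred key and value pairs: " + str(cred_list))
    # pair consecutive tokens: [k1, v1, k2, v2] -> {k1: v1, k2: v2}
    return dict(zip(cred_list[0::2], cred_list[1::2]))

print(validate_cred_list(["user", "alice", "token", "s3cret"]))
# → {'user': 'alice', 'token': 's3cret'}
```

Doing the check in `upload_credentials` rather than as an argparse `type=` callable means the error is raised once for the whole list instead of per-token.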
 
 
 def blob(args):
@@ -1125,32 +1138,36 @@
         raise argparse.ArgumentTypeError("Replication factor needed when doing blob update")
     exec_storm_class(
         "org.apache.storm.command.Blobstore",
-        args=sys.argv[2:], storm_config_opts=args.storm_config_opts,
+        main_class_args=remove_common_options(sys.argv[2:]), storm_config_opts=args.storm_config_opts,
         jvmtype="-client",
-        extrajars=[USER_CONF_DIR, STORM_BIN_DIR])
+        extrajars=[USER_CONF_DIR, STORM_BIN_DIR],
+        overriding_conf_file=args.config)
 
 
 def heartbeats(args):
     exec_storm_class(
         "org.apache.storm.command.Heartbeats",
-        args=sys.argv[2:], storm_config_opts=args.storm_config_opts,
+        main_class_args=remove_common_options(sys.argv[2:]), storm_config_opts=args.storm_config_opts,
         jvmtype="-client",
-        extrajars=[USER_CONF_DIR, STORM_BIN_DIR])
+        extrajars=[USER_CONF_DIR, STORM_BIN_DIR],
+        overriding_conf_file=args.config)
 
 
 def activate(args):
     exec_storm_class(
         "org.apache.storm.command.Activate",
-        args=sys.argv[2:], storm_config_opts=args.storm_config_opts,
+        main_class_args=remove_common_options(sys.argv[2:]), storm_config_opts=args.storm_config_opts,
         jvmtype="-client",
-        extrajars=[USER_CONF_DIR, STORM_BIN_DIR])
+        extrajars=[USER_CONF_DIR, STORM_BIN_DIR],
+        overriding_conf_file=args.config)
 
 def listtopos(args):
     exec_storm_class(
         "org.apache.storm.command.ListTopologies",
-        args=sys.argv[2:], storm_config_opts=args.storm_config_opts,
+        main_class_args=remove_common_options(sys.argv[2:]), storm_config_opts=args.storm_config_opts,
         jvmtype="-client",
-        extrajars=[USER_CONF_DIR, STORM_BIN_DIR])
+        extrajars=[USER_CONF_DIR, STORM_BIN_DIR],
+        overriding_conf_file=args.config)
 
 def set_log_level(args):
     for log_level in args.l:
@@ -1163,16 +1180,18 @@
             raise argparse.ArgumentTypeError("Should be in the form[logger name]=[log level][:optional timeout]")
     exec_storm_class(
         "org.apache.storm.command.SetLogLevel",
-        args=sys.argv[2:], storm_config_opts=args.storm_config_opts,
+        main_class_args=remove_common_options(sys.argv[2:]), storm_config_opts=args.storm_config_opts,
         jvmtype="-client",
-        extrajars=[USER_CONF_DIR, STORM_BIN_DIR])
+        extrajars=[USER_CONF_DIR, STORM_BIN_DIR],
+        overriding_conf_file=args.config)
 
 def deactivate(args):
     exec_storm_class(
         "org.apache.storm.command.Deactivate",
-        args=sys.argv[2:], storm_config_opts=args.storm_config_opts,
+        main_class_args=remove_common_options(sys.argv[2:]), storm_config_opts=args.storm_config_opts,
         jvmtype="-client",
-        extrajars=[USER_CONF_DIR, STORM_BIN_DIR])
+        extrajars=[USER_CONF_DIR, STORM_BIN_DIR],
+        overriding_conf_file=args.config)
 
 
 def rebalance(args):
@@ -1186,41 +1205,46 @@
             raise argparse.ArgumentTypeError("Should be in the form component_name:new_executor_count")
     exec_storm_class(
         "org.apache.storm.command.Rebalance",
-        args=sys.argv[2:], storm_config_opts=args.storm_config_opts,
+        main_class_args=remove_common_options(sys.argv[2:]), storm_config_opts=args.storm_config_opts,
         jvmtype="-client",
-        extrajars=[USER_CONF_DIR, STORM_BIN_DIR])
+        extrajars=[USER_CONF_DIR, STORM_BIN_DIR],
+        overriding_conf_file=args.config)
 
 
 def get_errors(args):
     exec_storm_class(
         "org.apache.storm.command.GetErrors",
-        args=sys.argv[2:], storm_config_opts=args.storm_config_opts,
+        main_class_args=remove_common_options(sys.argv[2:]), storm_config_opts=args.storm_config_opts,
         jvmtype="-client",
-        extrajars=[USER_CONF_DIR, STORM_BIN_DIR])
+        extrajars=[USER_CONF_DIR, STORM_BIN_DIR],
+        overriding_conf_file=args.config)
 
 
 def healthcheck(args):
     exec_storm_class(
         "org.apache.storm.command.HealthCheck",
-        args=sys.argv[2:], storm_config_opts=args.storm_config_opts,
+        main_class_args=remove_common_options(sys.argv[2:]), storm_config_opts=args.storm_config_opts,
         jvmtype="-client",
-        extrajars=[USER_CONF_DIR, STORM_BIN_DIR])
+        extrajars=[USER_CONF_DIR, STORM_BIN_DIR],
+        overriding_conf_file=args.config)
 
 
 def kill_workers(args):
     exec_storm_class(
         "org.apache.storm.command.KillWorkers",
-        args=sys.argv[2:], storm_config_opts=args.storm_config_opts,
+        main_class_args=remove_common_options(sys.argv[2:]), storm_config_opts=args.storm_config_opts,
         jvmtype="-client",
-        extrajars=[USER_CONF_DIR, STORM_BIN_DIR])
+        extrajars=[USER_CONF_DIR, STORM_BIN_DIR],
+        overriding_conf_file=args.config)
 
 
 def admin(args):
     exec_storm_class(
         "org.apache.storm.command.AdminCommands",
-        args=sys.argv[2:], storm_config_opts=args.storm_config_opts,
+        main_class_args=remove_common_options(sys.argv[2:]), storm_config_opts=args.storm_config_opts,
         jvmtype="-client",
-        extrajars=[USER_CONF_DIR, STORM_BIN_DIR])
+        extrajars=[USER_CONF_DIR, STORM_BIN_DIR],
+        overriding_conf_file=args.config)
 
 
 def shell(args):
@@ -1230,24 +1254,27 @@
     runnerargs.extend(args.args)
     exec_storm_class(
         "org.apache.storm.command.ShellSubmission", storm_config_opts=args.storm_config_opts,
-        args=runnerargs,
+        main_class_args=runnerargs,
         jvmtype="-client",
         extrajars=[USER_CONF_DIR],
-        fork=True)
+        fork=True,
+        overriding_conf_file=args.config)
     os.system("rm " + tmpjarpath)
 
 
 def repl(args):
     cppaths = [CLUSTER_CONF_DIR]
     exec_storm_class(
-        "clojure.main", storm_config_opts=args.storm_config_opts, jvmtype="-client", extrajars=cppaths
+        "clojure.main", storm_config_opts=args.storm_config_opts, jvmtype="-client", extrajars=cppaths,
+        overriding_conf_file=args.config
     )
 
 
-def get_log4j2_conf_dir(storm_config_opts):
+def get_log4j2_conf_dir(storm_config_opts, args):
     cppaths = [CLUSTER_CONF_DIR]
     storm_log4j2_conf_dir = confvalue(
-        "storm.log4j2.conf.dir", storm_config_opts=storm_config_opts, extrapaths=cppaths
+        "storm.log4j2.conf.dir", storm_config_opts=storm_config_opts,
+        extrapaths=cppaths, overriding_conf_file=args.config
     )
     if(not storm_log4j2_conf_dir or storm_log4j2_conf_dir == "null"):
         storm_log4j2_conf_dir = STORM_LOG4J2_CONF_DIR
@@ -1260,18 +1287,19 @@
     cppaths = [CLUSTER_CONF_DIR]
     storm_config_opts = get_config_opts(args.storm_config_opts)
     jvmopts = shlex.split(confvalue(
-        "nimbus.childopts", storm_config_opts=storm_config_opts, extrapaths=cppaths
+        "nimbus.childopts", storm_config_opts=storm_config_opts, extrapaths=cppaths, overriding_conf_file=args.config
         )) + [
             "-Djava.deserialization.disabled=true",
             "-Dlogfile.name=nimbus.log",
-            "-Dlog4j.configurationFile=" + os.path.join(get_log4j2_conf_dir(storm_config_opts), "cluster.xml"),
+            "-Dlog4j.configurationFile=" + os.path.join(get_log4j2_conf_dir(storm_config_opts, args), "cluster.xml"),
         ]
     exec_storm_class(
         "org.apache.storm.daemon.nimbus.Nimbus", storm_config_opts=args.storm_config_opts,
         jvmtype="-server",
         daemonName="nimbus",
         extrajars=cppaths,
-        jvmopts=jvmopts)
+        jvmopts=jvmopts,
+        overriding_conf_file=args.config)
 
 
 def pacemaker(args):
@@ -1279,47 +1307,51 @@
     storm_config_opts = get_config_opts(args.storm_config_opts)
 
     jvmopts = shlex.split(confvalue(
-        "pacemaker.childopts", storm_config_opts=storm_config_opts, extrapaths=cppaths)
+        "pacemaker.childopts", storm_config_opts=storm_config_opts,
+        extrapaths=cppaths, overriding_conf_file=args.config)
     ) + [
         "-Djava.deserialization.disabled=true",
         "-Dlogfile.name=pacemaker.log",
-        "-Dlog4j.configurationFile=" + os.path.join(get_log4j2_conf_dir(storm_config_opts), "cluster.xml"),
+        "-Dlog4j.configurationFile=" + os.path.join(get_log4j2_conf_dir(storm_config_opts, args), "cluster.xml"),
         ]
     exec_storm_class(
         "org.apache.storm.pacemaker.Pacemaker", storm_config_opts=args.storm_config_opts,
         jvmtype="-server",
         daemonName="pacemaker",
         extrajars=cppaths,
-        jvmopts=jvmopts)
+        jvmopts=jvmopts,
+        overriding_conf_file=args.config)
 
 
 def supervisor(args):
     cppaths = [CLUSTER_CONF_DIR]
     storm_config_opts = get_config_opts(args.storm_config_opts)
     jvmopts = shlex.split(confvalue(
-        "supervisor.childopts", storm_config_opts=storm_config_opts, extrapaths=cppaths)
+        "supervisor.childopts", storm_config_opts=storm_config_opts,
+        extrapaths=cppaths, overriding_conf_file=args.config)
     ) + [
         "-Djava.deserialization.disabled=true",
         "-Dlogfile.name=" + STORM_SUPERVISOR_LOG_FILE,
-        "-Dlog4j.configurationFile=" + os.path.join(get_log4j2_conf_dir(storm_config_opts), "cluster.xml"),
+        "-Dlog4j.configurationFile=" + os.path.join(get_log4j2_conf_dir(storm_config_opts, args), "cluster.xml"),
         ]
     exec_storm_class(
         "org.apache.storm.daemon.supervisor.Supervisor", storm_config_opts=args.storm_config_opts,
         jvmtype="-server",
         daemonName="supervisor",
         extrajars=cppaths,
-        jvmopts=jvmopts)
+        jvmopts=jvmopts,
+        overriding_conf_file=args.config)
 
 
 def ui(args):
     cppaths = [CLUSTER_CONF_DIR]
     storm_config_opts = get_config_opts(args.storm_config_opts)
     jvmopts = shlex.split(confvalue(
-        "ui.childopts", storm_config_opts=storm_config_opts, extrapaths=cppaths)
+        "ui.childopts", storm_config_opts=storm_config_opts, extrapaths=cppaths, overriding_conf_file=args.config)
     ) + [
         "-Djava.deserialization.disabled=true",
         "-Dlogfile.name=ui.log",
-        "-Dlog4j.configurationFile=" + os.path.join(get_log4j2_conf_dir(storm_config_opts), "cluster.xml")
+        "-Dlog4j.configurationFile=" + os.path.join(get_log4j2_conf_dir(storm_config_opts, args), "cluster.xml")
     ]
 
     allextrajars = get_wildcard_dir(STORM_WEBAPP_LIB_DIR)
@@ -1329,7 +1361,8 @@
         jvmtype="-server",
         daemonName="ui",
         jvmopts=jvmopts,
-        extrajars=allextrajars)
+        extrajars=allextrajars,
+        overriding_conf_file=args.config)
 
 
 def logviewer(args):
@@ -1337,12 +1370,13 @@
     storm_config_opts = get_config_opts(args.storm_config_opts)
     jvmopts = shlex.split(
         confvalue(
-            "logviewer.childopts", storm_config_opts=storm_config_opts, extrapaths=cppaths
+            "logviewer.childopts", storm_config_opts=storm_config_opts,
+            extrapaths=cppaths, overriding_conf_file=args.config
         )
     ) + [
         "-Djava.deserialization.disabled=true",
         "-Dlogfile.name=logviewer.log",
-        "-Dlog4j.configurationFile=" + os.path.join(get_log4j2_conf_dir(storm_config_opts), "cluster.xml")
+        "-Dlog4j.configurationFile=" + os.path.join(get_log4j2_conf_dir(storm_config_opts, args), "cluster.xml")
     ]
 
     allextrajars = get_wildcard_dir(STORM_WEBAPP_LIB_DIR)
@@ -1352,11 +1386,12 @@
         jvmtype="-server",
         daemonName="logviewer",
         jvmopts=jvmopts,
-        extrajars=allextrajars)
+        extrajars=allextrajars,
+        overriding_conf_file=args.config)
 
 
 def drpc_client(args):
-    if not args.function and not (len(args.function_arguments) % 2):
+    if not args.function and (len(args.function_arguments) % 2):
         raise argparse.ArgumentTypeError(
             "If no -f is supplied arguments need to be in the form [function arg]. " +
             "This has {} args".format(
@@ -1366,9 +1401,10 @@
 
     exec_storm_class(
         "org.apache.storm.command.BasicDrpcClient",
-        args=sys.argv[2],
+        main_class_args=remove_common_options(sys.argv[2:]), storm_config_opts=args.storm_config_opts,
         jvmtype="-client",
-        extrajars=[USER_CONF_DIR, STORM_BIN_DIR])
+        extrajars=[USER_CONF_DIR, STORM_BIN_DIR],
+        overriding_conf_file=args.config)
 
 
 def drpc(args):
@@ -1376,12 +1412,12 @@
     storm_config_opts = get_config_opts(args.storm_config_opts)
     jvmopts = shlex.split(
         confvalue(
-            "drpc.childopts", storm_config_opts=storm_config_opts, extrapaths=cppaths
+            "drpc.childopts", storm_config_opts=storm_config_opts, extrapaths=cppaths, overriding_conf_file=args.config
         )
     ) + [
         "-Djava.deserialization.disabled=true",
         "-Dlogfile.name=drpc.log",
-        "-Dlog4j.configurationFile=" + os.path.join(get_log4j2_conf_dir(storm_config_opts), "cluster.xml")
+        "-Dlog4j.configurationFile=" + os.path.join(get_log4j2_conf_dir(storm_config_opts, args), "cluster.xml")
     ]
     allextrajars = get_wildcard_dir(STORM_WEBAPP_LIB_DIR)
     allextrajars.append(CLUSTER_CONF_DIR)
@@ -1390,28 +1426,31 @@
         jvmtype="-server",
         daemonName="drpc",
         jvmopts=jvmopts,
-        extrajars=allextrajars)
+        extrajars=allextrajars,
+        overriding_conf_file=args.config)
 
 
 def dev_zookeeper(args):
     storm_config_opts = get_config_opts(args.storm_config_opts)
     jvmopts = [
         "-Dlogfile.name=dev-zookeeper.log",
-        "-Dlog4j.configurationFile=" + os.path.join(get_log4j2_conf_dir(storm_config_opts), "cluster.xml")
+        "-Dlog4j.configurationFile=" + os.path.join(get_log4j2_conf_dir(storm_config_opts, args), "cluster.xml")
     ]
     exec_storm_class(
         "org.apache.storm.command.DevZookeeper", storm_config_opts=args.storm_config_opts,
         jvmtype="-server",
         daemonName="dev_zookeeper",
         jvmopts=jvmopts,
-        extrajars=[CLUSTER_CONF_DIR])
+        extrajars=[CLUSTER_CONF_DIR],
+        overriding_conf_file=args.config)
 
 
 def version(args):
     exec_storm_class(
         "org.apache.storm.utils.VersionInfo", storm_config_opts=args.storm_config_opts,
         jvmtype="-client",
-        extrajars=[CLUSTER_CONF_DIR])
+        extrajars=[CLUSTER_CONF_DIR],
+        overriding_conf_file=args.config)
 
 
 def print_classpath(args):
@@ -1425,18 +1464,19 @@
 def monitor(args):
     exec_storm_class(
         "org.apache.storm.command.Monitor", storm_config_opts=args.storm_config_opts,
-        args=sys.argv[2],
+        main_class_args=remove_common_options(sys.argv[2:]),
         jvmtype="-client",
         extrajars=[USER_CONF_DIR, STORM_BIN_DIR])
 
-
 def main():
     init_storm_env()
     storm_parser = initialize_main_command()
     if len(sys.argv) == 1:
         storm_parser.print_help(sys.stderr)
         sys.exit(1)
-    raw_args = storm_parser.parse_args()
+    raw_args, unknown_args = storm_parser.parse_known_args()
+    if hasattr(raw_args, "main_args"):
+        raw_args.main_args += unknown_args
     raw_args.func(raw_args)
 
 
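The `parse_known_args` change above lets the CLI pass arbitrary trailing options through to the user's main class instead of rejecting them. A minimal standalone sketch of that pattern (the `jar` subcommand and `main_args` attribute here are illustrative, not the full Storm CLI definition):

```python
import argparse

def parse(argv):
    # Options the parser does not recognize are returned by parse_known_args
    # and appended to the subcommand's positional main_args, instead of
    # triggering an "unrecognized arguments" error.
    parser = argparse.ArgumentParser(prog="storm")
    sub = parser.add_subparsers()
    jar = sub.add_parser("jar")
    jar.add_argument("main_args", nargs="*")
    args, unknown = parser.parse_known_args(argv)
    if hasattr(args, "main_args"):
        args.main_args += unknown
    return args
```

For example, `parse(["jar", "topo.jar", "--custom-flag"])` keeps `--custom-flag` in `main_args` for the user's topology class rather than failing argument parsing.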
diff --git a/conf/defaults.yaml b/conf/defaults.yaml
index 43a623b..4d2f2f7 100644
--- a/conf/defaults.yaml
+++ b/conf/defaults.yaml
@@ -171,6 +171,8 @@
 supervisor.heartbeat.frequency.secs: 5
 #max timeout for a node worker heartbeats when master gains leadership
 supervisor.worker.heartbeats.max.timeout.secs: 600
+#maximum allowed value for the topology-configurable worker heartbeat timeout (topology.worker.timeout.secs)
+worker.max.timeout.secs: 600
 supervisor.enable: true
 supervisor.supervisors: []
 supervisor.supervisors.commands: []
diff --git a/dev-tools/rc/verify-release-file.sh b/dev-tools/rc/verify-release-file.sh
index 2e33965..3e6a869 100755
--- a/dev-tools/rc/verify-release-file.sh
+++ b/dev-tools/rc/verify-release-file.sh
@@ -34,21 +34,6 @@
   echo 'Signature seems not correct'
 fi
 
-# checking MD5
-GPG_MD5_FILE="/tmp/${TARGET_FILE}_GPG.md5"
-gpg --print-md MD5 ${TARGET_FILE} > ${GPG_MD5_FILE}
-MD5_TARGET_FILE="${TARGET_FILE}.md5"
-
-echo ">> checking MD5 file... (${MD5_TARGET_FILE})"
-diff ${GPG_MD5_FILE} ${MD5_TARGET_FILE}
-
-if [ $? -eq 0 ];
-then
-  echo 'MD5 file is correct'
-else
-  echo 'MD5 file is not correct'
-fi
-
 # checking SHA
 GPG_SHA_FILE="/tmp/${TARGET_FILE}_GPG.sha512"
 gpg --print-md SHA512 ${TARGET_FILE} > ${GPG_SHA_FILE}
diff --git a/docs/ClusterMetrics.md b/docs/ClusterMetrics.md
index 4e4d0f1..f7f7b4f 100644
--- a/docs/ClusterMetrics.md
+++ b/docs/ClusterMetrics.md
@@ -185,6 +185,7 @@
 | supervisor:num-launched | meter | number of times the supervisor is launched. |
 | supervisor:num-shell-exceptions | meter | number of exceptions calling shell commands. |
 | supervisor:num-slots-used-gauge | gauge | number of slots used on the supervisor. |
+| supervisor:num-worker-start-timed-out | meter | number of times worker start timed out. |
 | supervisor:num-worker-transitions-into-empty | meter | number of transitions into empty state. |
 | supervisor:num-worker-transitions-into-kill | meter | number of transitions into kill state. |
 | supervisor:num-worker-transitions-into-kill-and-relaunch | meter | number of transitions into kill-and-relaunch state |
diff --git a/docs/Generic-resources.md b/docs/Generic-resources.md
new file mode 100644
index 0000000..f3bfe3e
--- /dev/null
+++ b/docs/Generic-resources.md
@@ -0,0 +1,39 @@
+---
+title: Generic Resources
+layout: documentation
+documentation: true
+---
+
+### Generic Resources
+Generic Resources allow Storm to reference arbitrary resource types. Generic Resources may be considered an extension of the resources enumerated by the [Resource Aware Scheduler](Resource_Aware_Scheduler_overview.html), which accounts for CPU and memory.
+
+### API Overview
+For a Storm topology, the user can now specify the amount of a generic resource that a topology component (i.e. Spout or Bolt) requires to run a single instance of the component, using the following API call.
+```
+   public T addResource(String resourceName, Number resourceValue)
+```
+Parameters:
+-   resourceName – The name of the generic resource
+-   resourceValue – The amount of the generic resource
+
+Example of Usage:
+```
+   SpoutDeclarer s1 = builder.setSpout("word", new TestWordSpout(), 10);
+   s1.addResource("gpu.count", 1.0);
+```
+
+### Specifying Generic Cluster Resources
+
+A Storm administrator can specify node resource availability by modifying the _conf/storm.yaml_ file located in the Storm home directory of that node.
+```
+   supervisor.resources.map: {[type<String>] : [amount<Double>]}
+```
+Example of Usage:
+```
+   supervisor.resources.map: {"gpu.count" : 2.0}
+```
+
+
+### Generic Resources in UI
+
+![Storm Cluster UI](images/storm_ui.png)
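A hypothetical sketch of the feasibility check a generic-resource-aware scheduler performs (function and variable names here are illustrative, not Storm's internal scheduler API): a node can host a component instance only if it can supply every named resource the component requests.

```python
def can_host(component_resources, node_available):
    # Every requested generic resource (e.g. "gpu.count") must be available
    # on the node in at least the requested amount; resources the node does
    # not declare at all default to 0.0.
    return all(node_available.get(name, 0.0) >= amount
               for name, amount in component_resources.items())
```

With the example configs above, `can_host({"gpu.count": 1.0}, {"gpu.count": 2.0})` holds, while a request for three GPUs against the same node does not.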
diff --git a/docs/SECURITY.md b/docs/SECURITY.md
index 74a67ef..6e4d7e1 100644
--- a/docs/SECURITY.md
+++ b/docs/SECURITY.md
@@ -36,6 +36,7 @@
 |--------------|--------------|------------------------|--------|
 | 2181 | `storm.zookeeper.port` | Nimbus, Supervisors, and Worker processes | Zookeeper |
 | 6627 | `nimbus.thrift.port` | Storm clients, Supervisors, and UI | Nimbus |
+| 6628 | `supervisor.thrift.port` | Nimbus | Supervisors |
 | 8080 | `ui.port` | Client Web Browsers | UI |
 | 8000 | `logviewer.port` | Client Web Browsers | Logviewer |
 | 3772 | `drpc.port` | External DRPC Clients | DRPC |
diff --git a/docs/Setting-up-a-Storm-cluster.md b/docs/Setting-up-a-Storm-cluster.md
index ea05da4..b57639e 100644
--- a/docs/Setting-up-a-Storm-cluster.md
+++ b/docs/Setting-up-a-Storm-cluster.md
@@ -30,7 +30,7 @@
 Next you need to install Storm's dependencies on Nimbus and the worker machines. These are:
 
 1. Java 8+ (Apache Storm 2.x is tested through travis ci against a java 8 JDK)
-2. Python 2.6.6 (Python 3.x should work too, but is not tested as part of our CI enviornment)
+2. Python 2.7.x or Python 3.x
 
 These are the versions of the dependencies that have been tested with Storm. Storm may or may not work with different versions of Java and/or Python.
 
diff --git a/docs/Setting-up-development-environment.md b/docs/Setting-up-development-environment.md
index bfa98a2..72e3472 100644
--- a/docs/Setting-up-development-environment.md
+++ b/docs/Setting-up-development-environment.md
@@ -5,7 +5,7 @@
 ---
 This page outlines what you need to do to get a Storm development environment set up. In summary, the steps are:
 
-1. Download a [Storm release](..//downloads.html) , unpack it, and put the unpacked `bin/` directory on your PATH
+1. Download a [Storm release](../../downloads.html) , unpack it, and put the unpacked `bin/` directory on your PATH
 2. To be able to start and stop topologies on a remote cluster, put the cluster information in `~/.storm/storm.yaml`
 
 More detail on each of these steps is below.
diff --git a/docs/Trident-state.md b/docs/Trident-state.md
index ead8d86..030dd8c 100644
--- a/docs/Trident-state.md
+++ b/docs/Trident-state.md
@@ -28,7 +28,7 @@
 2. There's no overlap between batches of tuples (tuples are in one batch or another, never multiple).
 3. Every tuple is in a batch (no tuples are skipped)
 
-This is a pretty easy type of spout to understand, the stream is divided into fixed batches that never change. Storm has [an implementation of a transactional spout]({{page.git-tree-base}}/external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/trident/KafkaTridentSpoutTransactional) for Kafka.
+This is a pretty easy type of spout to understand: the stream is divided into fixed batches that never change. Storm has [an implementation of a transactional spout]({{page.git-tree-base}}/external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/trident/KafkaTridentSpoutTransactional.java) for Kafka.
 
 You might be wondering – why wouldn't you just always use a transactional spout? They're simple and easy to understand. One reason you might not use one is because they're not necessarily very fault-tolerant. For example, the way TransactionalTridentKafkaSpout works is the batch for a txid will contain tuples from all the Kafka partitions for a topic. Once a batch has been emitted, any time that batch is re-emitted in the future the exact same set of tuples must be emitted to meet the semantics of transactional spouts. Now suppose a batch is emitted from TransactionalTridentKafkaSpout, the batch fails to process, and at the same time one of the Kafka nodes goes down. You're now incapable of replaying the same batch as you did before (since the node is down and some partitions for the topic are unavailable), and processing will halt.
 
diff --git a/docs/flux.md b/docs/flux.md
index 27d28bb..000270f 100644
--- a/docs/flux.md
+++ b/docs/flux.md
@@ -47,7 +47,7 @@
 If you would like to build Flux from source and run the unit/integration tests, you will need the following installed
 on your system:
 
-* Python 2.6.x or later
+* Python 2.7.x or later
 * Node.js 0.10.x or later
 
 #### Building with unit tests enabled:
diff --git a/docs/images/storm_ui.png b/docs/images/storm_ui.png
new file mode 100644
index 0000000..45aae41
--- /dev/null
+++ b/docs/images/storm_ui.png
Binary files differ
diff --git a/docs/index.md b/docs/index.md
index df1df93..36cf63f 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -65,6 +65,7 @@
 * [CGroup Enforcement](cgroups_in_storm.html)
 * [Pacemaker reduces load on zookeeper for large clusters](Pacemaker.html)
 * [Resource Aware Scheduler](Resource_Aware_Scheduler_overview.html)
+* [Generic Resources](Generic-resources.html)
 * [Daemon Metrics/Monitoring](ClusterMetrics.html)
 * [Windows users guide](windows-users-guide.html)
 * [Classpath handling](Classpath-handling.html)
diff --git a/external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/metrics/KafkaOffsetMetric.java b/external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/metrics/KafkaOffsetMetric.java
index da84979..496e1d8 100644
--- a/external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/metrics/KafkaOffsetMetric.java
+++ b/external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/metrics/KafkaOffsetMetric.java
@@ -24,6 +24,7 @@
 import java.util.function.Supplier;
 import org.apache.kafka.clients.consumer.Consumer;
 import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.errors.RetriableException;
 import org.apache.storm.kafka.spout.internal.OffsetManager;
 import org.apache.storm.metric.api.IMetric;
 import org.slf4j.Logger;
@@ -76,8 +77,17 @@
         Map<String,TopicMetrics> topicMetricsMap = new HashMap<>();
         Set<TopicPartition> topicPartitions = offsetManagers.keySet();
 
-        Map<TopicPartition, Long> beginningOffsets = consumer.beginningOffsets(topicPartitions);
-        Map<TopicPartition, Long> endOffsets = consumer.endOffsets(topicPartitions);
+        Map<TopicPartition, Long> beginningOffsets;
+        Map<TopicPartition, Long> endOffsets;
+
+        try {
+            beginningOffsets = consumer.beginningOffsets(topicPartitions);
+            endOffsets = consumer.endOffsets(topicPartitions);
+        } catch (RetriableException e) {
+            LOG.warn("Failed to get offsets from Kafka! Will retry on next metrics tick.", e);
+            return null;
+        }
+
         //map to hold partition level and topic level metrics
         Map<String, Long> result = new HashMap<>();
 
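The change above can be illustrated with a small Python sketch (names are illustrative; `TransientError` stands in for Kafka's `RetriableException`): a transient broker failure makes the metric return `None` for this tick rather than propagate an exception out of the metrics path.

```python
class TransientError(Exception):
    """Stand-in for org.apache.kafka.common.errors.RetriableException."""

def get_offsets_or_none(fetch_beginning, fetch_end, partitions):
    # On a transient broker error, skip this metrics tick entirely; the
    # next tick retries with fresh beginningOffsets/endOffsets calls.
    try:
        return fetch_beginning(partitions), fetch_end(partitions)
    except TransientError:
        return None
```

Returning `None` here mirrors the Java code's early `return null` after logging the warning.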
diff --git a/external/storm-kafka-client/src/test/java/org/apache/storm/kafka/spout/KafkaSpoutSingleTopicTest.java b/external/storm-kafka-client/src/test/java/org/apache/storm/kafka/spout/KafkaSpoutSingleTopicTest.java
index 512d274..d7f563f 100644
--- a/external/storm-kafka-client/src/test/java/org/apache/storm/kafka/spout/KafkaSpoutSingleTopicTest.java
+++ b/external/storm-kafka-client/src/test/java/org/apache/storm/kafka/spout/KafkaSpoutSingleTopicTest.java
@@ -21,17 +21,14 @@
 import static org.hamcrest.CoreMatchers.is;
 import static org.hamcrest.MatcherAssert.assertThat;
 import static org.junit.jupiter.api.Assertions.assertEquals;
+import static org.junit.jupiter.api.Assertions.assertNull;
 import static org.mockito.ArgumentMatchers.any;
 import static org.mockito.ArgumentMatchers.anyList;
 import static org.mockito.ArgumentMatchers.anyListOf;
 import static org.mockito.ArgumentMatchers.anyObject;
 import static org.mockito.ArgumentMatchers.anyString;
 import static org.mockito.ArgumentMatchers.eq;
-import static org.mockito.Mockito.clearInvocations;
-import static org.mockito.Mockito.never;
-import static org.mockito.Mockito.reset;
-import static org.mockito.Mockito.times;
-import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.*;
 
 import java.util.HashSet;
 import java.util.List;
@@ -39,8 +36,10 @@
 import java.util.Set;
 import java.util.regex.Pattern;
 import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.consumer.KafkaConsumer;
 import org.apache.kafka.clients.consumer.OffsetAndMetadata;
 import org.apache.kafka.common.TopicPartition;
+import org.apache.kafka.common.errors.TimeoutException;
 import org.apache.storm.kafka.spout.config.builder.SingleTopicKafkaSpoutConfiguration;
 import org.apache.storm.tuple.Values;
 import org.apache.storm.utils.Time;
@@ -428,4 +427,16 @@
         assertEquals(offsetMetric.get(SingleTopicKafkaSpoutConfiguration.TOPIC+"/totalLatestCompletedOffset").longValue(), 10);
         assertEquals(offsetMetric.get(SingleTopicKafkaSpoutConfiguration.TOPIC+"/totalSpoutLag").longValue(), 0);
     }
+
+    @Test
+    public void testOffsetMetricsReturnsNullWhenRetriableExceptionThrown() throws Exception {
+        final int messageCount = 10;
+        prepareSpout(messageCount);
+
+        // Ensure a timeout exception results in the return value being null
+        when(getKafkaConsumer().beginningOffsets(anyCollection())).thenThrow(TimeoutException.class);
+
+        Map<String, Long> offsetMetric = (Map<String, Long>) spout.getKafkaOffsetMetric().getValueAndReset();
+        assertNull(offsetMetric);
+    }
 }
diff --git a/flux/flux-core/src/main/java/org/apache/storm/flux/parser/FluxParser.java b/flux/flux-core/src/main/java/org/apache/storm/flux/parser/FluxParser.java
index 8299c14..50570e1 100644
--- a/flux/flux-core/src/main/java/org/apache/storm/flux/parser/FluxParser.java
+++ b/flux/flux-core/src/main/java/org/apache/storm/flux/parser/FluxParser.java
@@ -18,12 +18,17 @@
 
 package org.apache.storm.flux.parser;
 
-import java.io.ByteArrayOutputStream;
+import java.io.BufferedReader;
 import java.io.FileInputStream;
 import java.io.IOException;
 import java.io.InputStream;
+import java.io.InputStreamReader;
 import java.util.Map;
+import java.util.Optional;
 import java.util.Properties;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+import java.util.stream.Collectors;
 
 import org.apache.storm.flux.model.BoltDef;
 import org.apache.storm.flux.model.IncludeDef;
@@ -40,17 +45,20 @@
  */
 public class FluxParser {
     private static final Logger LOG = LoggerFactory.getLogger(FluxParser.class);
+    private static final Pattern propertyPattern =
+            Pattern.compile(".*\\$\\{(?<var>ENV-(?<envVar>.+)|(?<list>.+)\\[(?<listIndex>\\d+)]|.+)}.*");
 
     private FluxParser() {
     }
 
     /**
      * Parse a flux topology definition.
-     * @param inputFile source YAML file
-     * @param dumpYaml if true, dump the parsed YAML to stdout
+     *
+     * @param inputFile       source YAML file
+     * @param dumpYaml        if true, dump the parsed YAML to stdout
      * @param processIncludes whether or not to process includes
-     * @param properties properties file for variable substitution
-     * @param envSub whether or not to perform environment variable substitution
+     * @param properties      properties file for variable substitution
+     * @param envSub          whether or not to perform environment variable substitution
      * @return resulting topology definition
      * @throws IOException if there is a problem reading file(s)
      */
@@ -65,11 +73,12 @@
 
     /**
      * Parse a flux topology definition from a classpath resource..
-     * @param resource YAML resource
-     * @param dumpYaml if true, dump the parsed YAML to stdout
+     *
+     * @param resource        YAML resource
+     * @param dumpYaml        if true, dump the parsed YAML to stdout
      * @param processIncludes whether or not to process includes
-     * @param properties properties file for variable substitution
-     * @param envSub whether or not to perform environment variable substitution
+     * @param properties      properties file for variable substitution
+     * @param envSub          whether or not to perform environment variable substitution
      * @return resulting topology definition
      * @throws IOException if there is a problem reading file(s)
      */
@@ -84,11 +93,12 @@
 
     /**
      * Parse a flux topology definition.
-     * @param inputStream InputStream representation of YAML file
-     * @param dumpYaml if true, dump the parsed YAML to stdout
+     *
+     * @param inputStream     InputStream representation of YAML file
+     * @param dumpYaml        if true, dump the parsed YAML to stdout
      * @param processIncludes whether or not to process includes
-     * @param properties properties file for variable substitution
-     * @param envSub whether or not to perform environment variable substitution
+     * @param properties      properties file for variable substitution
+     * @param envSub          whether or not to perform environment variable substitution
      * @return resulting topology definition
      * @throws IOException if there is a problem reading file(s)
      */
@@ -116,10 +126,11 @@
 
     /**
      * Parse filter properties file.
+     *
      * @param propertiesFile properties file for variable substitution
-     * @param resource whether or not to load properties file from classpath
+     * @param resource       whether or not to load properties file from classpath
      * @return resulting filter properties
-     * @throws IOException  if there is a problem reading file
+     * @throws IOException if there is a problem reading file
      */
     public static Properties parseProperties(String propertiesFile, boolean resource) throws IOException {
         Properties properties = null;
@@ -140,36 +151,43 @@
     }
 
     private static TopologyDef loadYaml(Yaml yaml, InputStream in, Properties properties, boolean envSubstitution) throws IOException {
-        ByteArrayOutputStream bos = new ByteArrayOutputStream();
         LOG.info("loading YAML from input stream...");
-        int b = -1;
-        while ((b = in.read()) != -1) {
-            bos.write(b);
-        }
+        try (BufferedReader reader = new BufferedReader(new InputStreamReader(in))) {
+            String conf = reader.lines().map(line -> {
+                Matcher m = propertyPattern.matcher(line);
+                return m.find()
+                        ? getPropertyReplacement(properties, m, envSubstitution)
+                        .map(propValue -> line.replace("${" + m.group("var") + "}", propValue))
+                        .orElseGet(() -> {
+                            LOG.warn("Could not find replacement for property: " + m.group("var"));
+                            return line;
+                        })
+                        : line;
+            }).collect(Collectors.joining(System.lineSeparator()));
 
-        // TODO substitution implementation is not exactly efficient or kind to memory...
-        String str = bos.toString();
-        // properties file substitution
-        if (properties != null) {
-            LOG.info("Performing property substitution.");
-            for (Object key : properties.keySet()) {
-                str = str.replace("${" + key + "}", properties.getProperty((String)key));
-            }
-        } else {
-            LOG.info("Not performing property substitution.");
+            return (TopologyDef) yaml.load(conf);
         }
+    }
 
-        // environment variable substitution
-        if (envSubstitution) {
-            LOG.info("Performing environment variable substitution...");
-            Map<String, String> envs = System.getenv();
-            for (String key : envs.keySet()) {
-                str = str.replace("${ENV-" + key + "}", envs.get(key));
-            }
+    private static Optional<String> getPropertyReplacement(Properties properties, Matcher match, boolean envSubstitution) {
+        if (match.group("listIndex") != null) {
+            String prop = properties.getProperty(match.group("list"));
+            return Optional.of(parseListAndExtractElem(prop, match.group("listIndex")));
+        } else if (envSubstitution && match.group("envVar") != null) {
+            String envVar = System.getenv().get(match.group("envVar"));
+            return Optional.ofNullable(envVar);
         } else {
-            LOG.info("Not performing environment variable substitution.");
+            return Optional.ofNullable(properties.getProperty(match.group("var")));
         }
-        return (TopologyDef) yaml.load(str);
+    }
+
+    private static String parseListAndExtractElem(String strList, String index) {
+        String[] listProp = strList.substring(1, strList.length() - 1).split(",");
+        String listElem = listProp[Integer.parseInt(index)];
+
+        // remove whitespaces and double quotes from beginning and end of a given string
+        String trimmed = listElem.trim();
+        return trimmed.substring(1, trimmed.length() - 1);
     }
 
     private static void dumpYaml(TopologyDef topology, Yaml yaml) {
@@ -191,14 +209,15 @@
 
     /**
      * Process includes contained within a yaml file.
+     *
      * @param yaml        the yaml parser for parsing the include file(s)
      * @param topologyDef the topology definition containing (possibly zero) includes
-     * @param properties properties file for variable substitution
-     * @param envSub whether or not to perform environment variable substitution
+     * @param properties  properties file for variable substitution
+     * @param envSub      whether or not to perform environment variable substitution
      * @return The TopologyDef with includes resolved.
      */
     private static TopologyDef processIncludes(Yaml yaml, TopologyDef topologyDef, Properties properties, boolean envSub)
-        throws IOException {
+            throws IOException {
         //TODO support multiple levels of includes
         if (topologyDef.getIncludes() != null) {
             for (IncludeDef include : topologyDef.getIncludes()) {
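The new `propertyPattern` drives all three substitution forms (plain property, `ENV-` variable, and list element) in a single pass over each line. A rough Python transliteration of that behavior (the regex is translated from the Java pattern above; the `substitute` helper is illustrative, not Flux's actual API):

```python
import re

# Transliteration of FluxParser.propertyPattern: matches ${prop},
# ${ENV-VAR}, and ${prop[index]} references on a line.
PATTERN = re.compile(
    r".*\$\{(?P<var>ENV-(?P<envVar>.+)|(?P<list>.+)\[(?P<listIndex>\d+)\]|.+)}.*")

def substitute(line, props, env):
    m = PATTERN.match(line)
    if not m:
        return line
    if m.group("listIndex") is not None:
        # Parse a list-valued property like '["a", "b"]' and pick one element,
        # trimming whitespace and surrounding double quotes.
        raw = props[m.group("list")].strip()[1:-1]
        value = raw.split(",")[int(m.group("listIndex"))].strip().strip('"')
    elif m.group("envVar") is not None:
        value = env.get(m.group("envVar"))
    else:
        value = props.get(m.group("var"))
    if value is None:
        return line  # no replacement found; keep the line unchanged, as loadYaml warns
    return line.replace("${" + m.group("var") + "}", value)
```

This mirrors `getPropertyReplacement`/`parseListAndExtractElem`: the list branch takes precedence, then environment substitution, then a plain properties lookup.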
diff --git a/flux/flux-core/src/test/java/org/apache/storm/flux/TCKTest.java b/flux/flux-core/src/test/java/org/apache/storm/flux/TCKTest.java
index 90613c9..275a720 100644
--- a/flux/flux-core/src/test/java/org/apache/storm/flux/TCKTest.java
+++ b/flux/flux-core/src/test/java/org/apache/storm/flux/TCKTest.java
@@ -275,6 +275,11 @@
                Collections.singletonList("A string list"),
                is(context.getTopologyDef().getConfig().get("list.property.target")));
 
+        //Test substitution where the target type is a List element
+        assertThat("List element property is not replaced by the expected value",
+                "A string list",
+                is(context.getTopologyDef().getConfig().get("list.element.property.target")));
+
     }
     
     @Test
diff --git a/flux/flux-core/src/test/resources/configs/substitution-test.yaml b/flux/flux-core/src/test/resources/configs/substitution-test.yaml
index 9707936..67ac92a 100644
--- a/flux/flux-core/src/test/resources/configs/substitution-test.yaml
+++ b/flux/flux-core/src/test/resources/configs/substitution-test.yaml
@@ -45,6 +45,8 @@
   test.env.value: "${ENV-PATH}"
   # test variable substitution for list type
   list.property.target: ${a.list.property}
+  # test variable substitution for list element
+  list.element.property.target: ${a.list.property[0]}
 
 # spout definitions
 spouts:
diff --git a/pom.xml b/pom.xml
index 0601a84..5ee3d29 100644
--- a/pom.xml
+++ b/pom.xml
@@ -236,6 +236,15 @@
             </roles>
             <timezone>-6</timezone>
         </developer>
+        <developer>
+            <id>agresch</id>
+            <name>Aaron Gresch</name>
+            <email>agresch@gmail.com</email>
+            <roles>
+                <role>Committer</role>
+            </roles>
+            <timezone>-6</timezone>
+        </developer>
 
     </developers>
 
@@ -296,7 +305,7 @@
         <kryo.version>3.0.3</kryo.version>
         <servlet.version>3.1.0</servlet.version>
         <joda-time.version>2.3</joda-time.version>
-        <thrift.version>0.12.0</thrift.version>
+        <thrift.version>0.13.0</thrift.version>
         <junit.jupiter.version>5.5.1</junit.jupiter.version>
         <surefire.version>2.22.1</surefire.version>
         <awaitility.version>3.1.0</awaitility.version>
diff --git a/storm-client/src/jvm/org/apache/storm/Config.java b/storm-client/src/jvm/org/apache/storm/Config.java
index 434ec9c..4bcb2e3 100644
--- a/storm-client/src/jvm/org/apache/storm/Config.java
+++ b/storm-client/src/jvm/org/apache/storm/Config.java
@@ -1052,12 +1052,30 @@
     public static final String STORM_THRIFT_TRANSPORT_PLUGIN = "storm.thrift.transport";
     /**
      * How long a worker can go without heartbeating before the supervisor tries to restart the worker process.
+     * Can be overridden by {@link #TOPOLOGY_WORKER_TIMEOUT_SECS}, if set.
      */
     @IsInteger
     @IsPositiveNumber
     @NotNull
     public static final String SUPERVISOR_WORKER_TIMEOUT_SECS = "supervisor.worker.timeout.secs";
     /**
+     * Enforce maximum on {@link #TOPOLOGY_WORKER_TIMEOUT_SECS}.
+     */
+    @IsInteger
+    @IsPositiveNumber
+    @NotNull
+    public static final String WORKER_MAX_TIMEOUT_SECS = "worker.max.timeout.secs";
+    /**
+     * Topology configurable worker heartbeat timeout before the supervisor tries to restart the worker process.
+     * Maximum value constrained by {@link #WORKER_MAX_TIMEOUT_SECS}.
+     * When topology timeout is greater, the following configs are effectively overridden:
+     * {@link #SUPERVISOR_WORKER_TIMEOUT_SECS}, SUPERVISOR_WORKER_START_TIMEOUT_SECS, NIMBUS_TASK_TIMEOUT_SECS and NIMBUS_TASK_LAUNCH_SECS.
+     */
+    @IsInteger
+    @IsPositiveNumber
+    @NotNull
+    public static final String TOPOLOGY_WORKER_TIMEOUT_SECS = "topology.worker.timeout.secs";
+    /**
      * How many seconds to allow for graceful worker shutdown when killing workers before resorting to force kill.
      * If a worker fails to shut down gracefully within this delay, it will either suicide or be forcibly killed by the supervisor.
      */
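The interaction between these settings can be sketched as a simplified model (this is a reading of the javadoc above, not Storm's actual resolution code): the topology-level value only takes effect when it is greater than the supervisor default, and is always capped by `worker.max.timeout.secs`.

```python
def effective_worker_timeout(supervisor_timeout, topology_timeout, worker_max_timeout):
    # No topology override: the supervisor default applies unchanged.
    if topology_timeout is None:
        return supervisor_timeout
    # The topology value is capped by the cluster-wide maximum, and only
    # overrides the supervisor default when it is greater.
    return max(supervisor_timeout, min(topology_timeout, worker_max_timeout))
```

With the shipped defaults (`supervisor.worker.timeout.secs: 30`-ish, `worker.max.timeout.secs: 600`), a topology asking for 1200 seconds would be clamped to 600.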
diff --git a/storm-client/src/jvm/org/apache/storm/Constants.java b/storm-client/src/jvm/org/apache/storm/Constants.java
index 57af8d1..7a1c518 100644
--- a/storm-client/src/jvm/org/apache/storm/Constants.java
+++ b/storm-client/src/jvm/org/apache/storm/Constants.java
@@ -55,5 +55,7 @@
     public static final String COMMON_ONHEAP_MEMORY_RESOURCE_NAME = "onheap.memory.mb";
     public static final String COMMON_OFFHEAP_MEMORY_RESOURCE_NAME = "offheap.memory.mb";
     public static final String COMMON_TOTAL_MEMORY_RESOURCE_NAME = "memory.mb";
+
+    public static final String NIMBUS_SEND_ASSIGNMENT_EXCEPTIONS = "nimbus:num-send-assignment-exceptions";
 }
     
diff --git a/storm-client/src/jvm/org/apache/storm/daemon/Task.java b/storm-client/src/jvm/org/apache/storm/daemon/Task.java
index 2f2d53b..59f2547 100644
--- a/storm-client/src/jvm/org/apache/storm/daemon/Task.java
+++ b/storm-client/src/jvm/org/apache/storm/daemon/Task.java
@@ -206,8 +206,8 @@
     public void sendUnanchored(String stream, List<Object> values, ExecutorTransfer transfer, Queue<AddressedTuple> pendingEmits) {
         Tuple tuple = getTuple(stream, values);
         List<Integer> tasks = getOutgoingTasks(stream, values);
-        for (Integer t : tasks) {
-            AddressedTuple addressedTuple = new AddressedTuple(t, tuple);
+        for (int i = 0; i < tasks.size(); i++) {
+            AddressedTuple addressedTuple = new AddressedTuple(tasks.get(i), tuple);
             transfer.tryTransfer(addressedTuple, pendingEmits);
         }
     }
diff --git a/storm-client/src/jvm/org/apache/storm/daemon/worker/BackPressureTracker.java b/storm-client/src/jvm/org/apache/storm/daemon/worker/BackPressureTracker.java
index dae5cca..3c590e5 100644
--- a/storm-client/src/jvm/org/apache/storm/daemon/worker/BackPressureTracker.java
+++ b/storm-client/src/jvm/org/apache/storm/daemon/worker/BackPressureTracker.java
@@ -49,8 +49,12 @@
                 entry -> new BackpressureState(entry.getValue())));
     }
 
-    private void recordNoBackPressure(Integer taskId) {
-        tasks.get(taskId).backpressure.set(false);
+    public BackpressureState getBackpressureState(Integer taskId) {
+        return tasks.get(taskId);
+    }
+
+    private void recordNoBackPressure(BackpressureState state) {
+        state.backpressure.set(false);
     }
 
     /**
@@ -60,8 +64,8 @@
      *
      * @return true if an update was recorded, false if taskId is already under BP
      */
-    public boolean recordBackPressure(Integer taskId) {
-        return tasks.get(taskId).backpressure.getAndSet(true) == false;
+    public boolean recordBackPressure(BackpressureState state) {
+        return state.backpressure.getAndSet(true) == false;
     }
 
     // returns true if there was a change in the BP situation
@@ -71,7 +75,7 @@
         for (Entry<Integer, BackpressureState> entry : tasks.entrySet()) {
             BackpressureState state = entry.getValue();
             if (state.backpressure.get() && state.queue.isEmptyOverflow()) {
-                recordNoBackPressure(entry.getKey());
+                recordNoBackPressure(state);
                 changed = true;
             }
         }
@@ -95,11 +99,21 @@
         }
         return new BackPressureStatus(workerId, bpTasks, nonBpTasks);
     }
+
+    public int getLastOverflowCount(BackpressureState state) {
+        return state.lastOverflowCount;
+    }
+
+    public void setLastOverflowCount(BackpressureState state, int value) {
+        state.lastOverflowCount = value;
+    }
 
-    private static class BackpressureState {
+    public static class BackpressureState {
         private final JCQueue queue;
         //No task is under backpressure initially
         private final AtomicBoolean backpressure = new AtomicBoolean(false);
+        //The overflow count last time BP status was sent
+        private int lastOverflowCount = 0;
 
         BackpressureState(JCQueue queue) {
             this.queue = queue;
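The refactor above replaces per-call `tasks.get(taskId)` lookups with a `BackpressureState` handle fetched once and passed around, and uses `AtomicBoolean.getAndSet` so only the transition into backpressure reports `true`. A minimal standalone sketch of that pattern (class and method names mirror the diff, but the harness itself is illustrative, not Storm code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;

class BackpressureSketch {
    static class State {
        final AtomicBoolean backpressure = new AtomicBoolean(false);
        int lastOverflowCount = 0; // overflow size when BP status was last sent
    }

    private final Map<Integer, State> tasks = new ConcurrentHashMap<>();

    /** Fetch the per-task state once; callers reuse the handle afterwards. */
    State getState(int taskId) {
        return tasks.computeIfAbsent(taskId, id -> new State());
    }

    /** Returns true only on the transition from "no BP" to "under BP". */
    boolean recordBackPressure(State s) {
        return !s.backpressure.getAndSet(true);
    }

    public static void main(String[] args) {
        BackpressureSketch t = new BackpressureSketch();
        State s = t.getState(7);                     // single map lookup
        System.out.println(t.recordBackPressure(s)); // true: first detection
        System.out.println(t.recordBackPressure(s)); // false: already under BP
    }
}
```

Because `getAndSet` is atomic, concurrent callers racing on the same task see exactly one `true`, so the BP status broadcast fires once per transition.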
diff --git a/storm-client/src/jvm/org/apache/storm/daemon/worker/WorkerState.java b/storm-client/src/jvm/org/apache/storm/daemon/worker/WorkerState.java
index f380769..eaab4e9 100644
--- a/storm-client/src/jvm/org/apache/storm/daemon/worker/WorkerState.java
+++ b/storm-client/src/jvm/org/apache/storm/daemon/worker/WorkerState.java
@@ -42,6 +42,7 @@
 import org.apache.storm.cluster.VersionedData;
 import org.apache.storm.daemon.StormCommon;
 import org.apache.storm.daemon.supervisor.AdvancedFSOps;
+import org.apache.storm.daemon.worker.BackPressureTracker.BackpressureState;
 import org.apache.storm.executor.IRunningExecutor;
 import org.apache.storm.generated.Assignment;
 import org.apache.storm.generated.DebugOptions;
@@ -88,6 +89,7 @@
 
     private static final Logger LOG = LoggerFactory.getLogger(WorkerState.class);
     private static final long LOAD_REFRESH_INTERVAL_MS = 5000L;
+    private static final int RESEND_BACKPRESSURE_SIZE = 10000;
     private static long dropCount = 0;
     final Map<String, Object> conf;
     final IContext mqContext;
@@ -533,8 +535,6 @@
     // Receives msgs from remote workers and feeds them to local executors. If any receiving local executor is under Back Pressure,
     // informs other workers about back pressure situation. Runs in the NettyWorker thread.
     private void transferLocalBatch(ArrayList<AddressedTuple> tupleBatch) {
-        int lastOverflowCount = 0; // overflowQ size at the time the last BPStatus was sent
-
         for (int i = 0; i < tupleBatch.size(); i++) {
             AddressedTuple tuple = tupleBatch.get(i);
             JCQueue queue = taskToExecutorQueue.get(tuple.dest);
@@ -548,16 +548,18 @@
 
             // 2- BP detected (i.e MainQ is full). So try adding to overflow
             int currOverflowCount = queue.getOverflowCount();
-            if (bpTracker.recordBackPressure(tuple.dest)) {
+            // fetch the BP state once so we avoid repeated map lookups
+            BackpressureState bpState = bpTracker.getBackpressureState(tuple.dest);
+            if (bpTracker.recordBackPressure(bpState)) {
                 receiver.sendBackPressureStatus(bpTracker.getCurrStatus());
-                lastOverflowCount = currOverflowCount;
+                bpTracker.setLastOverflowCount(bpState, currOverflowCount);
             } else {
 
-                if (currOverflowCount - lastOverflowCount > 10000) {
+                if (currOverflowCount - bpTracker.getLastOverflowCount(bpState) > RESEND_BACKPRESSURE_SIZE) {
                     // resend BP status, in case prev notification was missed or reordered
                     BackPressureStatus bpStatus = bpTracker.getCurrStatus();
                     receiver.sendBackPressureStatus(bpStatus);
-                    lastOverflowCount = currOverflowCount;
+                    bpTracker.setLastOverflowCount(bpState, currOverflowCount);
                     LOG.debug("Re-sent BackPressure Status. OverflowCount = {}, BP Status ID = {}. ", currOverflowCount, bpStatus.id);
                 }
             }
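The `WorkerState` change moves `lastOverflowCount` from a local variable (reset on every batch) into the per-task `BackpressureState`, so the "resend BP status after the overflow grows by `RESEND_BACKPRESSURE_SIZE`" rule is tracked per task across batches. A hedged sketch isolating just that threshold check (names are illustrative; the real logic lives inside `transferLocalBatch`):

```java
class ResendThresholdSketch {
    static final int RESEND_BACKPRESSURE_SIZE = 10_000;

    static class State {
        int lastOverflowCount = 0; // overflow size when BP status was last sent
    }

    /**
     * Returns true if a BP status message should be re-sent now, i.e. the
     * overflow queue has grown by more than the threshold since the last send.
     */
    static boolean shouldResend(State s, int currOverflowCount) {
        if (currOverflowCount - s.lastOverflowCount > RESEND_BACKPRESSURE_SIZE) {
            s.lastOverflowCount = currOverflowCount; // remember the send point
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        State s = new State();
        System.out.println(shouldResend(s, 5_000));  // false: below threshold
        System.out.println(shouldResend(s, 10_001)); // true: grew past 10000
        System.out.println(shouldResend(s, 15_000)); // false: only +4999 since last send
    }
}
```

Persisting the counter per task is what makes the resend a safety net for a lost or reordered notification: without it, every new batch restarted the count at zero and could resend far too eagerly or not at all.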
diff --git a/storm-client/src/jvm/org/apache/storm/generated/AccessControl.java b/storm-client/src/jvm/org/apache/storm/generated/AccessControl.java
index 58b166b..1077051 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/AccessControl.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/AccessControl.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class AccessControl implements org.apache.storm.thrift.TBase<AccessControl, AccessControl._Fields>, java.io.Serializable, Cloneable, Comparable<AccessControl> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("AccessControl");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/AccessControlType.java b/storm-client/src/jvm/org/apache/storm/generated/AccessControlType.java
index 9a32f8b..3d2415d 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/AccessControlType.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/AccessControlType.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public enum AccessControlType implements org.apache.storm.thrift.TEnum {
   OTHER(1),
   USER(2);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/AlreadyAliveException.java b/storm-client/src/jvm/org/apache/storm/generated/AlreadyAliveException.java
index 02f3174..9d57361 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/AlreadyAliveException.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/AlreadyAliveException.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class AlreadyAliveException extends org.apache.storm.thrift.TException implements org.apache.storm.thrift.TBase<AlreadyAliveException, AlreadyAliveException._Fields>, java.io.Serializable, Cloneable, Comparable<AlreadyAliveException> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("AlreadyAliveException");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/Assignment.java b/storm-client/src/jvm/org/apache/storm/generated/Assignment.java
index bc11340..d62618c 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/Assignment.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/Assignment.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class Assignment implements org.apache.storm.thrift.TBase<Assignment, Assignment._Fields>, java.io.Serializable, Cloneable, Comparable<Assignment> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("Assignment");
 
@@ -942,15 +942,15 @@
           case 2: // NODE_HOST
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map686 = iprot.readMapBegin();
-                struct.node_host = new java.util.HashMap<java.lang.String,java.lang.String>(2*_map686.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key687;
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _val688;
-                for (int _i689 = 0; _i689 < _map686.size; ++_i689)
+                org.apache.storm.thrift.protocol.TMap _map736 = iprot.readMapBegin();
+                struct.node_host = new java.util.HashMap<java.lang.String,java.lang.String>(2*_map736.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key737;
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _val738;
+                for (int _i739 = 0; _i739 < _map736.size; ++_i739)
                 {
-                  _key687 = iprot.readString();
-                  _val688 = iprot.readString();
-                  struct.node_host.put(_key687, _val688);
+                  _key737 = iprot.readString();
+                  _val738 = iprot.readString();
+                  struct.node_host.put(_key737, _val738);
                 }
                 iprot.readMapEnd();
               }
@@ -962,26 +962,26 @@
           case 3: // EXECUTOR_NODE_PORT
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map690 = iprot.readMapBegin();
-                struct.executor_node_port = new java.util.HashMap<java.util.List<java.lang.Long>,NodeInfo>(2*_map690.size);
-                @org.apache.storm.thrift.annotation.Nullable java.util.List<java.lang.Long> _key691;
-                @org.apache.storm.thrift.annotation.Nullable NodeInfo _val692;
-                for (int _i693 = 0; _i693 < _map690.size; ++_i693)
+                org.apache.storm.thrift.protocol.TMap _map740 = iprot.readMapBegin();
+                struct.executor_node_port = new java.util.HashMap<java.util.List<java.lang.Long>,NodeInfo>(2*_map740.size);
+                @org.apache.storm.thrift.annotation.Nullable java.util.List<java.lang.Long> _key741;
+                @org.apache.storm.thrift.annotation.Nullable NodeInfo _val742;
+                for (int _i743 = 0; _i743 < _map740.size; ++_i743)
                 {
                   {
-                    org.apache.storm.thrift.protocol.TList _list694 = iprot.readListBegin();
-                    _key691 = new java.util.ArrayList<java.lang.Long>(_list694.size);
-                    long _elem695;
-                    for (int _i696 = 0; _i696 < _list694.size; ++_i696)
+                    org.apache.storm.thrift.protocol.TList _list744 = iprot.readListBegin();
+                    _key741 = new java.util.ArrayList<java.lang.Long>(_list744.size);
+                    long _elem745;
+                    for (int _i746 = 0; _i746 < _list744.size; ++_i746)
                     {
-                      _elem695 = iprot.readI64();
-                      _key691.add(_elem695);
+                      _elem745 = iprot.readI64();
+                      _key741.add(_elem745);
                     }
                     iprot.readListEnd();
                   }
-                  _val692 = new NodeInfo();
-                  _val692.read(iprot);
-                  struct.executor_node_port.put(_key691, _val692);
+                  _val742 = new NodeInfo();
+                  _val742.read(iprot);
+                  struct.executor_node_port.put(_key741, _val742);
                 }
                 iprot.readMapEnd();
               }
@@ -993,25 +993,25 @@
           case 4: // EXECUTOR_START_TIME_SECS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map697 = iprot.readMapBegin();
-                struct.executor_start_time_secs = new java.util.HashMap<java.util.List<java.lang.Long>,java.lang.Long>(2*_map697.size);
-                @org.apache.storm.thrift.annotation.Nullable java.util.List<java.lang.Long> _key698;
-                long _val699;
-                for (int _i700 = 0; _i700 < _map697.size; ++_i700)
+                org.apache.storm.thrift.protocol.TMap _map747 = iprot.readMapBegin();
+                struct.executor_start_time_secs = new java.util.HashMap<java.util.List<java.lang.Long>,java.lang.Long>(2*_map747.size);
+                @org.apache.storm.thrift.annotation.Nullable java.util.List<java.lang.Long> _key748;
+                long _val749;
+                for (int _i750 = 0; _i750 < _map747.size; ++_i750)
                 {
                   {
-                    org.apache.storm.thrift.protocol.TList _list701 = iprot.readListBegin();
-                    _key698 = new java.util.ArrayList<java.lang.Long>(_list701.size);
-                    long _elem702;
-                    for (int _i703 = 0; _i703 < _list701.size; ++_i703)
+                    org.apache.storm.thrift.protocol.TList _list751 = iprot.readListBegin();
+                    _key748 = new java.util.ArrayList<java.lang.Long>(_list751.size);
+                    long _elem752;
+                    for (int _i753 = 0; _i753 < _list751.size; ++_i753)
                     {
-                      _elem702 = iprot.readI64();
-                      _key698.add(_elem702);
+                      _elem752 = iprot.readI64();
+                      _key748.add(_elem752);
                     }
                     iprot.readListEnd();
                   }
-                  _val699 = iprot.readI64();
-                  struct.executor_start_time_secs.put(_key698, _val699);
+                  _val749 = iprot.readI64();
+                  struct.executor_start_time_secs.put(_key748, _val749);
                 }
                 iprot.readMapEnd();
               }
@@ -1023,17 +1023,17 @@
           case 5: // WORKER_RESOURCES
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map704 = iprot.readMapBegin();
-                struct.worker_resources = new java.util.HashMap<NodeInfo,WorkerResources>(2*_map704.size);
-                @org.apache.storm.thrift.annotation.Nullable NodeInfo _key705;
-                @org.apache.storm.thrift.annotation.Nullable WorkerResources _val706;
-                for (int _i707 = 0; _i707 < _map704.size; ++_i707)
+                org.apache.storm.thrift.protocol.TMap _map754 = iprot.readMapBegin();
+                struct.worker_resources = new java.util.HashMap<NodeInfo,WorkerResources>(2*_map754.size);
+                @org.apache.storm.thrift.annotation.Nullable NodeInfo _key755;
+                @org.apache.storm.thrift.annotation.Nullable WorkerResources _val756;
+                for (int _i757 = 0; _i757 < _map754.size; ++_i757)
                 {
-                  _key705 = new NodeInfo();
-                  _key705.read(iprot);
-                  _val706 = new WorkerResources();
-                  _val706.read(iprot);
-                  struct.worker_resources.put(_key705, _val706);
+                  _key755 = new NodeInfo();
+                  _key755.read(iprot);
+                  _val756 = new WorkerResources();
+                  _val756.read(iprot);
+                  struct.worker_resources.put(_key755, _val756);
                 }
                 iprot.readMapEnd();
               }
@@ -1045,15 +1045,15 @@
           case 6: // TOTAL_SHARED_OFF_HEAP
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map708 = iprot.readMapBegin();
-                struct.total_shared_off_heap = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map708.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key709;
-                double _val710;
-                for (int _i711 = 0; _i711 < _map708.size; ++_i711)
+                org.apache.storm.thrift.protocol.TMap _map758 = iprot.readMapBegin();
+                struct.total_shared_off_heap = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map758.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key759;
+                double _val760;
+                for (int _i761 = 0; _i761 < _map758.size; ++_i761)
                 {
-                  _key709 = iprot.readString();
-                  _val710 = iprot.readDouble();
-                  struct.total_shared_off_heap.put(_key709, _val710);
+                  _key759 = iprot.readString();
+                  _val760 = iprot.readDouble();
+                  struct.total_shared_off_heap.put(_key759, _val760);
                 }
                 iprot.readMapEnd();
               }
@@ -1093,10 +1093,10 @@
           oprot.writeFieldBegin(NODE_HOST_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRING, struct.node_host.size()));
-            for (java.util.Map.Entry<java.lang.String, java.lang.String> _iter712 : struct.node_host.entrySet())
+            for (java.util.Map.Entry<java.lang.String, java.lang.String> _iter762 : struct.node_host.entrySet())
             {
-              oprot.writeString(_iter712.getKey());
-              oprot.writeString(_iter712.getValue());
+              oprot.writeString(_iter762.getKey());
+              oprot.writeString(_iter762.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -1108,17 +1108,17 @@
           oprot.writeFieldBegin(EXECUTOR_NODE_PORT_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.LIST, org.apache.storm.thrift.protocol.TType.STRUCT, struct.executor_node_port.size()));
-            for (java.util.Map.Entry<java.util.List<java.lang.Long>, NodeInfo> _iter713 : struct.executor_node_port.entrySet())
+            for (java.util.Map.Entry<java.util.List<java.lang.Long>, NodeInfo> _iter763 : struct.executor_node_port.entrySet())
             {
               {
-                oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.I64, _iter713.getKey().size()));
-                for (long _iter714 : _iter713.getKey())
+                oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.I64, _iter763.getKey().size()));
+                for (long _iter764 : _iter763.getKey())
                 {
-                  oprot.writeI64(_iter714);
+                  oprot.writeI64(_iter764);
                 }
                 oprot.writeListEnd();
               }
-              _iter713.getValue().write(oprot);
+              _iter763.getValue().write(oprot);
             }
             oprot.writeMapEnd();
           }
@@ -1130,17 +1130,17 @@
           oprot.writeFieldBegin(EXECUTOR_START_TIME_SECS_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.LIST, org.apache.storm.thrift.protocol.TType.I64, struct.executor_start_time_secs.size()));
-            for (java.util.Map.Entry<java.util.List<java.lang.Long>, java.lang.Long> _iter715 : struct.executor_start_time_secs.entrySet())
+            for (java.util.Map.Entry<java.util.List<java.lang.Long>, java.lang.Long> _iter765 : struct.executor_start_time_secs.entrySet())
             {
               {
-                oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.I64, _iter715.getKey().size()));
-                for (long _iter716 : _iter715.getKey())
+                oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.I64, _iter765.getKey().size()));
+                for (long _iter766 : _iter765.getKey())
                 {
-                  oprot.writeI64(_iter716);
+                  oprot.writeI64(_iter766);
                 }
                 oprot.writeListEnd();
               }
-              oprot.writeI64(_iter715.getValue());
+              oprot.writeI64(_iter765.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -1152,10 +1152,10 @@
           oprot.writeFieldBegin(WORKER_RESOURCES_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.STRUCT, struct.worker_resources.size()));
-            for (java.util.Map.Entry<NodeInfo, WorkerResources> _iter717 : struct.worker_resources.entrySet())
+            for (java.util.Map.Entry<NodeInfo, WorkerResources> _iter767 : struct.worker_resources.entrySet())
             {
-              _iter717.getKey().write(oprot);
-              _iter717.getValue().write(oprot);
+              _iter767.getKey().write(oprot);
+              _iter767.getValue().write(oprot);
             }
             oprot.writeMapEnd();
           }
@@ -1167,10 +1167,10 @@
           oprot.writeFieldBegin(TOTAL_SHARED_OFF_HEAP_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, struct.total_shared_off_heap.size()));
-            for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter718 : struct.total_shared_off_heap.entrySet())
+            for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter768 : struct.total_shared_off_heap.entrySet())
             {
-              oprot.writeString(_iter718.getKey());
-              oprot.writeDouble(_iter718.getValue());
+              oprot.writeString(_iter768.getKey());
+              oprot.writeDouble(_iter768.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -1225,62 +1225,62 @@
       if (struct.is_set_node_host()) {
         {
           oprot.writeI32(struct.node_host.size());
-          for (java.util.Map.Entry<java.lang.String, java.lang.String> _iter719 : struct.node_host.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.lang.String> _iter769 : struct.node_host.entrySet())
           {
-            oprot.writeString(_iter719.getKey());
-            oprot.writeString(_iter719.getValue());
+            oprot.writeString(_iter769.getKey());
+            oprot.writeString(_iter769.getValue());
           }
         }
       }
       if (struct.is_set_executor_node_port()) {
         {
           oprot.writeI32(struct.executor_node_port.size());
-          for (java.util.Map.Entry<java.util.List<java.lang.Long>, NodeInfo> _iter720 : struct.executor_node_port.entrySet())
+          for (java.util.Map.Entry<java.util.List<java.lang.Long>, NodeInfo> _iter770 : struct.executor_node_port.entrySet())
           {
             {
-              oprot.writeI32(_iter720.getKey().size());
-              for (long _iter721 : _iter720.getKey())
+              oprot.writeI32(_iter770.getKey().size());
+              for (long _iter771 : _iter770.getKey())
               {
-                oprot.writeI64(_iter721);
+                oprot.writeI64(_iter771);
               }
             }
-            _iter720.getValue().write(oprot);
+            _iter770.getValue().write(oprot);
           }
         }
       }
       if (struct.is_set_executor_start_time_secs()) {
         {
           oprot.writeI32(struct.executor_start_time_secs.size());
-          for (java.util.Map.Entry<java.util.List<java.lang.Long>, java.lang.Long> _iter722 : struct.executor_start_time_secs.entrySet())
+          for (java.util.Map.Entry<java.util.List<java.lang.Long>, java.lang.Long> _iter772 : struct.executor_start_time_secs.entrySet())
           {
             {
-              oprot.writeI32(_iter722.getKey().size());
-              for (long _iter723 : _iter722.getKey())
+              oprot.writeI32(_iter772.getKey().size());
+              for (long _iter773 : _iter772.getKey())
               {
-                oprot.writeI64(_iter723);
+                oprot.writeI64(_iter773);
               }
             }
-            oprot.writeI64(_iter722.getValue());
+            oprot.writeI64(_iter772.getValue());
           }
         }
       }
       if (struct.is_set_worker_resources()) {
         {
           oprot.writeI32(struct.worker_resources.size());
-          for (java.util.Map.Entry<NodeInfo, WorkerResources> _iter724 : struct.worker_resources.entrySet())
+          for (java.util.Map.Entry<NodeInfo, WorkerResources> _iter774 : struct.worker_resources.entrySet())
           {
-            _iter724.getKey().write(oprot);
-            _iter724.getValue().write(oprot);
+            _iter774.getKey().write(oprot);
+            _iter774.getValue().write(oprot);
           }
         }
       }
       if (struct.is_set_total_shared_off_heap()) {
         {
           oprot.writeI32(struct.total_shared_off_heap.size());
-          for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter725 : struct.total_shared_off_heap.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter775 : struct.total_shared_off_heap.entrySet())
           {
-            oprot.writeString(_iter725.getKey());
-            oprot.writeDouble(_iter725.getValue());
+            oprot.writeString(_iter775.getKey());
+            oprot.writeDouble(_iter775.getValue());
           }
         }
       }
@@ -1297,96 +1297,96 @@
       java.util.BitSet incoming = iprot.readBitSet(6);
       if (incoming.get(0)) {
         {
-          org.apache.storm.thrift.protocol.TMap _map726 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRING, iprot.readI32());
-          struct.node_host = new java.util.HashMap<java.lang.String,java.lang.String>(2*_map726.size);
-          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key727;
-          @org.apache.storm.thrift.annotation.Nullable java.lang.String _val728;
-          for (int _i729 = 0; _i729 < _map726.size; ++_i729)
+          org.apache.storm.thrift.protocol.TMap _map776 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRING, iprot.readI32());
+          struct.node_host = new java.util.HashMap<java.lang.String,java.lang.String>(2*_map776.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key777;
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _val778;
+          for (int _i779 = 0; _i779 < _map776.size; ++_i779)
           {
-            _key727 = iprot.readString();
-            _val728 = iprot.readString();
-            struct.node_host.put(_key727, _val728);
+            _key777 = iprot.readString();
+            _val778 = iprot.readString();
+            struct.node_host.put(_key777, _val778);
           }
         }
         struct.set_node_host_isSet(true);
       }
       if (incoming.get(1)) {
         {
-          org.apache.storm.thrift.protocol.TMap _map730 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.LIST, org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.executor_node_port = new java.util.HashMap<java.util.List<java.lang.Long>,NodeInfo>(2*_map730.size);
-          @org.apache.storm.thrift.annotation.Nullable java.util.List<java.lang.Long> _key731;
-          @org.apache.storm.thrift.annotation.Nullable NodeInfo _val732;
-          for (int _i733 = 0; _i733 < _map730.size; ++_i733)
+          org.apache.storm.thrift.protocol.TMap _map780 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.LIST, org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.executor_node_port = new java.util.HashMap<java.util.List<java.lang.Long>,NodeInfo>(2*_map780.size);
+          @org.apache.storm.thrift.annotation.Nullable java.util.List<java.lang.Long> _key781;
+          @org.apache.storm.thrift.annotation.Nullable NodeInfo _val782;
+          for (int _i783 = 0; _i783 < _map780.size; ++_i783)
           {
             {
-              org.apache.storm.thrift.protocol.TList _list734 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
-              _key731 = new java.util.ArrayList<java.lang.Long>(_list734.size);
-              long _elem735;
-              for (int _i736 = 0; _i736 < _list734.size; ++_i736)
+              org.apache.storm.thrift.protocol.TList _list784 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
+              _key781 = new java.util.ArrayList<java.lang.Long>(_list784.size);
+              long _elem785;
+              for (int _i786 = 0; _i786 < _list784.size; ++_i786)
               {
-                _elem735 = iprot.readI64();
-                _key731.add(_elem735);
+                _elem785 = iprot.readI64();
+                _key781.add(_elem785);
               }
             }
-            _val732 = new NodeInfo();
-            _val732.read(iprot);
-            struct.executor_node_port.put(_key731, _val732);
+            _val782 = new NodeInfo();
+            _val782.read(iprot);
+            struct.executor_node_port.put(_key781, _val782);
           }
         }
         struct.set_executor_node_port_isSet(true);
       }
       if (incoming.get(2)) {
         {
-          org.apache.storm.thrift.protocol.TMap _map737 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.LIST, org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
-          struct.executor_start_time_secs = new java.util.HashMap<java.util.List<java.lang.Long>,java.lang.Long>(2*_map737.size);
-          @org.apache.storm.thrift.annotation.Nullable java.util.List<java.lang.Long> _key738;
-          long _val739;
-          for (int _i740 = 0; _i740 < _map737.size; ++_i740)
+          org.apache.storm.thrift.protocol.TMap _map787 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.LIST, org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
+          struct.executor_start_time_secs = new java.util.HashMap<java.util.List<java.lang.Long>,java.lang.Long>(2*_map787.size);
+          @org.apache.storm.thrift.annotation.Nullable java.util.List<java.lang.Long> _key788;
+          long _val789;
+          for (int _i790 = 0; _i790 < _map787.size; ++_i790)
           {
             {
-              org.apache.storm.thrift.protocol.TList _list741 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
-              _key738 = new java.util.ArrayList<java.lang.Long>(_list741.size);
-              long _elem742;
-              for (int _i743 = 0; _i743 < _list741.size; ++_i743)
+              org.apache.storm.thrift.protocol.TList _list791 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
+              _key788 = new java.util.ArrayList<java.lang.Long>(_list791.size);
+              long _elem792;
+              for (int _i793 = 0; _i793 < _list791.size; ++_i793)
               {
-                _elem742 = iprot.readI64();
-                _key738.add(_elem742);
+                _elem792 = iprot.readI64();
+                _key788.add(_elem792);
               }
             }
-            _val739 = iprot.readI64();
-            struct.executor_start_time_secs.put(_key738, _val739);
+            _val789 = iprot.readI64();
+            struct.executor_start_time_secs.put(_key788, _val789);
           }
         }
         struct.set_executor_start_time_secs_isSet(true);
       }
       if (incoming.get(3)) {
         {
-          org.apache.storm.thrift.protocol.TMap _map744 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.worker_resources = new java.util.HashMap<NodeInfo,WorkerResources>(2*_map744.size);
-          @org.apache.storm.thrift.annotation.Nullable NodeInfo _key745;
-          @org.apache.storm.thrift.annotation.Nullable WorkerResources _val746;
-          for (int _i747 = 0; _i747 < _map744.size; ++_i747)
+          org.apache.storm.thrift.protocol.TMap _map794 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.worker_resources = new java.util.HashMap<NodeInfo,WorkerResources>(2*_map794.size);
+          @org.apache.storm.thrift.annotation.Nullable NodeInfo _key795;
+          @org.apache.storm.thrift.annotation.Nullable WorkerResources _val796;
+          for (int _i797 = 0; _i797 < _map794.size; ++_i797)
           {
-            _key745 = new NodeInfo();
-            _key745.read(iprot);
-            _val746 = new WorkerResources();
-            _val746.read(iprot);
-            struct.worker_resources.put(_key745, _val746);
+            _key795 = new NodeInfo();
+            _key795.read(iprot);
+            _val796 = new WorkerResources();
+            _val796.read(iprot);
+            struct.worker_resources.put(_key795, _val796);
           }
         }
         struct.set_worker_resources_isSet(true);
       }
       if (incoming.get(4)) {
         {
-          org.apache.storm.thrift.protocol.TMap _map748 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, iprot.readI32());
-          struct.total_shared_off_heap = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map748.size);
-          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key749;
-          double _val750;
-          for (int _i751 = 0; _i751 < _map748.size; ++_i751)
+          org.apache.storm.thrift.protocol.TMap _map798 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, iprot.readI32());
+          struct.total_shared_off_heap = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map798.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key799;
+          double _val800;
+          for (int _i801 = 0; _i801 < _map798.size; ++_i801)
           {
-            _key749 = iprot.readString();
-            _val750 = iprot.readDouble();
-            struct.total_shared_off_heap.put(_key749, _val750);
+            _key799 = iprot.readString();
+            _val800 = iprot.readDouble();
+            struct.total_shared_off_heap.put(_key799, _val800);
           }
         }
         struct.set_total_shared_off_heap_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/AuthorizationException.java b/storm-client/src/jvm/org/apache/storm/generated/AuthorizationException.java
index 56e03c3..29efde3 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/AuthorizationException.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/AuthorizationException.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class AuthorizationException extends org.apache.storm.thrift.TException implements org.apache.storm.thrift.TBase<AuthorizationException, AuthorizationException._Fields>, java.io.Serializable, Cloneable, Comparable<AuthorizationException> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("AuthorizationException");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/BeginDownloadResult.java b/storm-client/src/jvm/org/apache/storm/generated/BeginDownloadResult.java
index 2dbb1fb..540246a 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/BeginDownloadResult.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/BeginDownloadResult.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class BeginDownloadResult implements org.apache.storm.thrift.TBase<BeginDownloadResult, BeginDownloadResult._Fields>, java.io.Serializable, Cloneable, Comparable<BeginDownloadResult> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("BeginDownloadResult");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/Bolt.java b/storm-client/src/jvm/org/apache/storm/generated/Bolt.java
index 1eb5943..23cdf63 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/Bolt.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/Bolt.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class Bolt implements org.apache.storm.thrift.TBase<Bolt, Bolt._Fields>, java.io.Serializable, Cloneable, Comparable<Bolt> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("Bolt");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/BoltAggregateStats.java b/storm-client/src/jvm/org/apache/storm/generated/BoltAggregateStats.java
index de829cb..56f601e 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/BoltAggregateStats.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/BoltAggregateStats.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class BoltAggregateStats implements org.apache.storm.thrift.TBase<BoltAggregateStats, BoltAggregateStats._Fields>, java.io.Serializable, Cloneable, Comparable<BoltAggregateStats> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("BoltAggregateStats");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/BoltStats.java b/storm-client/src/jvm/org/apache/storm/generated/BoltStats.java
index a66b1a9..dbf7f06 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/BoltStats.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/BoltStats.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class BoltStats implements org.apache.storm.thrift.TBase<BoltStats, BoltStats._Fields>, java.io.Serializable, Cloneable, Comparable<BoltStats> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("BoltStats");
 
@@ -857,28 +857,28 @@
           case 1: // ACKED
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map160 = iprot.readMapBegin();
-                struct.acked = new java.util.HashMap<java.lang.String,java.util.Map<GlobalStreamId,java.lang.Long>>(2*_map160.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key161;
-                @org.apache.storm.thrift.annotation.Nullable java.util.Map<GlobalStreamId,java.lang.Long> _val162;
-                for (int _i163 = 0; _i163 < _map160.size; ++_i163)
+                org.apache.storm.thrift.protocol.TMap _map190 = iprot.readMapBegin();
+                struct.acked = new java.util.HashMap<java.lang.String,java.util.Map<GlobalStreamId,java.lang.Long>>(2*_map190.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key191;
+                @org.apache.storm.thrift.annotation.Nullable java.util.Map<GlobalStreamId,java.lang.Long> _val192;
+                for (int _i193 = 0; _i193 < _map190.size; ++_i193)
                 {
-                  _key161 = iprot.readString();
+                  _key191 = iprot.readString();
                   {
-                    org.apache.storm.thrift.protocol.TMap _map164 = iprot.readMapBegin();
-                    _val162 = new java.util.HashMap<GlobalStreamId,java.lang.Long>(2*_map164.size);
-                    @org.apache.storm.thrift.annotation.Nullable GlobalStreamId _key165;
-                    long _val166;
-                    for (int _i167 = 0; _i167 < _map164.size; ++_i167)
+                    org.apache.storm.thrift.protocol.TMap _map194 = iprot.readMapBegin();
+                    _val192 = new java.util.HashMap<GlobalStreamId,java.lang.Long>(2*_map194.size);
+                    @org.apache.storm.thrift.annotation.Nullable GlobalStreamId _key195;
+                    long _val196;
+                    for (int _i197 = 0; _i197 < _map194.size; ++_i197)
                     {
-                      _key165 = new GlobalStreamId();
-                      _key165.read(iprot);
-                      _val166 = iprot.readI64();
-                      _val162.put(_key165, _val166);
+                      _key195 = new GlobalStreamId();
+                      _key195.read(iprot);
+                      _val196 = iprot.readI64();
+                      _val192.put(_key195, _val196);
                     }
                     iprot.readMapEnd();
                   }
-                  struct.acked.put(_key161, _val162);
+                  struct.acked.put(_key191, _val192);
                 }
                 iprot.readMapEnd();
               }
@@ -890,28 +890,28 @@
           case 2: // FAILED
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map168 = iprot.readMapBegin();
-                struct.failed = new java.util.HashMap<java.lang.String,java.util.Map<GlobalStreamId,java.lang.Long>>(2*_map168.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key169;
-                @org.apache.storm.thrift.annotation.Nullable java.util.Map<GlobalStreamId,java.lang.Long> _val170;
-                for (int _i171 = 0; _i171 < _map168.size; ++_i171)
+                org.apache.storm.thrift.protocol.TMap _map198 = iprot.readMapBegin();
+                struct.failed = new java.util.HashMap<java.lang.String,java.util.Map<GlobalStreamId,java.lang.Long>>(2*_map198.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key199;
+                @org.apache.storm.thrift.annotation.Nullable java.util.Map<GlobalStreamId,java.lang.Long> _val200;
+                for (int _i201 = 0; _i201 < _map198.size; ++_i201)
                 {
-                  _key169 = iprot.readString();
+                  _key199 = iprot.readString();
                   {
-                    org.apache.storm.thrift.protocol.TMap _map172 = iprot.readMapBegin();
-                    _val170 = new java.util.HashMap<GlobalStreamId,java.lang.Long>(2*_map172.size);
-                    @org.apache.storm.thrift.annotation.Nullable GlobalStreamId _key173;
-                    long _val174;
-                    for (int _i175 = 0; _i175 < _map172.size; ++_i175)
+                    org.apache.storm.thrift.protocol.TMap _map202 = iprot.readMapBegin();
+                    _val200 = new java.util.HashMap<GlobalStreamId,java.lang.Long>(2*_map202.size);
+                    @org.apache.storm.thrift.annotation.Nullable GlobalStreamId _key203;
+                    long _val204;
+                    for (int _i205 = 0; _i205 < _map202.size; ++_i205)
                     {
-                      _key173 = new GlobalStreamId();
-                      _key173.read(iprot);
-                      _val174 = iprot.readI64();
-                      _val170.put(_key173, _val174);
+                      _key203 = new GlobalStreamId();
+                      _key203.read(iprot);
+                      _val204 = iprot.readI64();
+                      _val200.put(_key203, _val204);
                     }
                     iprot.readMapEnd();
                   }
-                  struct.failed.put(_key169, _val170);
+                  struct.failed.put(_key199, _val200);
                 }
                 iprot.readMapEnd();
               }
@@ -923,28 +923,28 @@
           case 3: // PROCESS_MS_AVG
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map176 = iprot.readMapBegin();
-                struct.process_ms_avg = new java.util.HashMap<java.lang.String,java.util.Map<GlobalStreamId,java.lang.Double>>(2*_map176.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key177;
-                @org.apache.storm.thrift.annotation.Nullable java.util.Map<GlobalStreamId,java.lang.Double> _val178;
-                for (int _i179 = 0; _i179 < _map176.size; ++_i179)
+                org.apache.storm.thrift.protocol.TMap _map206 = iprot.readMapBegin();
+                struct.process_ms_avg = new java.util.HashMap<java.lang.String,java.util.Map<GlobalStreamId,java.lang.Double>>(2*_map206.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key207;
+                @org.apache.storm.thrift.annotation.Nullable java.util.Map<GlobalStreamId,java.lang.Double> _val208;
+                for (int _i209 = 0; _i209 < _map206.size; ++_i209)
                 {
-                  _key177 = iprot.readString();
+                  _key207 = iprot.readString();
                   {
-                    org.apache.storm.thrift.protocol.TMap _map180 = iprot.readMapBegin();
-                    _val178 = new java.util.HashMap<GlobalStreamId,java.lang.Double>(2*_map180.size);
-                    @org.apache.storm.thrift.annotation.Nullable GlobalStreamId _key181;
-                    double _val182;
-                    for (int _i183 = 0; _i183 < _map180.size; ++_i183)
+                    org.apache.storm.thrift.protocol.TMap _map210 = iprot.readMapBegin();
+                    _val208 = new java.util.HashMap<GlobalStreamId,java.lang.Double>(2*_map210.size);
+                    @org.apache.storm.thrift.annotation.Nullable GlobalStreamId _key211;
+                    double _val212;
+                    for (int _i213 = 0; _i213 < _map210.size; ++_i213)
                     {
-                      _key181 = new GlobalStreamId();
-                      _key181.read(iprot);
-                      _val182 = iprot.readDouble();
-                      _val178.put(_key181, _val182);
+                      _key211 = new GlobalStreamId();
+                      _key211.read(iprot);
+                      _val212 = iprot.readDouble();
+                      _val208.put(_key211, _val212);
                     }
                     iprot.readMapEnd();
                   }
-                  struct.process_ms_avg.put(_key177, _val178);
+                  struct.process_ms_avg.put(_key207, _val208);
                 }
                 iprot.readMapEnd();
               }
@@ -956,28 +956,28 @@
           case 4: // EXECUTED
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map184 = iprot.readMapBegin();
-                struct.executed = new java.util.HashMap<java.lang.String,java.util.Map<GlobalStreamId,java.lang.Long>>(2*_map184.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key185;
-                @org.apache.storm.thrift.annotation.Nullable java.util.Map<GlobalStreamId,java.lang.Long> _val186;
-                for (int _i187 = 0; _i187 < _map184.size; ++_i187)
+                org.apache.storm.thrift.protocol.TMap _map214 = iprot.readMapBegin();
+                struct.executed = new java.util.HashMap<java.lang.String,java.util.Map<GlobalStreamId,java.lang.Long>>(2*_map214.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key215;
+                @org.apache.storm.thrift.annotation.Nullable java.util.Map<GlobalStreamId,java.lang.Long> _val216;
+                for (int _i217 = 0; _i217 < _map214.size; ++_i217)
                 {
-                  _key185 = iprot.readString();
+                  _key215 = iprot.readString();
                   {
-                    org.apache.storm.thrift.protocol.TMap _map188 = iprot.readMapBegin();
-                    _val186 = new java.util.HashMap<GlobalStreamId,java.lang.Long>(2*_map188.size);
-                    @org.apache.storm.thrift.annotation.Nullable GlobalStreamId _key189;
-                    long _val190;
-                    for (int _i191 = 0; _i191 < _map188.size; ++_i191)
+                    org.apache.storm.thrift.protocol.TMap _map218 = iprot.readMapBegin();
+                    _val216 = new java.util.HashMap<GlobalStreamId,java.lang.Long>(2*_map218.size);
+                    @org.apache.storm.thrift.annotation.Nullable GlobalStreamId _key219;
+                    long _val220;
+                    for (int _i221 = 0; _i221 < _map218.size; ++_i221)
                     {
-                      _key189 = new GlobalStreamId();
-                      _key189.read(iprot);
-                      _val190 = iprot.readI64();
-                      _val186.put(_key189, _val190);
+                      _key219 = new GlobalStreamId();
+                      _key219.read(iprot);
+                      _val220 = iprot.readI64();
+                      _val216.put(_key219, _val220);
                     }
                     iprot.readMapEnd();
                   }
-                  struct.executed.put(_key185, _val186);
+                  struct.executed.put(_key215, _val216);
                 }
                 iprot.readMapEnd();
               }
@@ -989,28 +989,28 @@
           case 5: // EXECUTE_MS_AVG
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map192 = iprot.readMapBegin();
-                struct.execute_ms_avg = new java.util.HashMap<java.lang.String,java.util.Map<GlobalStreamId,java.lang.Double>>(2*_map192.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key193;
-                @org.apache.storm.thrift.annotation.Nullable java.util.Map<GlobalStreamId,java.lang.Double> _val194;
-                for (int _i195 = 0; _i195 < _map192.size; ++_i195)
+                org.apache.storm.thrift.protocol.TMap _map222 = iprot.readMapBegin();
+                struct.execute_ms_avg = new java.util.HashMap<java.lang.String,java.util.Map<GlobalStreamId,java.lang.Double>>(2*_map222.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key223;
+                @org.apache.storm.thrift.annotation.Nullable java.util.Map<GlobalStreamId,java.lang.Double> _val224;
+                for (int _i225 = 0; _i225 < _map222.size; ++_i225)
                 {
-                  _key193 = iprot.readString();
+                  _key223 = iprot.readString();
                   {
-                    org.apache.storm.thrift.protocol.TMap _map196 = iprot.readMapBegin();
-                    _val194 = new java.util.HashMap<GlobalStreamId,java.lang.Double>(2*_map196.size);
-                    @org.apache.storm.thrift.annotation.Nullable GlobalStreamId _key197;
-                    double _val198;
-                    for (int _i199 = 0; _i199 < _map196.size; ++_i199)
+                    org.apache.storm.thrift.protocol.TMap _map226 = iprot.readMapBegin();
+                    _val224 = new java.util.HashMap<GlobalStreamId,java.lang.Double>(2*_map226.size);
+                    @org.apache.storm.thrift.annotation.Nullable GlobalStreamId _key227;
+                    double _val228;
+                    for (int _i229 = 0; _i229 < _map226.size; ++_i229)
                     {
-                      _key197 = new GlobalStreamId();
-                      _key197.read(iprot);
-                      _val198 = iprot.readDouble();
-                      _val194.put(_key197, _val198);
+                      _key227 = new GlobalStreamId();
+                      _key227.read(iprot);
+                      _val228 = iprot.readDouble();
+                      _val224.put(_key227, _val228);
                     }
                     iprot.readMapEnd();
                   }
-                  struct.execute_ms_avg.put(_key193, _val194);
+                  struct.execute_ms_avg.put(_key223, _val224);
                 }
                 iprot.readMapEnd();
               }
@@ -1036,15 +1036,15 @@
         oprot.writeFieldBegin(ACKED_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, struct.acked.size()));
-          for (java.util.Map.Entry<java.lang.String, java.util.Map<GlobalStreamId,java.lang.Long>> _iter200 : struct.acked.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.util.Map<GlobalStreamId,java.lang.Long>> _iter230 : struct.acked.entrySet())
           {
-            oprot.writeString(_iter200.getKey());
+            oprot.writeString(_iter230.getKey());
             {
-              oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.I64, _iter200.getValue().size()));
-              for (java.util.Map.Entry<GlobalStreamId, java.lang.Long> _iter201 : _iter200.getValue().entrySet())
+              oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.I64, _iter230.getValue().size()));
+              for (java.util.Map.Entry<GlobalStreamId, java.lang.Long> _iter231 : _iter230.getValue().entrySet())
               {
-                _iter201.getKey().write(oprot);
-                oprot.writeI64(_iter201.getValue());
+                _iter231.getKey().write(oprot);
+                oprot.writeI64(_iter231.getValue());
               }
               oprot.writeMapEnd();
             }
@@ -1057,15 +1057,15 @@
         oprot.writeFieldBegin(FAILED_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, struct.failed.size()));
-          for (java.util.Map.Entry<java.lang.String, java.util.Map<GlobalStreamId,java.lang.Long>> _iter202 : struct.failed.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.util.Map<GlobalStreamId,java.lang.Long>> _iter232 : struct.failed.entrySet())
           {
-            oprot.writeString(_iter202.getKey());
+            oprot.writeString(_iter232.getKey());
             {
-              oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.I64, _iter202.getValue().size()));
-              for (java.util.Map.Entry<GlobalStreamId, java.lang.Long> _iter203 : _iter202.getValue().entrySet())
+              oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.I64, _iter232.getValue().size()));
+              for (java.util.Map.Entry<GlobalStreamId, java.lang.Long> _iter233 : _iter232.getValue().entrySet())
               {
-                _iter203.getKey().write(oprot);
-                oprot.writeI64(_iter203.getValue());
+                _iter233.getKey().write(oprot);
+                oprot.writeI64(_iter233.getValue());
               }
               oprot.writeMapEnd();
             }
@@ -1078,15 +1078,15 @@
         oprot.writeFieldBegin(PROCESS_MS_AVG_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, struct.process_ms_avg.size()));
-          for (java.util.Map.Entry<java.lang.String, java.util.Map<GlobalStreamId,java.lang.Double>> _iter204 : struct.process_ms_avg.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.util.Map<GlobalStreamId,java.lang.Double>> _iter234 : struct.process_ms_avg.entrySet())
           {
-            oprot.writeString(_iter204.getKey());
+            oprot.writeString(_iter234.getKey());
             {
-              oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.DOUBLE, _iter204.getValue().size()));
-              for (java.util.Map.Entry<GlobalStreamId, java.lang.Double> _iter205 : _iter204.getValue().entrySet())
+              oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.DOUBLE, _iter234.getValue().size()));
+              for (java.util.Map.Entry<GlobalStreamId, java.lang.Double> _iter235 : _iter234.getValue().entrySet())
               {
-                _iter205.getKey().write(oprot);
-                oprot.writeDouble(_iter205.getValue());
+                _iter235.getKey().write(oprot);
+                oprot.writeDouble(_iter235.getValue());
               }
               oprot.writeMapEnd();
             }
@@ -1099,15 +1099,15 @@
         oprot.writeFieldBegin(EXECUTED_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, struct.executed.size()));
-          for (java.util.Map.Entry<java.lang.String, java.util.Map<GlobalStreamId,java.lang.Long>> _iter206 : struct.executed.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.util.Map<GlobalStreamId,java.lang.Long>> _iter236 : struct.executed.entrySet())
           {
-            oprot.writeString(_iter206.getKey());
+            oprot.writeString(_iter236.getKey());
             {
-              oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.I64, _iter206.getValue().size()));
-              for (java.util.Map.Entry<GlobalStreamId, java.lang.Long> _iter207 : _iter206.getValue().entrySet())
+              oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.I64, _iter236.getValue().size()));
+              for (java.util.Map.Entry<GlobalStreamId, java.lang.Long> _iter237 : _iter236.getValue().entrySet())
               {
-                _iter207.getKey().write(oprot);
-                oprot.writeI64(_iter207.getValue());
+                _iter237.getKey().write(oprot);
+                oprot.writeI64(_iter237.getValue());
               }
               oprot.writeMapEnd();
             }
@@ -1120,15 +1120,15 @@
         oprot.writeFieldBegin(EXECUTE_MS_AVG_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, struct.execute_ms_avg.size()));
-          for (java.util.Map.Entry<java.lang.String, java.util.Map<GlobalStreamId,java.lang.Double>> _iter208 : struct.execute_ms_avg.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.util.Map<GlobalStreamId,java.lang.Double>> _iter238 : struct.execute_ms_avg.entrySet())
           {
-            oprot.writeString(_iter208.getKey());
+            oprot.writeString(_iter238.getKey());
             {
-              oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.DOUBLE, _iter208.getValue().size()));
-              for (java.util.Map.Entry<GlobalStreamId, java.lang.Double> _iter209 : _iter208.getValue().entrySet())
+              oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.DOUBLE, _iter238.getValue().size()));
+              for (java.util.Map.Entry<GlobalStreamId, java.lang.Double> _iter239 : _iter238.getValue().entrySet())
               {
-                _iter209.getKey().write(oprot);
-                oprot.writeDouble(_iter209.getValue());
+                _iter239.getKey().write(oprot);
+                oprot.writeDouble(_iter239.getValue());
               }
               oprot.writeMapEnd();
             }
@@ -1156,75 +1156,75 @@
       org.apache.storm.thrift.protocol.TTupleProtocol oprot = (org.apache.storm.thrift.protocol.TTupleProtocol) prot;
       {
         oprot.writeI32(struct.acked.size());
-        for (java.util.Map.Entry<java.lang.String, java.util.Map<GlobalStreamId,java.lang.Long>> _iter210 : struct.acked.entrySet())
+        for (java.util.Map.Entry<java.lang.String, java.util.Map<GlobalStreamId,java.lang.Long>> _iter240 : struct.acked.entrySet())
         {
-          oprot.writeString(_iter210.getKey());
+          oprot.writeString(_iter240.getKey());
           {
-            oprot.writeI32(_iter210.getValue().size());
-            for (java.util.Map.Entry<GlobalStreamId, java.lang.Long> _iter211 : _iter210.getValue().entrySet())
+            oprot.writeI32(_iter240.getValue().size());
+            for (java.util.Map.Entry<GlobalStreamId, java.lang.Long> _iter241 : _iter240.getValue().entrySet())
             {
-              _iter211.getKey().write(oprot);
-              oprot.writeI64(_iter211.getValue());
+              _iter241.getKey().write(oprot);
+              oprot.writeI64(_iter241.getValue());
             }
           }
         }
       }
       {
         oprot.writeI32(struct.failed.size());
-        for (java.util.Map.Entry<java.lang.String, java.util.Map<GlobalStreamId,java.lang.Long>> _iter212 : struct.failed.entrySet())
+        for (java.util.Map.Entry<java.lang.String, java.util.Map<GlobalStreamId,java.lang.Long>> _iter242 : struct.failed.entrySet())
         {
-          oprot.writeString(_iter212.getKey());
+          oprot.writeString(_iter242.getKey());
           {
-            oprot.writeI32(_iter212.getValue().size());
-            for (java.util.Map.Entry<GlobalStreamId, java.lang.Long> _iter213 : _iter212.getValue().entrySet())
+            oprot.writeI32(_iter242.getValue().size());
+            for (java.util.Map.Entry<GlobalStreamId, java.lang.Long> _iter243 : _iter242.getValue().entrySet())
             {
-              _iter213.getKey().write(oprot);
-              oprot.writeI64(_iter213.getValue());
+              _iter243.getKey().write(oprot);
+              oprot.writeI64(_iter243.getValue());
             }
           }
         }
       }
       {
         oprot.writeI32(struct.process_ms_avg.size());
-        for (java.util.Map.Entry<java.lang.String, java.util.Map<GlobalStreamId,java.lang.Double>> _iter214 : struct.process_ms_avg.entrySet())
+        for (java.util.Map.Entry<java.lang.String, java.util.Map<GlobalStreamId,java.lang.Double>> _iter244 : struct.process_ms_avg.entrySet())
         {
-          oprot.writeString(_iter214.getKey());
+          oprot.writeString(_iter244.getKey());
           {
-            oprot.writeI32(_iter214.getValue().size());
-            for (java.util.Map.Entry<GlobalStreamId, java.lang.Double> _iter215 : _iter214.getValue().entrySet())
+            oprot.writeI32(_iter244.getValue().size());
+            for (java.util.Map.Entry<GlobalStreamId, java.lang.Double> _iter245 : _iter244.getValue().entrySet())
             {
-              _iter215.getKey().write(oprot);
-              oprot.writeDouble(_iter215.getValue());
+              _iter245.getKey().write(oprot);
+              oprot.writeDouble(_iter245.getValue());
             }
           }
         }
       }
       {
         oprot.writeI32(struct.executed.size());
-        for (java.util.Map.Entry<java.lang.String, java.util.Map<GlobalStreamId,java.lang.Long>> _iter216 : struct.executed.entrySet())
+        for (java.util.Map.Entry<java.lang.String, java.util.Map<GlobalStreamId,java.lang.Long>> _iter246 : struct.executed.entrySet())
         {
-          oprot.writeString(_iter216.getKey());
+          oprot.writeString(_iter246.getKey());
           {
-            oprot.writeI32(_iter216.getValue().size());
-            for (java.util.Map.Entry<GlobalStreamId, java.lang.Long> _iter217 : _iter216.getValue().entrySet())
+            oprot.writeI32(_iter246.getValue().size());
+            for (java.util.Map.Entry<GlobalStreamId, java.lang.Long> _iter247 : _iter246.getValue().entrySet())
             {
-              _iter217.getKey().write(oprot);
-              oprot.writeI64(_iter217.getValue());
+              _iter247.getKey().write(oprot);
+              oprot.writeI64(_iter247.getValue());
             }
           }
         }
       }
       {
         oprot.writeI32(struct.execute_ms_avg.size());
-        for (java.util.Map.Entry<java.lang.String, java.util.Map<GlobalStreamId,java.lang.Double>> _iter218 : struct.execute_ms_avg.entrySet())
+        for (java.util.Map.Entry<java.lang.String, java.util.Map<GlobalStreamId,java.lang.Double>> _iter248 : struct.execute_ms_avg.entrySet())
         {
-          oprot.writeString(_iter218.getKey());
+          oprot.writeString(_iter248.getKey());
           {
-            oprot.writeI32(_iter218.getValue().size());
-            for (java.util.Map.Entry<GlobalStreamId, java.lang.Double> _iter219 : _iter218.getValue().entrySet())
+            oprot.writeI32(_iter248.getValue().size());
+            for (java.util.Map.Entry<GlobalStreamId, java.lang.Double> _iter249 : _iter248.getValue().entrySet())
             {
-              _iter219.getKey().write(oprot);
-              oprot.writeDouble(_iter219.getValue());
+              _iter249.getKey().write(oprot);
+              oprot.writeDouble(_iter249.getValue());
             }
           }
         }
@@ -1235,127 +1235,127 @@
     public void read(org.apache.storm.thrift.protocol.TProtocol prot, BoltStats struct) throws org.apache.storm.thrift.TException {
       org.apache.storm.thrift.protocol.TTupleProtocol iprot = (org.apache.storm.thrift.protocol.TTupleProtocol) prot;
       {
-        org.apache.storm.thrift.protocol.TMap _map220 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, iprot.readI32());
-        struct.acked = new java.util.HashMap<java.lang.String,java.util.Map<GlobalStreamId,java.lang.Long>>(2*_map220.size);
-        @org.apache.storm.thrift.annotation.Nullable java.lang.String _key221;
-        @org.apache.storm.thrift.annotation.Nullable java.util.Map<GlobalStreamId,java.lang.Long> _val222;
-        for (int _i223 = 0; _i223 < _map220.size; ++_i223)
+        org.apache.storm.thrift.protocol.TMap _map250 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, iprot.readI32());
+        struct.acked = new java.util.HashMap<java.lang.String,java.util.Map<GlobalStreamId,java.lang.Long>>(2*_map250.size);
+        @org.apache.storm.thrift.annotation.Nullable java.lang.String _key251;
+        @org.apache.storm.thrift.annotation.Nullable java.util.Map<GlobalStreamId,java.lang.Long> _val252;
+        for (int _i253 = 0; _i253 < _map250.size; ++_i253)
         {
-          _key221 = iprot.readString();
+          _key251 = iprot.readString();
           {
-            org.apache.storm.thrift.protocol.TMap _map224 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
-            _val222 = new java.util.HashMap<GlobalStreamId,java.lang.Long>(2*_map224.size);
-            @org.apache.storm.thrift.annotation.Nullable GlobalStreamId _key225;
-            long _val226;
-            for (int _i227 = 0; _i227 < _map224.size; ++_i227)
+            org.apache.storm.thrift.protocol.TMap _map254 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
+            _val252 = new java.util.HashMap<GlobalStreamId,java.lang.Long>(2*_map254.size);
+            @org.apache.storm.thrift.annotation.Nullable GlobalStreamId _key255;
+            long _val256;
+            for (int _i257 = 0; _i257 < _map254.size; ++_i257)
             {
-              _key225 = new GlobalStreamId();
-              _key225.read(iprot);
-              _val226 = iprot.readI64();
-              _val222.put(_key225, _val226);
+              _key255 = new GlobalStreamId();
+              _key255.read(iprot);
+              _val256 = iprot.readI64();
+              _val252.put(_key255, _val256);
             }
           }
-          struct.acked.put(_key221, _val222);
+          struct.acked.put(_key251, _val252);
         }
       }
       struct.set_acked_isSet(true);
       {
-        org.apache.storm.thrift.protocol.TMap _map228 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, iprot.readI32());
-        struct.failed = new java.util.HashMap<java.lang.String,java.util.Map<GlobalStreamId,java.lang.Long>>(2*_map228.size);
-        @org.apache.storm.thrift.annotation.Nullable java.lang.String _key229;
-        @org.apache.storm.thrift.annotation.Nullable java.util.Map<GlobalStreamId,java.lang.Long> _val230;
-        for (int _i231 = 0; _i231 < _map228.size; ++_i231)
+        org.apache.storm.thrift.protocol.TMap _map258 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, iprot.readI32());
+        struct.failed = new java.util.HashMap<java.lang.String,java.util.Map<GlobalStreamId,java.lang.Long>>(2*_map258.size);
+        @org.apache.storm.thrift.annotation.Nullable java.lang.String _key259;
+        @org.apache.storm.thrift.annotation.Nullable java.util.Map<GlobalStreamId,java.lang.Long> _val260;
+        for (int _i261 = 0; _i261 < _map258.size; ++_i261)
         {
-          _key229 = iprot.readString();
+          _key259 = iprot.readString();
           {
-            org.apache.storm.thrift.protocol.TMap _map232 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
-            _val230 = new java.util.HashMap<GlobalStreamId,java.lang.Long>(2*_map232.size);
-            @org.apache.storm.thrift.annotation.Nullable GlobalStreamId _key233;
-            long _val234;
-            for (int _i235 = 0; _i235 < _map232.size; ++_i235)
+            org.apache.storm.thrift.protocol.TMap _map262 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
+            _val260 = new java.util.HashMap<GlobalStreamId,java.lang.Long>(2*_map262.size);
+            @org.apache.storm.thrift.annotation.Nullable GlobalStreamId _key263;
+            long _val264;
+            for (int _i265 = 0; _i265 < _map262.size; ++_i265)
             {
-              _key233 = new GlobalStreamId();
-              _key233.read(iprot);
-              _val234 = iprot.readI64();
-              _val230.put(_key233, _val234);
+              _key263 = new GlobalStreamId();
+              _key263.read(iprot);
+              _val264 = iprot.readI64();
+              _val260.put(_key263, _val264);
             }
           }
-          struct.failed.put(_key229, _val230);
+          struct.failed.put(_key259, _val260);
         }
       }
       struct.set_failed_isSet(true);
       {
-        org.apache.storm.thrift.protocol.TMap _map236 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, iprot.readI32());
-        struct.process_ms_avg = new java.util.HashMap<java.lang.String,java.util.Map<GlobalStreamId,java.lang.Double>>(2*_map236.size);
-        @org.apache.storm.thrift.annotation.Nullable java.lang.String _key237;
-        @org.apache.storm.thrift.annotation.Nullable java.util.Map<GlobalStreamId,java.lang.Double> _val238;
-        for (int _i239 = 0; _i239 < _map236.size; ++_i239)
+        org.apache.storm.thrift.protocol.TMap _map266 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, iprot.readI32());
+        struct.process_ms_avg = new java.util.HashMap<java.lang.String,java.util.Map<GlobalStreamId,java.lang.Double>>(2*_map266.size);
+        @org.apache.storm.thrift.annotation.Nullable java.lang.String _key267;
+        @org.apache.storm.thrift.annotation.Nullable java.util.Map<GlobalStreamId,java.lang.Double> _val268;
+        for (int _i269 = 0; _i269 < _map266.size; ++_i269)
         {
-          _key237 = iprot.readString();
+          _key267 = iprot.readString();
           {
-            org.apache.storm.thrift.protocol.TMap _map240 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.DOUBLE, iprot.readI32());
-            _val238 = new java.util.HashMap<GlobalStreamId,java.lang.Double>(2*_map240.size);
-            @org.apache.storm.thrift.annotation.Nullable GlobalStreamId _key241;
-            double _val242;
-            for (int _i243 = 0; _i243 < _map240.size; ++_i243)
+            org.apache.storm.thrift.protocol.TMap _map270 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.DOUBLE, iprot.readI32());
+            _val268 = new java.util.HashMap<GlobalStreamId,java.lang.Double>(2*_map270.size);
+            @org.apache.storm.thrift.annotation.Nullable GlobalStreamId _key271;
+            double _val272;
+            for (int _i273 = 0; _i273 < _map270.size; ++_i273)
             {
-              _key241 = new GlobalStreamId();
-              _key241.read(iprot);
-              _val242 = iprot.readDouble();
-              _val238.put(_key241, _val242);
+              _key271 = new GlobalStreamId();
+              _key271.read(iprot);
+              _val272 = iprot.readDouble();
+              _val268.put(_key271, _val272);
             }
           }
-          struct.process_ms_avg.put(_key237, _val238);
+          struct.process_ms_avg.put(_key267, _val268);
         }
       }
       struct.set_process_ms_avg_isSet(true);
       {
-        org.apache.storm.thrift.protocol.TMap _map244 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, iprot.readI32());
-        struct.executed = new java.util.HashMap<java.lang.String,java.util.Map<GlobalStreamId,java.lang.Long>>(2*_map244.size);
-        @org.apache.storm.thrift.annotation.Nullable java.lang.String _key245;
-        @org.apache.storm.thrift.annotation.Nullable java.util.Map<GlobalStreamId,java.lang.Long> _val246;
-        for (int _i247 = 0; _i247 < _map244.size; ++_i247)
+        org.apache.storm.thrift.protocol.TMap _map274 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, iprot.readI32());
+        struct.executed = new java.util.HashMap<java.lang.String,java.util.Map<GlobalStreamId,java.lang.Long>>(2*_map274.size);
+        @org.apache.storm.thrift.annotation.Nullable java.lang.String _key275;
+        @org.apache.storm.thrift.annotation.Nullable java.util.Map<GlobalStreamId,java.lang.Long> _val276;
+        for (int _i277 = 0; _i277 < _map274.size; ++_i277)
         {
-          _key245 = iprot.readString();
+          _key275 = iprot.readString();
           {
-            org.apache.storm.thrift.protocol.TMap _map248 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
-            _val246 = new java.util.HashMap<GlobalStreamId,java.lang.Long>(2*_map248.size);
-            @org.apache.storm.thrift.annotation.Nullable GlobalStreamId _key249;
-            long _val250;
-            for (int _i251 = 0; _i251 < _map248.size; ++_i251)
+            org.apache.storm.thrift.protocol.TMap _map278 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
+            _val276 = new java.util.HashMap<GlobalStreamId,java.lang.Long>(2*_map278.size);
+            @org.apache.storm.thrift.annotation.Nullable GlobalStreamId _key279;
+            long _val280;
+            for (int _i281 = 0; _i281 < _map278.size; ++_i281)
             {
-              _key249 = new GlobalStreamId();
-              _key249.read(iprot);
-              _val250 = iprot.readI64();
-              _val246.put(_key249, _val250);
+              _key279 = new GlobalStreamId();
+              _key279.read(iprot);
+              _val280 = iprot.readI64();
+              _val276.put(_key279, _val280);
             }
           }
-          struct.executed.put(_key245, _val246);
+          struct.executed.put(_key275, _val276);
         }
       }
       struct.set_executed_isSet(true);
       {
-        org.apache.storm.thrift.protocol.TMap _map252 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, iprot.readI32());
-        struct.execute_ms_avg = new java.util.HashMap<java.lang.String,java.util.Map<GlobalStreamId,java.lang.Double>>(2*_map252.size);
-        @org.apache.storm.thrift.annotation.Nullable java.lang.String _key253;
-        @org.apache.storm.thrift.annotation.Nullable java.util.Map<GlobalStreamId,java.lang.Double> _val254;
-        for (int _i255 = 0; _i255 < _map252.size; ++_i255)
+        org.apache.storm.thrift.protocol.TMap _map282 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, iprot.readI32());
+        struct.execute_ms_avg = new java.util.HashMap<java.lang.String,java.util.Map<GlobalStreamId,java.lang.Double>>(2*_map282.size);
+        @org.apache.storm.thrift.annotation.Nullable java.lang.String _key283;
+        @org.apache.storm.thrift.annotation.Nullable java.util.Map<GlobalStreamId,java.lang.Double> _val284;
+        for (int _i285 = 0; _i285 < _map282.size; ++_i285)
         {
-          _key253 = iprot.readString();
+          _key283 = iprot.readString();
           {
-            org.apache.storm.thrift.protocol.TMap _map256 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.DOUBLE, iprot.readI32());
-            _val254 = new java.util.HashMap<GlobalStreamId,java.lang.Double>(2*_map256.size);
-            @org.apache.storm.thrift.annotation.Nullable GlobalStreamId _key257;
-            double _val258;
-            for (int _i259 = 0; _i259 < _map256.size; ++_i259)
+            org.apache.storm.thrift.protocol.TMap _map286 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.DOUBLE, iprot.readI32());
+            _val284 = new java.util.HashMap<GlobalStreamId,java.lang.Double>(2*_map286.size);
+            @org.apache.storm.thrift.annotation.Nullable GlobalStreamId _key287;
+            double _val288;
+            for (int _i289 = 0; _i289 < _map286.size; ++_i289)
             {
-              _key257 = new GlobalStreamId();
-              _key257.read(iprot);
-              _val258 = iprot.readDouble();
-              _val254.put(_key257, _val258);
+              _key287 = new GlobalStreamId();
+              _key287.read(iprot);
+              _val288 = iprot.readDouble();
+              _val284.put(_key287, _val288);
             }
           }
-          struct.execute_ms_avg.put(_key253, _val254);
+          struct.execute_ms_avg.put(_key283, _val284);
         }
       }
       struct.set_execute_ms_avg_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/ClusterSummary.java b/storm-client/src/jvm/org/apache/storm/generated/ClusterSummary.java
index 01497a9..7e47b4f 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/ClusterSummary.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/ClusterSummary.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class ClusterSummary implements org.apache.storm.thrift.TBase<ClusterSummary, ClusterSummary._Fields>, java.io.Serializable, Cloneable, Comparable<ClusterSummary> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("ClusterSummary");
 
@@ -560,14 +560,14 @@
           case 1: // SUPERVISORS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.LIST) {
               {
-                org.apache.storm.thrift.protocol.TList _list136 = iprot.readListBegin();
-                struct.supervisors = new java.util.ArrayList<SupervisorSummary>(_list136.size);
-                @org.apache.storm.thrift.annotation.Nullable SupervisorSummary _elem137;
-                for (int _i138 = 0; _i138 < _list136.size; ++_i138)
+                org.apache.storm.thrift.protocol.TList _list166 = iprot.readListBegin();
+                struct.supervisors = new java.util.ArrayList<SupervisorSummary>(_list166.size);
+                @org.apache.storm.thrift.annotation.Nullable SupervisorSummary _elem167;
+                for (int _i168 = 0; _i168 < _list166.size; ++_i168)
                 {
-                  _elem137 = new SupervisorSummary();
-                  _elem137.read(iprot);
-                  struct.supervisors.add(_elem137);
+                  _elem167 = new SupervisorSummary();
+                  _elem167.read(iprot);
+                  struct.supervisors.add(_elem167);
                 }
                 iprot.readListEnd();
               }
@@ -579,14 +579,14 @@
           case 3: // TOPOLOGIES
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.LIST) {
               {
-                org.apache.storm.thrift.protocol.TList _list139 = iprot.readListBegin();
-                struct.topologies = new java.util.ArrayList<TopologySummary>(_list139.size);
-                @org.apache.storm.thrift.annotation.Nullable TopologySummary _elem140;
-                for (int _i141 = 0; _i141 < _list139.size; ++_i141)
+                org.apache.storm.thrift.protocol.TList _list169 = iprot.readListBegin();
+                struct.topologies = new java.util.ArrayList<TopologySummary>(_list169.size);
+                @org.apache.storm.thrift.annotation.Nullable TopologySummary _elem170;
+                for (int _i171 = 0; _i171 < _list169.size; ++_i171)
                 {
-                  _elem140 = new TopologySummary();
-                  _elem140.read(iprot);
-                  struct.topologies.add(_elem140);
+                  _elem170 = new TopologySummary();
+                  _elem170.read(iprot);
+                  struct.topologies.add(_elem170);
                 }
                 iprot.readListEnd();
               }
@@ -598,14 +598,14 @@
           case 4: // NIMBUSES
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.LIST) {
               {
-                org.apache.storm.thrift.protocol.TList _list142 = iprot.readListBegin();
-                struct.nimbuses = new java.util.ArrayList<NimbusSummary>(_list142.size);
-                @org.apache.storm.thrift.annotation.Nullable NimbusSummary _elem143;
-                for (int _i144 = 0; _i144 < _list142.size; ++_i144)
+                org.apache.storm.thrift.protocol.TList _list172 = iprot.readListBegin();
+                struct.nimbuses = new java.util.ArrayList<NimbusSummary>(_list172.size);
+                @org.apache.storm.thrift.annotation.Nullable NimbusSummary _elem173;
+                for (int _i174 = 0; _i174 < _list172.size; ++_i174)
                 {
-                  _elem143 = new NimbusSummary();
-                  _elem143.read(iprot);
-                  struct.nimbuses.add(_elem143);
+                  _elem173 = new NimbusSummary();
+                  _elem173.read(iprot);
+                  struct.nimbuses.add(_elem173);
                 }
                 iprot.readListEnd();
               }
@@ -631,9 +631,9 @@
         oprot.writeFieldBegin(SUPERVISORS_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, struct.supervisors.size()));
-          for (SupervisorSummary _iter145 : struct.supervisors)
+          for (SupervisorSummary _iter175 : struct.supervisors)
           {
-            _iter145.write(oprot);
+            _iter175.write(oprot);
           }
           oprot.writeListEnd();
         }
@@ -643,9 +643,9 @@
         oprot.writeFieldBegin(TOPOLOGIES_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, struct.topologies.size()));
-          for (TopologySummary _iter146 : struct.topologies)
+          for (TopologySummary _iter176 : struct.topologies)
           {
-            _iter146.write(oprot);
+            _iter176.write(oprot);
           }
           oprot.writeListEnd();
         }
@@ -655,9 +655,9 @@
         oprot.writeFieldBegin(NIMBUSES_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, struct.nimbuses.size()));
-          for (NimbusSummary _iter147 : struct.nimbuses)
+          for (NimbusSummary _iter177 : struct.nimbuses)
           {
-            _iter147.write(oprot);
+            _iter177.write(oprot);
           }
           oprot.writeListEnd();
         }
@@ -682,23 +682,23 @@
       org.apache.storm.thrift.protocol.TTupleProtocol oprot = (org.apache.storm.thrift.protocol.TTupleProtocol) prot;
       {
         oprot.writeI32(struct.supervisors.size());
-        for (SupervisorSummary _iter148 : struct.supervisors)
+        for (SupervisorSummary _iter178 : struct.supervisors)
         {
-          _iter148.write(oprot);
+          _iter178.write(oprot);
         }
       }
       {
         oprot.writeI32(struct.topologies.size());
-        for (TopologySummary _iter149 : struct.topologies)
+        for (TopologySummary _iter179 : struct.topologies)
         {
-          _iter149.write(oprot);
+          _iter179.write(oprot);
         }
       }
       {
         oprot.writeI32(struct.nimbuses.size());
-        for (NimbusSummary _iter150 : struct.nimbuses)
+        for (NimbusSummary _iter180 : struct.nimbuses)
         {
-          _iter150.write(oprot);
+          _iter180.write(oprot);
         }
       }
     }
@@ -707,38 +707,38 @@
     public void read(org.apache.storm.thrift.protocol.TProtocol prot, ClusterSummary struct) throws org.apache.storm.thrift.TException {
       org.apache.storm.thrift.protocol.TTupleProtocol iprot = (org.apache.storm.thrift.protocol.TTupleProtocol) prot;
       {
-        org.apache.storm.thrift.protocol.TList _list151 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-        struct.supervisors = new java.util.ArrayList<SupervisorSummary>(_list151.size);
-        @org.apache.storm.thrift.annotation.Nullable SupervisorSummary _elem152;
-        for (int _i153 = 0; _i153 < _list151.size; ++_i153)
+        org.apache.storm.thrift.protocol.TList _list181 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+        struct.supervisors = new java.util.ArrayList<SupervisorSummary>(_list181.size);
+        @org.apache.storm.thrift.annotation.Nullable SupervisorSummary _elem182;
+        for (int _i183 = 0; _i183 < _list181.size; ++_i183)
         {
-          _elem152 = new SupervisorSummary();
-          _elem152.read(iprot);
-          struct.supervisors.add(_elem152);
+          _elem182 = new SupervisorSummary();
+          _elem182.read(iprot);
+          struct.supervisors.add(_elem182);
         }
       }
       struct.set_supervisors_isSet(true);
       {
-        org.apache.storm.thrift.protocol.TList _list154 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-        struct.topologies = new java.util.ArrayList<TopologySummary>(_list154.size);
-        @org.apache.storm.thrift.annotation.Nullable TopologySummary _elem155;
-        for (int _i156 = 0; _i156 < _list154.size; ++_i156)
+        org.apache.storm.thrift.protocol.TList _list184 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+        struct.topologies = new java.util.ArrayList<TopologySummary>(_list184.size);
+        @org.apache.storm.thrift.annotation.Nullable TopologySummary _elem185;
+        for (int _i186 = 0; _i186 < _list184.size; ++_i186)
         {
-          _elem155 = new TopologySummary();
-          _elem155.read(iprot);
-          struct.topologies.add(_elem155);
+          _elem185 = new TopologySummary();
+          _elem185.read(iprot);
+          struct.topologies.add(_elem185);
         }
       }
       struct.set_topologies_isSet(true);
       {
-        org.apache.storm.thrift.protocol.TList _list157 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-        struct.nimbuses = new java.util.ArrayList<NimbusSummary>(_list157.size);
-        @org.apache.storm.thrift.annotation.Nullable NimbusSummary _elem158;
-        for (int _i159 = 0; _i159 < _list157.size; ++_i159)
+        org.apache.storm.thrift.protocol.TList _list187 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+        struct.nimbuses = new java.util.ArrayList<NimbusSummary>(_list187.size);
+        @org.apache.storm.thrift.annotation.Nullable NimbusSummary _elem188;
+        for (int _i189 = 0; _i189 < _list187.size; ++_i189)
         {
-          _elem158 = new NimbusSummary();
-          _elem158.read(iprot);
-          struct.nimbuses.add(_elem158);
+          _elem188 = new NimbusSummary();
+          _elem188.read(iprot);
+          struct.nimbuses.add(_elem188);
         }
       }
       struct.set_nimbuses_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/ClusterWorkerHeartbeat.java b/storm-client/src/jvm/org/apache/storm/generated/ClusterWorkerHeartbeat.java
index 43aa8fe..ef3d016 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/ClusterWorkerHeartbeat.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/ClusterWorkerHeartbeat.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class ClusterWorkerHeartbeat implements org.apache.storm.thrift.TBase<ClusterWorkerHeartbeat, ClusterWorkerHeartbeat._Fields>, java.io.Serializable, Cloneable, Comparable<ClusterWorkerHeartbeat> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("ClusterWorkerHeartbeat");
 
@@ -605,17 +605,17 @@
           case 2: // EXECUTOR_STATS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map772 = iprot.readMapBegin();
-                struct.executor_stats = new java.util.HashMap<ExecutorInfo,ExecutorStats>(2*_map772.size);
-                @org.apache.storm.thrift.annotation.Nullable ExecutorInfo _key773;
-                @org.apache.storm.thrift.annotation.Nullable ExecutorStats _val774;
-                for (int _i775 = 0; _i775 < _map772.size; ++_i775)
+                org.apache.storm.thrift.protocol.TMap _map822 = iprot.readMapBegin();
+                struct.executor_stats = new java.util.HashMap<ExecutorInfo,ExecutorStats>(2*_map822.size);
+                @org.apache.storm.thrift.annotation.Nullable ExecutorInfo _key823;
+                @org.apache.storm.thrift.annotation.Nullable ExecutorStats _val824;
+                for (int _i825 = 0; _i825 < _map822.size; ++_i825)
                 {
-                  _key773 = new ExecutorInfo();
-                  _key773.read(iprot);
-                  _val774 = new ExecutorStats();
-                  _val774.read(iprot);
-                  struct.executor_stats.put(_key773, _val774);
+                  _key823 = new ExecutorInfo();
+                  _key823.read(iprot);
+                  _val824 = new ExecutorStats();
+                  _val824.read(iprot);
+                  struct.executor_stats.put(_key823, _val824);
                 }
                 iprot.readMapEnd();
               }
@@ -662,10 +662,10 @@
         oprot.writeFieldBegin(EXECUTOR_STATS_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.STRUCT, struct.executor_stats.size()));
-          for (java.util.Map.Entry<ExecutorInfo, ExecutorStats> _iter776 : struct.executor_stats.entrySet())
+          for (java.util.Map.Entry<ExecutorInfo, ExecutorStats> _iter826 : struct.executor_stats.entrySet())
           {
-            _iter776.getKey().write(oprot);
-            _iter776.getValue().write(oprot);
+            _iter826.getKey().write(oprot);
+            _iter826.getValue().write(oprot);
           }
           oprot.writeMapEnd();
         }
@@ -697,10 +697,10 @@
       oprot.writeString(struct.storm_id);
       {
         oprot.writeI32(struct.executor_stats.size());
-        for (java.util.Map.Entry<ExecutorInfo, ExecutorStats> _iter777 : struct.executor_stats.entrySet())
+        for (java.util.Map.Entry<ExecutorInfo, ExecutorStats> _iter827 : struct.executor_stats.entrySet())
         {
-          _iter777.getKey().write(oprot);
-          _iter777.getValue().write(oprot);
+          _iter827.getKey().write(oprot);
+          _iter827.getValue().write(oprot);
         }
       }
       oprot.writeI32(struct.time_secs);
@@ -713,17 +713,17 @@
       struct.storm_id = iprot.readString();
       struct.set_storm_id_isSet(true);
       {
-        org.apache.storm.thrift.protocol.TMap _map778 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-        struct.executor_stats = new java.util.HashMap<ExecutorInfo,ExecutorStats>(2*_map778.size);
-        @org.apache.storm.thrift.annotation.Nullable ExecutorInfo _key779;
-        @org.apache.storm.thrift.annotation.Nullable ExecutorStats _val780;
-        for (int _i781 = 0; _i781 < _map778.size; ++_i781)
+        org.apache.storm.thrift.protocol.TMap _map828 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+        struct.executor_stats = new java.util.HashMap<ExecutorInfo,ExecutorStats>(2*_map828.size);
+        @org.apache.storm.thrift.annotation.Nullable ExecutorInfo _key829;
+        @org.apache.storm.thrift.annotation.Nullable ExecutorStats _val830;
+        for (int _i831 = 0; _i831 < _map828.size; ++_i831)
         {
-          _key779 = new ExecutorInfo();
-          _key779.read(iprot);
-          _val780 = new ExecutorStats();
-          _val780.read(iprot);
-          struct.executor_stats.put(_key779, _val780);
+          _key829 = new ExecutorInfo();
+          _key829.read(iprot);
+          _val830 = new ExecutorStats();
+          _val830.read(iprot);
+          struct.executor_stats.put(_key829, _val830);
         }
       }
       struct.set_executor_stats_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/CommonAggregateStats.java b/storm-client/src/jvm/org/apache/storm/generated/CommonAggregateStats.java
index ae23480..4f889e6 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/CommonAggregateStats.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/CommonAggregateStats.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class CommonAggregateStats implements org.apache.storm.thrift.TBase<CommonAggregateStats, CommonAggregateStats._Fields>, java.io.Serializable, Cloneable, Comparable<CommonAggregateStats> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("CommonAggregateStats");
 
@@ -835,15 +835,15 @@
           case 7: // RESOURCES_MAP
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map396 = iprot.readMapBegin();
-                struct.resources_map = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map396.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key397;
-                double _val398;
-                for (int _i399 = 0; _i399 < _map396.size; ++_i399)
+                org.apache.storm.thrift.protocol.TMap _map426 = iprot.readMapBegin();
+                struct.resources_map = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map426.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key427;
+                double _val428;
+                for (int _i429 = 0; _i429 < _map426.size; ++_i429)
                 {
-                  _key397 = iprot.readString();
-                  _val398 = iprot.readDouble();
-                  struct.resources_map.put(_key397, _val398);
+                  _key427 = iprot.readString();
+                  _val428 = iprot.readDouble();
+                  struct.resources_map.put(_key427, _val428);
                 }
                 iprot.readMapEnd();
               }
@@ -900,10 +900,10 @@
           oprot.writeFieldBegin(RESOURCES_MAP_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, struct.resources_map.size()));
-            for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter400 : struct.resources_map.entrySet())
+            for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter430 : struct.resources_map.entrySet())
             {
-              oprot.writeString(_iter400.getKey());
-              oprot.writeDouble(_iter400.getValue());
+              oprot.writeString(_iter430.getKey());
+              oprot.writeDouble(_iter430.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -971,10 +971,10 @@
       if (struct.is_set_resources_map()) {
         {
           oprot.writeI32(struct.resources_map.size());
-          for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter401 : struct.resources_map.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter431 : struct.resources_map.entrySet())
           {
-            oprot.writeString(_iter401.getKey());
-            oprot.writeDouble(_iter401.getValue());
+            oprot.writeString(_iter431.getKey());
+            oprot.writeDouble(_iter431.getValue());
           }
         }
       }
@@ -1010,15 +1010,15 @@
       }
       if (incoming.get(6)) {
         {
-          org.apache.storm.thrift.protocol.TMap _map402 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, iprot.readI32());
-          struct.resources_map = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map402.size);
-          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key403;
-          double _val404;
-          for (int _i405 = 0; _i405 < _map402.size; ++_i405)
+          org.apache.storm.thrift.protocol.TMap _map432 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, iprot.readI32());
+          struct.resources_map = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map432.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key433;
+          double _val434;
+          for (int _i435 = 0; _i435 < _map432.size; ++_i435)
           {
-            _key403 = iprot.readString();
-            _val404 = iprot.readDouble();
-            struct.resources_map.put(_key403, _val404);
+            _key433 = iprot.readString();
+            _val434 = iprot.readDouble();
+            struct.resources_map.put(_key433, _val434);
           }
         }
         struct.set_resources_map_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/ComponentAggregateStats.java b/storm-client/src/jvm/org/apache/storm/generated/ComponentAggregateStats.java
index eed66e6..fda3205 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/ComponentAggregateStats.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/ComponentAggregateStats.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class ComponentAggregateStats implements org.apache.storm.thrift.TBase<ComponentAggregateStats, ComponentAggregateStats._Fields>, java.io.Serializable, Cloneable, Comparable<ComponentAggregateStats> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("ComponentAggregateStats");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/ComponentCommon.java b/storm-client/src/jvm/org/apache/storm/generated/ComponentCommon.java
index cffaea4..5ba5556 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/ComponentCommon.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/ComponentCommon.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class ComponentCommon implements org.apache.storm.thrift.TBase<ComponentCommon, ComponentCommon._Fields>, java.io.Serializable, Cloneable, Comparable<ComponentCommon> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("ComponentCommon");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/ComponentObject.java b/storm-client/src/jvm/org/apache/storm/generated/ComponentObject.java
index e6562c0..599df7b 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/ComponentObject.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/ComponentObject.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class ComponentObject extends org.apache.storm.thrift.TUnion<ComponentObject, ComponentObject._Fields> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("ComponentObject");
   private static final org.apache.storm.thrift.protocol.TField SERIALIZED_JAVA_FIELD_DESC = new org.apache.storm.thrift.protocol.TField("serialized_java", org.apache.storm.thrift.protocol.TType.STRING, (short)1);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/ComponentPageInfo.java b/storm-client/src/jvm/org/apache/storm/generated/ComponentPageInfo.java
index bb96e9a..44700bb 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/ComponentPageInfo.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/ComponentPageInfo.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class ComponentPageInfo implements org.apache.storm.thrift.TBase<ComponentPageInfo, ComponentPageInfo._Fields>, java.io.Serializable, Cloneable, Comparable<ComponentPageInfo> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("ComponentPageInfo");
 
@@ -1727,16 +1727,16 @@
           case 7: // WINDOW_TO_STATS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map510 = iprot.readMapBegin();
-                struct.window_to_stats = new java.util.HashMap<java.lang.String,ComponentAggregateStats>(2*_map510.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key511;
-                @org.apache.storm.thrift.annotation.Nullable ComponentAggregateStats _val512;
-                for (int _i513 = 0; _i513 < _map510.size; ++_i513)
+                org.apache.storm.thrift.protocol.TMap _map560 = iprot.readMapBegin();
+                struct.window_to_stats = new java.util.HashMap<java.lang.String,ComponentAggregateStats>(2*_map560.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key561;
+                @org.apache.storm.thrift.annotation.Nullable ComponentAggregateStats _val562;
+                for (int _i563 = 0; _i563 < _map560.size; ++_i563)
                 {
-                  _key511 = iprot.readString();
-                  _val512 = new ComponentAggregateStats();
-                  _val512.read(iprot);
-                  struct.window_to_stats.put(_key511, _val512);
+                  _key561 = iprot.readString();
+                  _val562 = new ComponentAggregateStats();
+                  _val562.read(iprot);
+                  struct.window_to_stats.put(_key561, _val562);
                 }
                 iprot.readMapEnd();
               }
@@ -1748,17 +1748,17 @@
           case 8: // GSID_TO_INPUT_STATS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map514 = iprot.readMapBegin();
-                struct.gsid_to_input_stats = new java.util.HashMap<GlobalStreamId,ComponentAggregateStats>(2*_map514.size);
-                @org.apache.storm.thrift.annotation.Nullable GlobalStreamId _key515;
-                @org.apache.storm.thrift.annotation.Nullable ComponentAggregateStats _val516;
-                for (int _i517 = 0; _i517 < _map514.size; ++_i517)
+                org.apache.storm.thrift.protocol.TMap _map564 = iprot.readMapBegin();
+                struct.gsid_to_input_stats = new java.util.HashMap<GlobalStreamId,ComponentAggregateStats>(2*_map564.size);
+                @org.apache.storm.thrift.annotation.Nullable GlobalStreamId _key565;
+                @org.apache.storm.thrift.annotation.Nullable ComponentAggregateStats _val566;
+                for (int _i567 = 0; _i567 < _map564.size; ++_i567)
                 {
-                  _key515 = new GlobalStreamId();
-                  _key515.read(iprot);
-                  _val516 = new ComponentAggregateStats();
-                  _val516.read(iprot);
-                  struct.gsid_to_input_stats.put(_key515, _val516);
+                  _key565 = new GlobalStreamId();
+                  _key565.read(iprot);
+                  _val566 = new ComponentAggregateStats();
+                  _val566.read(iprot);
+                  struct.gsid_to_input_stats.put(_key565, _val566);
                 }
                 iprot.readMapEnd();
               }
@@ -1770,16 +1770,16 @@
           case 9: // SID_TO_OUTPUT_STATS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map518 = iprot.readMapBegin();
-                struct.sid_to_output_stats = new java.util.HashMap<java.lang.String,ComponentAggregateStats>(2*_map518.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key519;
-                @org.apache.storm.thrift.annotation.Nullable ComponentAggregateStats _val520;
-                for (int _i521 = 0; _i521 < _map518.size; ++_i521)
+                org.apache.storm.thrift.protocol.TMap _map568 = iprot.readMapBegin();
+                struct.sid_to_output_stats = new java.util.HashMap<java.lang.String,ComponentAggregateStats>(2*_map568.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key569;
+                @org.apache.storm.thrift.annotation.Nullable ComponentAggregateStats _val570;
+                for (int _i571 = 0; _i571 < _map568.size; ++_i571)
                 {
-                  _key519 = iprot.readString();
-                  _val520 = new ComponentAggregateStats();
-                  _val520.read(iprot);
-                  struct.sid_to_output_stats.put(_key519, _val520);
+                  _key569 = iprot.readString();
+                  _val570 = new ComponentAggregateStats();
+                  _val570.read(iprot);
+                  struct.sid_to_output_stats.put(_key569, _val570);
                 }
                 iprot.readMapEnd();
               }
@@ -1791,14 +1791,14 @@
           case 10: // EXEC_STATS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.LIST) {
               {
-                org.apache.storm.thrift.protocol.TList _list522 = iprot.readListBegin();
-                struct.exec_stats = new java.util.ArrayList<ExecutorAggregateStats>(_list522.size);
-                @org.apache.storm.thrift.annotation.Nullable ExecutorAggregateStats _elem523;
-                for (int _i524 = 0; _i524 < _list522.size; ++_i524)
+                org.apache.storm.thrift.protocol.TList _list572 = iprot.readListBegin();
+                struct.exec_stats = new java.util.ArrayList<ExecutorAggregateStats>(_list572.size);
+                @org.apache.storm.thrift.annotation.Nullable ExecutorAggregateStats _elem573;
+                for (int _i574 = 0; _i574 < _list572.size; ++_i574)
                 {
-                  _elem523 = new ExecutorAggregateStats();
-                  _elem523.read(iprot);
-                  struct.exec_stats.add(_elem523);
+                  _elem573 = new ExecutorAggregateStats();
+                  _elem573.read(iprot);
+                  struct.exec_stats.add(_elem573);
                 }
                 iprot.readListEnd();
               }
@@ -1810,14 +1810,14 @@
           case 11: // ERRORS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.LIST) {
               {
-                org.apache.storm.thrift.protocol.TList _list525 = iprot.readListBegin();
-                struct.errors = new java.util.ArrayList<ErrorInfo>(_list525.size);
-                @org.apache.storm.thrift.annotation.Nullable ErrorInfo _elem526;
-                for (int _i527 = 0; _i527 < _list525.size; ++_i527)
+                org.apache.storm.thrift.protocol.TList _list575 = iprot.readListBegin();
+                struct.errors = new java.util.ArrayList<ErrorInfo>(_list575.size);
+                @org.apache.storm.thrift.annotation.Nullable ErrorInfo _elem576;
+                for (int _i577 = 0; _i577 < _list575.size; ++_i577)
                 {
-                  _elem526 = new ErrorInfo();
-                  _elem526.read(iprot);
-                  struct.errors.add(_elem526);
+                  _elem576 = new ErrorInfo();
+                  _elem576.read(iprot);
+                  struct.errors.add(_elem576);
                 }
                 iprot.readListEnd();
               }
@@ -1862,15 +1862,15 @@
           case 16: // RESOURCES_MAP
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map528 = iprot.readMapBegin();
-                struct.resources_map = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map528.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key529;
-                double _val530;
-                for (int _i531 = 0; _i531 < _map528.size; ++_i531)
+                org.apache.storm.thrift.protocol.TMap _map578 = iprot.readMapBegin();
+                struct.resources_map = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map578.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key579;
+                double _val580;
+                for (int _i581 = 0; _i581 < _map578.size; ++_i581)
                 {
-                  _key529 = iprot.readString();
-                  _val530 = iprot.readDouble();
-                  struct.resources_map.put(_key529, _val530);
+                  _key579 = iprot.readString();
+                  _val580 = iprot.readDouble();
+                  struct.resources_map.put(_key579, _val580);
                 }
                 iprot.readMapEnd();
               }
@@ -1931,10 +1931,10 @@
           oprot.writeFieldBegin(WINDOW_TO_STATS_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRUCT, struct.window_to_stats.size()));
-            for (java.util.Map.Entry<java.lang.String, ComponentAggregateStats> _iter532 : struct.window_to_stats.entrySet())
+            for (java.util.Map.Entry<java.lang.String, ComponentAggregateStats> _iter582 : struct.window_to_stats.entrySet())
             {
-              oprot.writeString(_iter532.getKey());
-              _iter532.getValue().write(oprot);
+              oprot.writeString(_iter582.getKey());
+              _iter582.getValue().write(oprot);
             }
             oprot.writeMapEnd();
           }
@@ -1946,10 +1946,10 @@
           oprot.writeFieldBegin(GSID_TO_INPUT_STATS_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.STRUCT, struct.gsid_to_input_stats.size()));
-            for (java.util.Map.Entry<GlobalStreamId, ComponentAggregateStats> _iter533 : struct.gsid_to_input_stats.entrySet())
+            for (java.util.Map.Entry<GlobalStreamId, ComponentAggregateStats> _iter583 : struct.gsid_to_input_stats.entrySet())
             {
-              _iter533.getKey().write(oprot);
-              _iter533.getValue().write(oprot);
+              _iter583.getKey().write(oprot);
+              _iter583.getValue().write(oprot);
             }
             oprot.writeMapEnd();
           }
@@ -1961,10 +1961,10 @@
           oprot.writeFieldBegin(SID_TO_OUTPUT_STATS_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRUCT, struct.sid_to_output_stats.size()));
-            for (java.util.Map.Entry<java.lang.String, ComponentAggregateStats> _iter534 : struct.sid_to_output_stats.entrySet())
+            for (java.util.Map.Entry<java.lang.String, ComponentAggregateStats> _iter584 : struct.sid_to_output_stats.entrySet())
             {
-              oprot.writeString(_iter534.getKey());
-              _iter534.getValue().write(oprot);
+              oprot.writeString(_iter584.getKey());
+              _iter584.getValue().write(oprot);
             }
             oprot.writeMapEnd();
           }
@@ -1976,9 +1976,9 @@
           oprot.writeFieldBegin(EXEC_STATS_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, struct.exec_stats.size()));
-            for (ExecutorAggregateStats _iter535 : struct.exec_stats)
+            for (ExecutorAggregateStats _iter585 : struct.exec_stats)
             {
-              _iter535.write(oprot);
+              _iter585.write(oprot);
             }
             oprot.writeListEnd();
           }
@@ -1990,9 +1990,9 @@
           oprot.writeFieldBegin(ERRORS_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, struct.errors.size()));
-            for (ErrorInfo _iter536 : struct.errors)
+            for (ErrorInfo _iter586 : struct.errors)
             {
-              _iter536.write(oprot);
+              _iter586.write(oprot);
             }
             oprot.writeListEnd();
           }
@@ -2030,10 +2030,10 @@
           oprot.writeFieldBegin(RESOURCES_MAP_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, struct.resources_map.size()));
-            for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter537 : struct.resources_map.entrySet())
+            for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter587 : struct.resources_map.entrySet())
             {
-              oprot.writeString(_iter537.getKey());
-              oprot.writeDouble(_iter537.getValue());
+              oprot.writeString(_iter587.getKey());
+              oprot.writeDouble(_iter587.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -2118,48 +2118,48 @@
       if (struct.is_set_window_to_stats()) {
         {
           oprot.writeI32(struct.window_to_stats.size());
-          for (java.util.Map.Entry<java.lang.String, ComponentAggregateStats> _iter538 : struct.window_to_stats.entrySet())
+          for (java.util.Map.Entry<java.lang.String, ComponentAggregateStats> _iter588 : struct.window_to_stats.entrySet())
           {
-            oprot.writeString(_iter538.getKey());
-            _iter538.getValue().write(oprot);
+            oprot.writeString(_iter588.getKey());
+            _iter588.getValue().write(oprot);
           }
         }
       }
       if (struct.is_set_gsid_to_input_stats()) {
         {
           oprot.writeI32(struct.gsid_to_input_stats.size());
-          for (java.util.Map.Entry<GlobalStreamId, ComponentAggregateStats> _iter539 : struct.gsid_to_input_stats.entrySet())
+          for (java.util.Map.Entry<GlobalStreamId, ComponentAggregateStats> _iter589 : struct.gsid_to_input_stats.entrySet())
           {
-            _iter539.getKey().write(oprot);
-            _iter539.getValue().write(oprot);
+            _iter589.getKey().write(oprot);
+            _iter589.getValue().write(oprot);
           }
         }
       }
       if (struct.is_set_sid_to_output_stats()) {
         {
           oprot.writeI32(struct.sid_to_output_stats.size());
-          for (java.util.Map.Entry<java.lang.String, ComponentAggregateStats> _iter540 : struct.sid_to_output_stats.entrySet())
+          for (java.util.Map.Entry<java.lang.String, ComponentAggregateStats> _iter590 : struct.sid_to_output_stats.entrySet())
           {
-            oprot.writeString(_iter540.getKey());
-            _iter540.getValue().write(oprot);
+            oprot.writeString(_iter590.getKey());
+            _iter590.getValue().write(oprot);
           }
         }
       }
       if (struct.is_set_exec_stats()) {
         {
           oprot.writeI32(struct.exec_stats.size());
-          for (ExecutorAggregateStats _iter541 : struct.exec_stats)
+          for (ExecutorAggregateStats _iter591 : struct.exec_stats)
           {
-            _iter541.write(oprot);
+            _iter591.write(oprot);
           }
         }
       }
       if (struct.is_set_errors()) {
         {
           oprot.writeI32(struct.errors.size());
-          for (ErrorInfo _iter542 : struct.errors)
+          for (ErrorInfo _iter592 : struct.errors)
           {
-            _iter542.write(oprot);
+            _iter592.write(oprot);
           }
         }
       }
@@ -2178,10 +2178,10 @@
       if (struct.is_set_resources_map()) {
         {
           oprot.writeI32(struct.resources_map.size());
-          for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter543 : struct.resources_map.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter593 : struct.resources_map.entrySet())
           {
-            oprot.writeString(_iter543.getKey());
-            oprot.writeDouble(_iter543.getValue());
+            oprot.writeString(_iter593.getKey());
+            oprot.writeDouble(_iter593.getValue());
           }
         }
       }
@@ -2213,77 +2213,77 @@
       }
       if (incoming.get(4)) {
         {
-          org.apache.storm.thrift.protocol.TMap _map544 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.window_to_stats = new java.util.HashMap<java.lang.String,ComponentAggregateStats>(2*_map544.size);
-          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key545;
-          @org.apache.storm.thrift.annotation.Nullable ComponentAggregateStats _val546;
-          for (int _i547 = 0; _i547 < _map544.size; ++_i547)
+          org.apache.storm.thrift.protocol.TMap _map594 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.window_to_stats = new java.util.HashMap<java.lang.String,ComponentAggregateStats>(2*_map594.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key595;
+          @org.apache.storm.thrift.annotation.Nullable ComponentAggregateStats _val596;
+          for (int _i597 = 0; _i597 < _map594.size; ++_i597)
           {
-            _key545 = iprot.readString();
-            _val546 = new ComponentAggregateStats();
-            _val546.read(iprot);
-            struct.window_to_stats.put(_key545, _val546);
+            _key595 = iprot.readString();
+            _val596 = new ComponentAggregateStats();
+            _val596.read(iprot);
+            struct.window_to_stats.put(_key595, _val596);
           }
         }
         struct.set_window_to_stats_isSet(true);
       }
       if (incoming.get(5)) {
         {
-          org.apache.storm.thrift.protocol.TMap _map548 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.gsid_to_input_stats = new java.util.HashMap<GlobalStreamId,ComponentAggregateStats>(2*_map548.size);
-          @org.apache.storm.thrift.annotation.Nullable GlobalStreamId _key549;
-          @org.apache.storm.thrift.annotation.Nullable ComponentAggregateStats _val550;
-          for (int _i551 = 0; _i551 < _map548.size; ++_i551)
+          org.apache.storm.thrift.protocol.TMap _map598 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRUCT, org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.gsid_to_input_stats = new java.util.HashMap<GlobalStreamId,ComponentAggregateStats>(2*_map598.size);
+          @org.apache.storm.thrift.annotation.Nullable GlobalStreamId _key599;
+          @org.apache.storm.thrift.annotation.Nullable ComponentAggregateStats _val600;
+          for (int _i601 = 0; _i601 < _map598.size; ++_i601)
           {
-            _key549 = new GlobalStreamId();
-            _key549.read(iprot);
-            _val550 = new ComponentAggregateStats();
-            _val550.read(iprot);
-            struct.gsid_to_input_stats.put(_key549, _val550);
+            _key599 = new GlobalStreamId();
+            _key599.read(iprot);
+            _val600 = new ComponentAggregateStats();
+            _val600.read(iprot);
+            struct.gsid_to_input_stats.put(_key599, _val600);
           }
         }
         struct.set_gsid_to_input_stats_isSet(true);
       }
       if (incoming.get(6)) {
         {
-          org.apache.storm.thrift.protocol.TMap _map552 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.sid_to_output_stats = new java.util.HashMap<java.lang.String,ComponentAggregateStats>(2*_map552.size);
-          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key553;
-          @org.apache.storm.thrift.annotation.Nullable ComponentAggregateStats _val554;
-          for (int _i555 = 0; _i555 < _map552.size; ++_i555)
+          org.apache.storm.thrift.protocol.TMap _map602 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.sid_to_output_stats = new java.util.HashMap<java.lang.String,ComponentAggregateStats>(2*_map602.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key603;
+          @org.apache.storm.thrift.annotation.Nullable ComponentAggregateStats _val604;
+          for (int _i605 = 0; _i605 < _map602.size; ++_i605)
           {
-            _key553 = iprot.readString();
-            _val554 = new ComponentAggregateStats();
-            _val554.read(iprot);
-            struct.sid_to_output_stats.put(_key553, _val554);
+            _key603 = iprot.readString();
+            _val604 = new ComponentAggregateStats();
+            _val604.read(iprot);
+            struct.sid_to_output_stats.put(_key603, _val604);
           }
         }
         struct.set_sid_to_output_stats_isSet(true);
       }
       if (incoming.get(7)) {
         {
-          org.apache.storm.thrift.protocol.TList _list556 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.exec_stats = new java.util.ArrayList<ExecutorAggregateStats>(_list556.size);
-          @org.apache.storm.thrift.annotation.Nullable ExecutorAggregateStats _elem557;
-          for (int _i558 = 0; _i558 < _list556.size; ++_i558)
+          org.apache.storm.thrift.protocol.TList _list606 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.exec_stats = new java.util.ArrayList<ExecutorAggregateStats>(_list606.size);
+          @org.apache.storm.thrift.annotation.Nullable ExecutorAggregateStats _elem607;
+          for (int _i608 = 0; _i608 < _list606.size; ++_i608)
           {
-            _elem557 = new ExecutorAggregateStats();
-            _elem557.read(iprot);
-            struct.exec_stats.add(_elem557);
+            _elem607 = new ExecutorAggregateStats();
+            _elem607.read(iprot);
+            struct.exec_stats.add(_elem607);
           }
         }
         struct.set_exec_stats_isSet(true);
       }
       if (incoming.get(8)) {
         {
-          org.apache.storm.thrift.protocol.TList _list559 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.errors = new java.util.ArrayList<ErrorInfo>(_list559.size);
-          @org.apache.storm.thrift.annotation.Nullable ErrorInfo _elem560;
-          for (int _i561 = 0; _i561 < _list559.size; ++_i561)
+          org.apache.storm.thrift.protocol.TList _list609 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.errors = new java.util.ArrayList<ErrorInfo>(_list609.size);
+          @org.apache.storm.thrift.annotation.Nullable ErrorInfo _elem610;
+          for (int _i611 = 0; _i611 < _list609.size; ++_i611)
           {
-            _elem560 = new ErrorInfo();
-            _elem560.read(iprot);
-            struct.errors.add(_elem560);
+            _elem610 = new ErrorInfo();
+            _elem610.read(iprot);
+            struct.errors.add(_elem610);
           }
         }
         struct.set_errors_isSet(true);
@@ -2307,15 +2307,15 @@
       }
       if (incoming.get(13)) {
         {
-          org.apache.storm.thrift.protocol.TMap _map562 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, iprot.readI32());
-          struct.resources_map = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map562.size);
-          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key563;
-          double _val564;
-          for (int _i565 = 0; _i565 < _map562.size; ++_i565)
+          org.apache.storm.thrift.protocol.TMap _map612 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, iprot.readI32());
+          struct.resources_map = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map612.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key613;
+          double _val614;
+          for (int _i615 = 0; _i615 < _map612.size; ++_i615)
           {
-            _key563 = iprot.readString();
-            _val564 = iprot.readDouble();
-            struct.resources_map.put(_key563, _val564);
+            _key613 = iprot.readString();
+            _val614 = iprot.readDouble();
+            struct.resources_map.put(_key613, _val614);
           }
         }
         struct.set_resources_map_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/ComponentType.java b/storm-client/src/jvm/org/apache/storm/generated/ComponentType.java
index 67ed928..d9b6f39 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/ComponentType.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/ComponentType.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public enum ComponentType implements org.apache.storm.thrift.TEnum {
   BOLT(1),
   SPOUT(2);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/Credentials.java b/storm-client/src/jvm/org/apache/storm/generated/Credentials.java
index 437a030..e249607 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/Credentials.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/Credentials.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class Credentials implements org.apache.storm.thrift.TBase<Credentials, Credentials._Fields>, java.io.Serializable, Cloneable, Comparable<Credentials> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("Credentials");
 
@@ -423,15 +423,15 @@
           case 1: // CREDS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map596 = iprot.readMapBegin();
-                struct.creds = new java.util.HashMap<java.lang.String,java.lang.String>(2*_map596.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key597;
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _val598;
-                for (int _i599 = 0; _i599 < _map596.size; ++_i599)
+                org.apache.storm.thrift.protocol.TMap _map646 = iprot.readMapBegin();
+                struct.creds = new java.util.HashMap<java.lang.String,java.lang.String>(2*_map646.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key647;
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _val648;
+                for (int _i649 = 0; _i649 < _map646.size; ++_i649)
                 {
-                  _key597 = iprot.readString();
-                  _val598 = iprot.readString();
-                  struct.creds.put(_key597, _val598);
+                  _key647 = iprot.readString();
+                  _val648 = iprot.readString();
+                  struct.creds.put(_key647, _val648);
                 }
                 iprot.readMapEnd();
               }
@@ -465,10 +465,10 @@
         oprot.writeFieldBegin(CREDS_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRING, struct.creds.size()));
-          for (java.util.Map.Entry<java.lang.String, java.lang.String> _iter600 : struct.creds.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.lang.String> _iter650 : struct.creds.entrySet())
           {
-            oprot.writeString(_iter600.getKey());
-            oprot.writeString(_iter600.getValue());
+            oprot.writeString(_iter650.getKey());
+            oprot.writeString(_iter650.getValue());
           }
           oprot.writeMapEnd();
         }
@@ -500,10 +500,10 @@
       org.apache.storm.thrift.protocol.TTupleProtocol oprot = (org.apache.storm.thrift.protocol.TTupleProtocol) prot;
       {
         oprot.writeI32(struct.creds.size());
-        for (java.util.Map.Entry<java.lang.String, java.lang.String> _iter601 : struct.creds.entrySet())
+        for (java.util.Map.Entry<java.lang.String, java.lang.String> _iter651 : struct.creds.entrySet())
         {
-          oprot.writeString(_iter601.getKey());
-          oprot.writeString(_iter601.getValue());
+          oprot.writeString(_iter651.getKey());
+          oprot.writeString(_iter651.getValue());
         }
       }
       java.util.BitSet optionals = new java.util.BitSet();
@@ -520,15 +520,15 @@
     public void read(org.apache.storm.thrift.protocol.TProtocol prot, Credentials struct) throws org.apache.storm.thrift.TException {
       org.apache.storm.thrift.protocol.TTupleProtocol iprot = (org.apache.storm.thrift.protocol.TTupleProtocol) prot;
       {
-        org.apache.storm.thrift.protocol.TMap _map602 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRING, iprot.readI32());
-        struct.creds = new java.util.HashMap<java.lang.String,java.lang.String>(2*_map602.size);
-        @org.apache.storm.thrift.annotation.Nullable java.lang.String _key603;
-        @org.apache.storm.thrift.annotation.Nullable java.lang.String _val604;
-        for (int _i605 = 0; _i605 < _map602.size; ++_i605)
+        org.apache.storm.thrift.protocol.TMap _map652 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRING, iprot.readI32());
+        struct.creds = new java.util.HashMap<java.lang.String,java.lang.String>(2*_map652.size);
+        @org.apache.storm.thrift.annotation.Nullable java.lang.String _key653;
+        @org.apache.storm.thrift.annotation.Nullable java.lang.String _val654;
+        for (int _i655 = 0; _i655 < _map652.size; ++_i655)
         {
-          _key603 = iprot.readString();
-          _val604 = iprot.readString();
-          struct.creds.put(_key603, _val604);
+          _key653 = iprot.readString();
+          _val654 = iprot.readString();
+          struct.creds.put(_key653, _val654);
         }
       }
       struct.set_creds_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/DRPCExceptionType.java b/storm-client/src/jvm/org/apache/storm/generated/DRPCExceptionType.java
index 78ade53..8cf25c3 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/DRPCExceptionType.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/DRPCExceptionType.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public enum DRPCExceptionType implements org.apache.storm.thrift.TEnum {
   INTERNAL_ERROR(0),
   SERVER_SHUTDOWN(1),
diff --git a/storm-client/src/jvm/org/apache/storm/generated/DRPCExecutionException.java b/storm-client/src/jvm/org/apache/storm/generated/DRPCExecutionException.java
index 4fdd1d5..1422358 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/DRPCExecutionException.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/DRPCExecutionException.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class DRPCExecutionException extends org.apache.storm.thrift.TException implements org.apache.storm.thrift.TBase<DRPCExecutionException, DRPCExecutionException._Fields>, java.io.Serializable, Cloneable, Comparable<DRPCExecutionException> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("DRPCExecutionException");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/DRPCRequest.java b/storm-client/src/jvm/org/apache/storm/generated/DRPCRequest.java
index 39f42da..d46aac9 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/DRPCRequest.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/DRPCRequest.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class DRPCRequest implements org.apache.storm.thrift.TBase<DRPCRequest, DRPCRequest._Fields>, java.io.Serializable, Cloneable, Comparable<DRPCRequest> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("DRPCRequest");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/DebugOptions.java b/storm-client/src/jvm/org/apache/storm/generated/DebugOptions.java
index 20554c3..c593358 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/DebugOptions.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/DebugOptions.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class DebugOptions implements org.apache.storm.thrift.TBase<DebugOptions, DebugOptions._Fields>, java.io.Serializable, Cloneable, Comparable<DebugOptions> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("DebugOptions");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/DistributedRPC.java b/storm-client/src/jvm/org/apache/storm/generated/DistributedRPC.java
index 3841bc6..064c625 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/DistributedRPC.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/DistributedRPC.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class DistributedRPC {
 
   public interface Iface {
diff --git a/storm-client/src/jvm/org/apache/storm/generated/DistributedRPCInvocations.java b/storm-client/src/jvm/org/apache/storm/generated/DistributedRPCInvocations.java
index 867d403..b9ef78f 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/DistributedRPCInvocations.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/DistributedRPCInvocations.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class DistributedRPCInvocations {
 
   public interface Iface {
diff --git a/storm-client/src/jvm/org/apache/storm/generated/ErrorInfo.java b/storm-client/src/jvm/org/apache/storm/generated/ErrorInfo.java
index a607082..9715dbc 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/ErrorInfo.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/ErrorInfo.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class ErrorInfo implements org.apache.storm.thrift.TBase<ErrorInfo, ErrorInfo._Fields>, java.io.Serializable, Cloneable, Comparable<ErrorInfo> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("ErrorInfo");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/ExecutorAggregateStats.java b/storm-client/src/jvm/org/apache/storm/generated/ExecutorAggregateStats.java
index 99c7e51..f6257e8 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/ExecutorAggregateStats.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/ExecutorAggregateStats.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class ExecutorAggregateStats implements org.apache.storm.thrift.TBase<ExecutorAggregateStats, ExecutorAggregateStats._Fields>, java.io.Serializable, Cloneable, Comparable<ExecutorAggregateStats> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("ExecutorAggregateStats");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/ExecutorInfo.java b/storm-client/src/jvm/org/apache/storm/generated/ExecutorInfo.java
index de0f007..55b09f8 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/ExecutorInfo.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/ExecutorInfo.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class ExecutorInfo implements org.apache.storm.thrift.TBase<ExecutorInfo, ExecutorInfo._Fields>, java.io.Serializable, Cloneable, Comparable<ExecutorInfo> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("ExecutorInfo");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/ExecutorSpecificStats.java b/storm-client/src/jvm/org/apache/storm/generated/ExecutorSpecificStats.java
index b6301d6..29778e9 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/ExecutorSpecificStats.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/ExecutorSpecificStats.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class ExecutorSpecificStats extends org.apache.storm.thrift.TUnion<ExecutorSpecificStats, ExecutorSpecificStats._Fields> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("ExecutorSpecificStats");
   private static final org.apache.storm.thrift.protocol.TField BOLT_FIELD_DESC = new org.apache.storm.thrift.protocol.TField("bolt", org.apache.storm.thrift.protocol.TType.STRUCT, (short)1);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/ExecutorStats.java b/storm-client/src/jvm/org/apache/storm/generated/ExecutorStats.java
index b66fd67..7e7345c 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/ExecutorStats.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/ExecutorStats.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class ExecutorStats implements org.apache.storm.thrift.TBase<ExecutorStats, ExecutorStats._Fields>, java.io.Serializable, Cloneable, Comparable<ExecutorStats> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("ExecutorStats");
 
@@ -633,27 +633,27 @@
           case 1: // EMITTED
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map320 = iprot.readMapBegin();
-                struct.emitted = new java.util.HashMap<java.lang.String,java.util.Map<java.lang.String,java.lang.Long>>(2*_map320.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key321;
-                @org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Long> _val322;
-                for (int _i323 = 0; _i323 < _map320.size; ++_i323)
+                org.apache.storm.thrift.protocol.TMap _map350 = iprot.readMapBegin();
+                struct.emitted = new java.util.HashMap<java.lang.String,java.util.Map<java.lang.String,java.lang.Long>>(2*_map350.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key351;
+                @org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Long> _val352;
+                for (int _i353 = 0; _i353 < _map350.size; ++_i353)
                 {
-                  _key321 = iprot.readString();
+                  _key351 = iprot.readString();
                   {
-                    org.apache.storm.thrift.protocol.TMap _map324 = iprot.readMapBegin();
-                    _val322 = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map324.size);
-                    @org.apache.storm.thrift.annotation.Nullable java.lang.String _key325;
-                    long _val326;
-                    for (int _i327 = 0; _i327 < _map324.size; ++_i327)
+                    org.apache.storm.thrift.protocol.TMap _map354 = iprot.readMapBegin();
+                    _val352 = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map354.size);
+                    @org.apache.storm.thrift.annotation.Nullable java.lang.String _key355;
+                    long _val356;
+                    for (int _i357 = 0; _i357 < _map354.size; ++_i357)
                     {
-                      _key325 = iprot.readString();
-                      _val326 = iprot.readI64();
-                      _val322.put(_key325, _val326);
+                      _key355 = iprot.readString();
+                      _val356 = iprot.readI64();
+                      _val352.put(_key355, _val356);
                     }
                     iprot.readMapEnd();
                   }
-                  struct.emitted.put(_key321, _val322);
+                  struct.emitted.put(_key351, _val352);
                 }
                 iprot.readMapEnd();
               }
@@ -665,27 +665,27 @@
           case 2: // TRANSFERRED
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map328 = iprot.readMapBegin();
-                struct.transferred = new java.util.HashMap<java.lang.String,java.util.Map<java.lang.String,java.lang.Long>>(2*_map328.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key329;
-                @org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Long> _val330;
-                for (int _i331 = 0; _i331 < _map328.size; ++_i331)
+                org.apache.storm.thrift.protocol.TMap _map358 = iprot.readMapBegin();
+                struct.transferred = new java.util.HashMap<java.lang.String,java.util.Map<java.lang.String,java.lang.Long>>(2*_map358.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key359;
+                @org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Long> _val360;
+                for (int _i361 = 0; _i361 < _map358.size; ++_i361)
                 {
-                  _key329 = iprot.readString();
+                  _key359 = iprot.readString();
                   {
-                    org.apache.storm.thrift.protocol.TMap _map332 = iprot.readMapBegin();
-                    _val330 = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map332.size);
-                    @org.apache.storm.thrift.annotation.Nullable java.lang.String _key333;
-                    long _val334;
-                    for (int _i335 = 0; _i335 < _map332.size; ++_i335)
+                    org.apache.storm.thrift.protocol.TMap _map362 = iprot.readMapBegin();
+                    _val360 = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map362.size);
+                    @org.apache.storm.thrift.annotation.Nullable java.lang.String _key363;
+                    long _val364;
+                    for (int _i365 = 0; _i365 < _map362.size; ++_i365)
                     {
-                      _key333 = iprot.readString();
-                      _val334 = iprot.readI64();
-                      _val330.put(_key333, _val334);
+                      _key363 = iprot.readString();
+                      _val364 = iprot.readI64();
+                      _val360.put(_key363, _val364);
                     }
                     iprot.readMapEnd();
                   }
-                  struct.transferred.put(_key329, _val330);
+                  struct.transferred.put(_key359, _val360);
                 }
                 iprot.readMapEnd();
               }
@@ -728,15 +728,15 @@
         oprot.writeFieldBegin(EMITTED_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, struct.emitted.size()));
-          for (java.util.Map.Entry<java.lang.String, java.util.Map<java.lang.String,java.lang.Long>> _iter336 : struct.emitted.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.util.Map<java.lang.String,java.lang.Long>> _iter366 : struct.emitted.entrySet())
           {
-            oprot.writeString(_iter336.getKey());
+            oprot.writeString(_iter366.getKey());
             {
-              oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, _iter336.getValue().size()));
-              for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter337 : _iter336.getValue().entrySet())
+              oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, _iter366.getValue().size()));
+              for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter367 : _iter366.getValue().entrySet())
               {
-                oprot.writeString(_iter337.getKey());
-                oprot.writeI64(_iter337.getValue());
+                oprot.writeString(_iter367.getKey());
+                oprot.writeI64(_iter367.getValue());
               }
               oprot.writeMapEnd();
             }
@@ -749,15 +749,15 @@
         oprot.writeFieldBegin(TRANSFERRED_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, struct.transferred.size()));
-          for (java.util.Map.Entry<java.lang.String, java.util.Map<java.lang.String,java.lang.Long>> _iter338 : struct.transferred.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.util.Map<java.lang.String,java.lang.Long>> _iter368 : struct.transferred.entrySet())
           {
-            oprot.writeString(_iter338.getKey());
+            oprot.writeString(_iter368.getKey());
             {
-              oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, _iter338.getValue().size()));
-              for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter339 : _iter338.getValue().entrySet())
+              oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, _iter368.getValue().size()));
+              for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter369 : _iter368.getValue().entrySet())
               {
-                oprot.writeString(_iter339.getKey());
-                oprot.writeI64(_iter339.getValue());
+                oprot.writeString(_iter369.getKey());
+                oprot.writeI64(_iter369.getValue());
               }
               oprot.writeMapEnd();
             }
@@ -793,30 +793,30 @@
       org.apache.storm.thrift.protocol.TTupleProtocol oprot = (org.apache.storm.thrift.protocol.TTupleProtocol) prot;
       {
         oprot.writeI32(struct.emitted.size());
-        for (java.util.Map.Entry<java.lang.String, java.util.Map<java.lang.String,java.lang.Long>> _iter340 : struct.emitted.entrySet())
+        for (java.util.Map.Entry<java.lang.String, java.util.Map<java.lang.String,java.lang.Long>> _iter370 : struct.emitted.entrySet())
         {
-          oprot.writeString(_iter340.getKey());
+          oprot.writeString(_iter370.getKey());
           {
-            oprot.writeI32(_iter340.getValue().size());
-            for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter341 : _iter340.getValue().entrySet())
+            oprot.writeI32(_iter370.getValue().size());
+            for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter371 : _iter370.getValue().entrySet())
             {
-              oprot.writeString(_iter341.getKey());
-              oprot.writeI64(_iter341.getValue());
+              oprot.writeString(_iter371.getKey());
+              oprot.writeI64(_iter371.getValue());
             }
           }
         }
       }
       {
         oprot.writeI32(struct.transferred.size());
-        for (java.util.Map.Entry<java.lang.String, java.util.Map<java.lang.String,java.lang.Long>> _iter342 : struct.transferred.entrySet())
+        for (java.util.Map.Entry<java.lang.String, java.util.Map<java.lang.String,java.lang.Long>> _iter372 : struct.transferred.entrySet())
         {
-          oprot.writeString(_iter342.getKey());
+          oprot.writeString(_iter372.getKey());
           {
-            oprot.writeI32(_iter342.getValue().size());
-            for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter343 : _iter342.getValue().entrySet())
+            oprot.writeI32(_iter372.getValue().size());
+            for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter373 : _iter372.getValue().entrySet())
             {
-              oprot.writeString(_iter343.getKey());
-              oprot.writeI64(_iter343.getValue());
+              oprot.writeString(_iter373.getKey());
+              oprot.writeI64(_iter373.getValue());
             }
           }
         }
@@ -829,50 +829,50 @@
     public void read(org.apache.storm.thrift.protocol.TProtocol prot, ExecutorStats struct) throws org.apache.storm.thrift.TException {
       org.apache.storm.thrift.protocol.TTupleProtocol iprot = (org.apache.storm.thrift.protocol.TTupleProtocol) prot;
       {
-        org.apache.storm.thrift.protocol.TMap _map344 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, iprot.readI32());
-        struct.emitted = new java.util.HashMap<java.lang.String,java.util.Map<java.lang.String,java.lang.Long>>(2*_map344.size);
-        @org.apache.storm.thrift.annotation.Nullable java.lang.String _key345;
-        @org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Long> _val346;
-        for (int _i347 = 0; _i347 < _map344.size; ++_i347)
+        org.apache.storm.thrift.protocol.TMap _map374 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, iprot.readI32());
+        struct.emitted = new java.util.HashMap<java.lang.String,java.util.Map<java.lang.String,java.lang.Long>>(2*_map374.size);
+        @org.apache.storm.thrift.annotation.Nullable java.lang.String _key375;
+        @org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Long> _val376;
+        for (int _i377 = 0; _i377 < _map374.size; ++_i377)
         {
-          _key345 = iprot.readString();
+          _key375 = iprot.readString();
           {
-            org.apache.storm.thrift.protocol.TMap _map348 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
-            _val346 = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map348.size);
-            @org.apache.storm.thrift.annotation.Nullable java.lang.String _key349;
-            long _val350;
-            for (int _i351 = 0; _i351 < _map348.size; ++_i351)
+            org.apache.storm.thrift.protocol.TMap _map378 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
+            _val376 = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map378.size);
+            @org.apache.storm.thrift.annotation.Nullable java.lang.String _key379;
+            long _val380;
+            for (int _i381 = 0; _i381 < _map378.size; ++_i381)
             {
-              _key349 = iprot.readString();
-              _val350 = iprot.readI64();
-              _val346.put(_key349, _val350);
+              _key379 = iprot.readString();
+              _val380 = iprot.readI64();
+              _val376.put(_key379, _val380);
             }
           }
-          struct.emitted.put(_key345, _val346);
+          struct.emitted.put(_key375, _val376);
         }
       }
       struct.set_emitted_isSet(true);
       {
-        org.apache.storm.thrift.protocol.TMap _map352 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, iprot.readI32());
-        struct.transferred = new java.util.HashMap<java.lang.String,java.util.Map<java.lang.String,java.lang.Long>>(2*_map352.size);
-        @org.apache.storm.thrift.annotation.Nullable java.lang.String _key353;
-        @org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Long> _val354;
-        for (int _i355 = 0; _i355 < _map352.size; ++_i355)
+        org.apache.storm.thrift.protocol.TMap _map382 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, iprot.readI32());
+        struct.transferred = new java.util.HashMap<java.lang.String,java.util.Map<java.lang.String,java.lang.Long>>(2*_map382.size);
+        @org.apache.storm.thrift.annotation.Nullable java.lang.String _key383;
+        @org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Long> _val384;
+        for (int _i385 = 0; _i385 < _map382.size; ++_i385)
         {
-          _key353 = iprot.readString();
+          _key383 = iprot.readString();
           {
-            org.apache.storm.thrift.protocol.TMap _map356 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
-            _val354 = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map356.size);
-            @org.apache.storm.thrift.annotation.Nullable java.lang.String _key357;
-            long _val358;
-            for (int _i359 = 0; _i359 < _map356.size; ++_i359)
+            org.apache.storm.thrift.protocol.TMap _map386 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
+            _val384 = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map386.size);
+            @org.apache.storm.thrift.annotation.Nullable java.lang.String _key387;
+            long _val388;
+            for (int _i389 = 0; _i389 < _map386.size; ++_i389)
             {
-              _key357 = iprot.readString();
-              _val358 = iprot.readI64();
-              _val354.put(_key357, _val358);
+              _key387 = iprot.readString();
+              _val388 = iprot.readI64();
+              _val384.put(_key387, _val388);
             }
           }
-          struct.transferred.put(_key353, _val354);
+          struct.transferred.put(_key383, _val384);
         }
       }
       struct.set_transferred_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/ExecutorSummary.java b/storm-client/src/jvm/org/apache/storm/generated/ExecutorSummary.java
index 1e59c59..77267dd 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/ExecutorSummary.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/ExecutorSummary.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class ExecutorSummary implements org.apache.storm.thrift.TBase<ExecutorSummary, ExecutorSummary._Fields>, java.io.Serializable, Cloneable, Comparable<ExecutorSummary> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("ExecutorSummary");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/GetInfoOptions.java b/storm-client/src/jvm/org/apache/storm/generated/GetInfoOptions.java
index 1bffafb..d6c3dc7 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/GetInfoOptions.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/GetInfoOptions.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class GetInfoOptions implements org.apache.storm.thrift.TBase<GetInfoOptions, GetInfoOptions._Fields>, java.io.Serializable, Cloneable, Comparable<GetInfoOptions> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("GetInfoOptions");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/GlobalStreamId.java b/storm-client/src/jvm/org/apache/storm/generated/GlobalStreamId.java
index d442ee5..e4a7062 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/GlobalStreamId.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/GlobalStreamId.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class GlobalStreamId implements org.apache.storm.thrift.TBase<GlobalStreamId, GlobalStreamId._Fields>, java.io.Serializable, Cloneable, Comparable<GlobalStreamId> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("GlobalStreamId");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/Grouping.java b/storm-client/src/jvm/org/apache/storm/generated/Grouping.java
index dae7db2..01c3544 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/Grouping.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/Grouping.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class Grouping extends org.apache.storm.thrift.TUnion<Grouping, Grouping._Fields> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("Grouping");
   private static final org.apache.storm.thrift.protocol.TField FIELDS_FIELD_DESC = new org.apache.storm.thrift.protocol.TField("fields", org.apache.storm.thrift.protocol.TType.LIST, (short)1);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/HBAuthorizationException.java b/storm-client/src/jvm/org/apache/storm/generated/HBAuthorizationException.java
index 4cc0482..5156996 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/HBAuthorizationException.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/HBAuthorizationException.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class HBAuthorizationException extends org.apache.storm.thrift.TException implements org.apache.storm.thrift.TBase<HBAuthorizationException, HBAuthorizationException._Fields>, java.io.Serializable, Cloneable, Comparable<HBAuthorizationException> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("HBAuthorizationException");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/HBExecutionException.java b/storm-client/src/jvm/org/apache/storm/generated/HBExecutionException.java
index 0cebbaa..8c80374 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/HBExecutionException.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/HBExecutionException.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class HBExecutionException extends org.apache.storm.thrift.TException implements org.apache.storm.thrift.TBase<HBExecutionException, HBExecutionException._Fields>, java.io.Serializable, Cloneable, Comparable<HBExecutionException> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("HBExecutionException");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/HBMessage.java b/storm-client/src/jvm/org/apache/storm/generated/HBMessage.java
index 2766ee0..0c5e1da 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/HBMessage.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/HBMessage.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class HBMessage implements org.apache.storm.thrift.TBase<HBMessage, HBMessage._Fields>, java.io.Serializable, Cloneable, Comparable<HBMessage> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("HBMessage");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/HBMessageData.java b/storm-client/src/jvm/org/apache/storm/generated/HBMessageData.java
index 7415754..8af6927 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/HBMessageData.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/HBMessageData.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class HBMessageData extends org.apache.storm.thrift.TUnion<HBMessageData, HBMessageData._Fields> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("HBMessageData");
   private static final org.apache.storm.thrift.protocol.TField PATH_FIELD_DESC = new org.apache.storm.thrift.protocol.TField("path", org.apache.storm.thrift.protocol.TType.STRING, (short)1);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/HBNodes.java b/storm-client/src/jvm/org/apache/storm/generated/HBNodes.java
index 174dc26..4b4a84c 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/HBNodes.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/HBNodes.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class HBNodes implements org.apache.storm.thrift.TBase<HBNodes, HBNodes._Fields>, java.io.Serializable, Cloneable, Comparable<HBNodes> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("HBNodes");
 
@@ -341,13 +341,13 @@
           case 1: // PULSE_IDS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.LIST) {
               {
-                org.apache.storm.thrift.protocol.TList _list912 = iprot.readListBegin();
-                struct.pulseIds = new java.util.ArrayList<java.lang.String>(_list912.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _elem913;
-                for (int _i914 = 0; _i914 < _list912.size; ++_i914)
+                org.apache.storm.thrift.protocol.TList _list962 = iprot.readListBegin();
+                struct.pulseIds = new java.util.ArrayList<java.lang.String>(_list962.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _elem963;
+                for (int _i964 = 0; _i964 < _list962.size; ++_i964)
                 {
-                  _elem913 = iprot.readString();
-                  struct.pulseIds.add(_elem913);
+                  _elem963 = iprot.readString();
+                  struct.pulseIds.add(_elem963);
                 }
                 iprot.readListEnd();
               }
@@ -373,9 +373,9 @@
         oprot.writeFieldBegin(PULSE_IDS_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRING, struct.pulseIds.size()));
-          for (java.lang.String _iter915 : struct.pulseIds)
+          for (java.lang.String _iter965 : struct.pulseIds)
           {
-            oprot.writeString(_iter915);
+            oprot.writeString(_iter965);
           }
           oprot.writeListEnd();
         }
@@ -406,9 +406,9 @@
       if (struct.is_set_pulseIds()) {
         {
           oprot.writeI32(struct.pulseIds.size());
-          for (java.lang.String _iter916 : struct.pulseIds)
+          for (java.lang.String _iter966 : struct.pulseIds)
           {
-            oprot.writeString(_iter916);
+            oprot.writeString(_iter966);
           }
         }
       }
@@ -420,13 +420,13 @@
       java.util.BitSet incoming = iprot.readBitSet(1);
       if (incoming.get(0)) {
         {
-          org.apache.storm.thrift.protocol.TList _list917 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRING, iprot.readI32());
-          struct.pulseIds = new java.util.ArrayList<java.lang.String>(_list917.size);
-          @org.apache.storm.thrift.annotation.Nullable java.lang.String _elem918;
-          for (int _i919 = 0; _i919 < _list917.size; ++_i919)
+          org.apache.storm.thrift.protocol.TList _list967 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRING, iprot.readI32());
+          struct.pulseIds = new java.util.ArrayList<java.lang.String>(_list967.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _elem968;
+          for (int _i969 = 0; _i969 < _list967.size; ++_i969)
           {
-            _elem918 = iprot.readString();
-            struct.pulseIds.add(_elem918);
+            _elem968 = iprot.readString();
+            struct.pulseIds.add(_elem968);
           }
         }
         struct.set_pulseIds_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/HBPulse.java b/storm-client/src/jvm/org/apache/storm/generated/HBPulse.java
index 84458bf..a2d3c12 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/HBPulse.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/HBPulse.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class HBPulse implements org.apache.storm.thrift.TBase<HBPulse, HBPulse._Fields>, java.io.Serializable, Cloneable, Comparable<HBPulse> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("HBPulse");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/HBRecords.java b/storm-client/src/jvm/org/apache/storm/generated/HBRecords.java
index 26e2648..dcf95ce 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/HBRecords.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/HBRecords.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class HBRecords implements org.apache.storm.thrift.TBase<HBRecords, HBRecords._Fields>, java.io.Serializable, Cloneable, Comparable<HBRecords> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("HBRecords");
 
@@ -344,14 +344,14 @@
           case 1: // PULSES
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.LIST) {
               {
-                org.apache.storm.thrift.protocol.TList _list904 = iprot.readListBegin();
-                struct.pulses = new java.util.ArrayList<HBPulse>(_list904.size);
-                @org.apache.storm.thrift.annotation.Nullable HBPulse _elem905;
-                for (int _i906 = 0; _i906 < _list904.size; ++_i906)
+                org.apache.storm.thrift.protocol.TList _list954 = iprot.readListBegin();
+                struct.pulses = new java.util.ArrayList<HBPulse>(_list954.size);
+                @org.apache.storm.thrift.annotation.Nullable HBPulse _elem955;
+                for (int _i956 = 0; _i956 < _list954.size; ++_i956)
                 {
-                  _elem905 = new HBPulse();
-                  _elem905.read(iprot);
-                  struct.pulses.add(_elem905);
+                  _elem955 = new HBPulse();
+                  _elem955.read(iprot);
+                  struct.pulses.add(_elem955);
                 }
                 iprot.readListEnd();
               }
@@ -377,9 +377,9 @@
         oprot.writeFieldBegin(PULSES_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, struct.pulses.size()));
-          for (HBPulse _iter907 : struct.pulses)
+          for (HBPulse _iter957 : struct.pulses)
           {
-            _iter907.write(oprot);
+            _iter957.write(oprot);
           }
           oprot.writeListEnd();
         }
@@ -410,9 +410,9 @@
       if (struct.is_set_pulses()) {
         {
           oprot.writeI32(struct.pulses.size());
-          for (HBPulse _iter908 : struct.pulses)
+          for (HBPulse _iter958 : struct.pulses)
           {
-            _iter908.write(oprot);
+            _iter958.write(oprot);
           }
         }
       }
@@ -424,14 +424,14 @@
       java.util.BitSet incoming = iprot.readBitSet(1);
       if (incoming.get(0)) {
         {
-          org.apache.storm.thrift.protocol.TList _list909 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.pulses = new java.util.ArrayList<HBPulse>(_list909.size);
-          @org.apache.storm.thrift.annotation.Nullable HBPulse _elem910;
-          for (int _i911 = 0; _i911 < _list909.size; ++_i911)
+          org.apache.storm.thrift.protocol.TList _list959 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.pulses = new java.util.ArrayList<HBPulse>(_list959.size);
+          @org.apache.storm.thrift.annotation.Nullable HBPulse _elem960;
+          for (int _i961 = 0; _i961 < _list959.size; ++_i961)
           {
-            _elem910 = new HBPulse();
-            _elem910.read(iprot);
-            struct.pulses.add(_elem910);
+            _elem960 = new HBPulse();
+            _elem960.read(iprot);
+            struct.pulses.add(_elem960);
           }
         }
         struct.set_pulses_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/HBServerMessageType.java b/storm-client/src/jvm/org/apache/storm/generated/HBServerMessageType.java
index 7973290..9262719 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/HBServerMessageType.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/HBServerMessageType.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public enum HBServerMessageType implements org.apache.storm.thrift.TEnum {
   CREATE_PATH(0),
   CREATE_PATH_RESPONSE(1),
diff --git a/storm-client/src/jvm/org/apache/storm/generated/IllegalStateException.java b/storm-client/src/jvm/org/apache/storm/generated/IllegalStateException.java
index c3f05e1..94d8f22 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/IllegalStateException.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/IllegalStateException.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class IllegalStateException extends org.apache.storm.thrift.TException implements org.apache.storm.thrift.TBase<IllegalStateException, IllegalStateException._Fields>, java.io.Serializable, Cloneable, Comparable<IllegalStateException> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("IllegalStateException");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/InvalidTopologyException.java b/storm-client/src/jvm/org/apache/storm/generated/InvalidTopologyException.java
index d15cbd5..4320fd5 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/InvalidTopologyException.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/InvalidTopologyException.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class InvalidTopologyException extends org.apache.storm.thrift.TException implements org.apache.storm.thrift.TBase<InvalidTopologyException, InvalidTopologyException._Fields>, java.io.Serializable, Cloneable, Comparable<InvalidTopologyException> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("InvalidTopologyException");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/JavaObject.java b/storm-client/src/jvm/org/apache/storm/generated/JavaObject.java
index ff84aa3..420e3e8 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/JavaObject.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/JavaObject.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class JavaObject implements org.apache.storm.thrift.TBase<JavaObject, JavaObject._Fields>, java.io.Serializable, Cloneable, Comparable<JavaObject> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("JavaObject");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/JavaObjectArg.java b/storm-client/src/jvm/org/apache/storm/generated/JavaObjectArg.java
index 89ad6c6..1bed24c 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/JavaObjectArg.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/JavaObjectArg.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class JavaObjectArg extends org.apache.storm.thrift.TUnion<JavaObjectArg, JavaObjectArg._Fields> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("JavaObjectArg");
   private static final org.apache.storm.thrift.protocol.TField INT_ARG_FIELD_DESC = new org.apache.storm.thrift.protocol.TField("int_arg", org.apache.storm.thrift.protocol.TType.I32, (short)1);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/KeyAlreadyExistsException.java b/storm-client/src/jvm/org/apache/storm/generated/KeyAlreadyExistsException.java
index 79bc94a..2baf8e3 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/KeyAlreadyExistsException.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/KeyAlreadyExistsException.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class KeyAlreadyExistsException extends org.apache.storm.thrift.TException implements org.apache.storm.thrift.TBase<KeyAlreadyExistsException, KeyAlreadyExistsException._Fields>, java.io.Serializable, Cloneable, Comparable<KeyAlreadyExistsException> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("KeyAlreadyExistsException");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/KeyNotFoundException.java b/storm-client/src/jvm/org/apache/storm/generated/KeyNotFoundException.java
index 55039ee..a2d9d81 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/KeyNotFoundException.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/KeyNotFoundException.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class KeyNotFoundException extends org.apache.storm.thrift.TException implements org.apache.storm.thrift.TBase<KeyNotFoundException, KeyNotFoundException._Fields>, java.io.Serializable, Cloneable, Comparable<KeyNotFoundException> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("KeyNotFoundException");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/KillOptions.java b/storm-client/src/jvm/org/apache/storm/generated/KillOptions.java
index 1ab7693..c4b9580 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/KillOptions.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/KillOptions.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class KillOptions implements org.apache.storm.thrift.TBase<KillOptions, KillOptions._Fields>, java.io.Serializable, Cloneable, Comparable<KillOptions> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("KillOptions");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/LSApprovedWorkers.java b/storm-client/src/jvm/org/apache/storm/generated/LSApprovedWorkers.java
index 4370d1a..1c905d8 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/LSApprovedWorkers.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/LSApprovedWorkers.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class LSApprovedWorkers implements org.apache.storm.thrift.TBase<LSApprovedWorkers, LSApprovedWorkers._Fields>, java.io.Serializable, Cloneable, Comparable<LSApprovedWorkers> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("LSApprovedWorkers");
 
@@ -341,15 +341,15 @@
           case 1: // APPROVED_WORKERS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map800 = iprot.readMapBegin();
-                struct.approved_workers = new java.util.HashMap<java.lang.String,java.lang.Integer>(2*_map800.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key801;
-                int _val802;
-                for (int _i803 = 0; _i803 < _map800.size; ++_i803)
+                org.apache.storm.thrift.protocol.TMap _map850 = iprot.readMapBegin();
+                struct.approved_workers = new java.util.HashMap<java.lang.String,java.lang.Integer>(2*_map850.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key851;
+                int _val852;
+                for (int _i853 = 0; _i853 < _map850.size; ++_i853)
                 {
-                  _key801 = iprot.readString();
-                  _val802 = iprot.readI32();
-                  struct.approved_workers.put(_key801, _val802);
+                  _key851 = iprot.readString();
+                  _val852 = iprot.readI32();
+                  struct.approved_workers.put(_key851, _val852);
                 }
                 iprot.readMapEnd();
               }
@@ -375,10 +375,10 @@
         oprot.writeFieldBegin(APPROVED_WORKERS_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I32, struct.approved_workers.size()));
-          for (java.util.Map.Entry<java.lang.String, java.lang.Integer> _iter804 : struct.approved_workers.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.lang.Integer> _iter854 : struct.approved_workers.entrySet())
           {
-            oprot.writeString(_iter804.getKey());
-            oprot.writeI32(_iter804.getValue());
+            oprot.writeString(_iter854.getKey());
+            oprot.writeI32(_iter854.getValue());
           }
           oprot.writeMapEnd();
         }
@@ -403,10 +403,10 @@
       org.apache.storm.thrift.protocol.TTupleProtocol oprot = (org.apache.storm.thrift.protocol.TTupleProtocol) prot;
       {
         oprot.writeI32(struct.approved_workers.size());
-        for (java.util.Map.Entry<java.lang.String, java.lang.Integer> _iter805 : struct.approved_workers.entrySet())
+        for (java.util.Map.Entry<java.lang.String, java.lang.Integer> _iter855 : struct.approved_workers.entrySet())
         {
-          oprot.writeString(_iter805.getKey());
-          oprot.writeI32(_iter805.getValue());
+          oprot.writeString(_iter855.getKey());
+          oprot.writeI32(_iter855.getValue());
         }
       }
     }
@@ -415,15 +415,15 @@
     public void read(org.apache.storm.thrift.protocol.TProtocol prot, LSApprovedWorkers struct) throws org.apache.storm.thrift.TException {
       org.apache.storm.thrift.protocol.TTupleProtocol iprot = (org.apache.storm.thrift.protocol.TTupleProtocol) prot;
       {
-        org.apache.storm.thrift.protocol.TMap _map806 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I32, iprot.readI32());
-        struct.approved_workers = new java.util.HashMap<java.lang.String,java.lang.Integer>(2*_map806.size);
-        @org.apache.storm.thrift.annotation.Nullable java.lang.String _key807;
-        int _val808;
-        for (int _i809 = 0; _i809 < _map806.size; ++_i809)
+        org.apache.storm.thrift.protocol.TMap _map856 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I32, iprot.readI32());
+        struct.approved_workers = new java.util.HashMap<java.lang.String,java.lang.Integer>(2*_map856.size);
+        @org.apache.storm.thrift.annotation.Nullable java.lang.String _key857;
+        int _val858;
+        for (int _i859 = 0; _i859 < _map856.size; ++_i859)
         {
-          _key807 = iprot.readString();
-          _val808 = iprot.readI32();
-          struct.approved_workers.put(_key807, _val808);
+          _key857 = iprot.readString();
+          _val858 = iprot.readI32();
+          struct.approved_workers.put(_key857, _val858);
         }
       }
       struct.set_approved_workers_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/LSSupervisorAssignments.java b/storm-client/src/jvm/org/apache/storm/generated/LSSupervisorAssignments.java
index 129259a..e9a7f10 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/LSSupervisorAssignments.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/LSSupervisorAssignments.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class LSSupervisorAssignments implements org.apache.storm.thrift.TBase<LSSupervisorAssignments, LSSupervisorAssignments._Fields>, java.io.Serializable, Cloneable, Comparable<LSSupervisorAssignments> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("LSSupervisorAssignments");
 
@@ -352,16 +352,16 @@
           case 1: // ASSIGNMENTS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map810 = iprot.readMapBegin();
-                struct.assignments = new java.util.HashMap<java.lang.Integer,LocalAssignment>(2*_map810.size);
-                int _key811;
-                @org.apache.storm.thrift.annotation.Nullable LocalAssignment _val812;
-                for (int _i813 = 0; _i813 < _map810.size; ++_i813)
+                org.apache.storm.thrift.protocol.TMap _map860 = iprot.readMapBegin();
+                struct.assignments = new java.util.HashMap<java.lang.Integer,LocalAssignment>(2*_map860.size);
+                int _key861;
+                @org.apache.storm.thrift.annotation.Nullable LocalAssignment _val862;
+                for (int _i863 = 0; _i863 < _map860.size; ++_i863)
                 {
-                  _key811 = iprot.readI32();
-                  _val812 = new LocalAssignment();
-                  _val812.read(iprot);
-                  struct.assignments.put(_key811, _val812);
+                  _key861 = iprot.readI32();
+                  _val862 = new LocalAssignment();
+                  _val862.read(iprot);
+                  struct.assignments.put(_key861, _val862);
                 }
                 iprot.readMapEnd();
               }
@@ -387,10 +387,10 @@
         oprot.writeFieldBegin(ASSIGNMENTS_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.I32, org.apache.storm.thrift.protocol.TType.STRUCT, struct.assignments.size()));
-          for (java.util.Map.Entry<java.lang.Integer, LocalAssignment> _iter814 : struct.assignments.entrySet())
+          for (java.util.Map.Entry<java.lang.Integer, LocalAssignment> _iter864 : struct.assignments.entrySet())
           {
-            oprot.writeI32(_iter814.getKey());
-            _iter814.getValue().write(oprot);
+            oprot.writeI32(_iter864.getKey());
+            _iter864.getValue().write(oprot);
           }
           oprot.writeMapEnd();
         }
@@ -415,10 +415,10 @@
       org.apache.storm.thrift.protocol.TTupleProtocol oprot = (org.apache.storm.thrift.protocol.TTupleProtocol) prot;
       {
         oprot.writeI32(struct.assignments.size());
-        for (java.util.Map.Entry<java.lang.Integer, LocalAssignment> _iter815 : struct.assignments.entrySet())
+        for (java.util.Map.Entry<java.lang.Integer, LocalAssignment> _iter865 : struct.assignments.entrySet())
         {
-          oprot.writeI32(_iter815.getKey());
-          _iter815.getValue().write(oprot);
+          oprot.writeI32(_iter865.getKey());
+          _iter865.getValue().write(oprot);
         }
       }
     }
@@ -427,16 +427,16 @@
     public void read(org.apache.storm.thrift.protocol.TProtocol prot, LSSupervisorAssignments struct) throws org.apache.storm.thrift.TException {
       org.apache.storm.thrift.protocol.TTupleProtocol iprot = (org.apache.storm.thrift.protocol.TTupleProtocol) prot;
       {
-        org.apache.storm.thrift.protocol.TMap _map816 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.I32, org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-        struct.assignments = new java.util.HashMap<java.lang.Integer,LocalAssignment>(2*_map816.size);
-        int _key817;
-        @org.apache.storm.thrift.annotation.Nullable LocalAssignment _val818;
-        for (int _i819 = 0; _i819 < _map816.size; ++_i819)
+        org.apache.storm.thrift.protocol.TMap _map866 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.I32, org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+        struct.assignments = new java.util.HashMap<java.lang.Integer,LocalAssignment>(2*_map866.size);
+        int _key867;
+        @org.apache.storm.thrift.annotation.Nullable LocalAssignment _val868;
+        for (int _i869 = 0; _i869 < _map866.size; ++_i869)
         {
-          _key817 = iprot.readI32();
-          _val818 = new LocalAssignment();
-          _val818.read(iprot);
-          struct.assignments.put(_key817, _val818);
+          _key867 = iprot.readI32();
+          _val868 = new LocalAssignment();
+          _val868.read(iprot);
+          struct.assignments.put(_key867, _val868);
         }
       }
       struct.set_assignments_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/LSSupervisorId.java b/storm-client/src/jvm/org/apache/storm/generated/LSSupervisorId.java
index 416dcb6..c6af8d3 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/LSSupervisorId.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/LSSupervisorId.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class LSSupervisorId implements org.apache.storm.thrift.TBase<LSSupervisorId, LSSupervisorId._Fields>, java.io.Serializable, Cloneable, Comparable<LSSupervisorId> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("LSSupervisorId");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/LSTopoHistory.java b/storm-client/src/jvm/org/apache/storm/generated/LSTopoHistory.java
index ef7e0ab..baec85c 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/LSTopoHistory.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/LSTopoHistory.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class LSTopoHistory implements org.apache.storm.thrift.TBase<LSTopoHistory, LSTopoHistory._Fields>, java.io.Serializable, Cloneable, Comparable<LSTopoHistory> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("LSTopoHistory");
 
@@ -631,13 +631,13 @@
           case 3: // USERS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.LIST) {
               {
-                org.apache.storm.thrift.protocol.TList _list828 = iprot.readListBegin();
-                struct.users = new java.util.ArrayList<java.lang.String>(_list828.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _elem829;
-                for (int _i830 = 0; _i830 < _list828.size; ++_i830)
+                org.apache.storm.thrift.protocol.TList _list878 = iprot.readListBegin();
+                struct.users = new java.util.ArrayList<java.lang.String>(_list878.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _elem879;
+                for (int _i880 = 0; _i880 < _list878.size; ++_i880)
                 {
-                  _elem829 = iprot.readString();
-                  struct.users.add(_elem829);
+                  _elem879 = iprot.readString();
+                  struct.users.add(_elem879);
                 }
                 iprot.readListEnd();
               }
@@ -649,13 +649,13 @@
           case 4: // GROUPS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.LIST) {
               {
-                org.apache.storm.thrift.protocol.TList _list831 = iprot.readListBegin();
-                struct.groups = new java.util.ArrayList<java.lang.String>(_list831.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _elem832;
-                for (int _i833 = 0; _i833 < _list831.size; ++_i833)
+                org.apache.storm.thrift.protocol.TList _list881 = iprot.readListBegin();
+                struct.groups = new java.util.ArrayList<java.lang.String>(_list881.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _elem882;
+                for (int _i883 = 0; _i883 < _list881.size; ++_i883)
                 {
-                  _elem832 = iprot.readString();
-                  struct.groups.add(_elem832);
+                  _elem882 = iprot.readString();
+                  struct.groups.add(_elem882);
                 }
                 iprot.readListEnd();
               }
@@ -689,9 +689,9 @@
         oprot.writeFieldBegin(USERS_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRING, struct.users.size()));
-          for (java.lang.String _iter834 : struct.users)
+          for (java.lang.String _iter884 : struct.users)
           {
-            oprot.writeString(_iter834);
+            oprot.writeString(_iter884);
           }
           oprot.writeListEnd();
         }
@@ -701,9 +701,9 @@
         oprot.writeFieldBegin(GROUPS_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRING, struct.groups.size()));
-          for (java.lang.String _iter835 : struct.groups)
+          for (java.lang.String _iter885 : struct.groups)
           {
-            oprot.writeString(_iter835);
+            oprot.writeString(_iter885);
           }
           oprot.writeListEnd();
         }
@@ -730,16 +730,16 @@
       oprot.writeI64(struct.time_stamp);
       {
         oprot.writeI32(struct.users.size());
-        for (java.lang.String _iter836 : struct.users)
+        for (java.lang.String _iter886 : struct.users)
         {
-          oprot.writeString(_iter836);
+          oprot.writeString(_iter886);
         }
       }
       {
         oprot.writeI32(struct.groups.size());
-        for (java.lang.String _iter837 : struct.groups)
+        for (java.lang.String _iter887 : struct.groups)
         {
-          oprot.writeString(_iter837);
+          oprot.writeString(_iter887);
         }
       }
     }
@@ -752,24 +752,24 @@
       struct.time_stamp = iprot.readI64();
       struct.set_time_stamp_isSet(true);
       {
-        org.apache.storm.thrift.protocol.TList _list838 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRING, iprot.readI32());
-        struct.users = new java.util.ArrayList<java.lang.String>(_list838.size);
-        @org.apache.storm.thrift.annotation.Nullable java.lang.String _elem839;
-        for (int _i840 = 0; _i840 < _list838.size; ++_i840)
+        org.apache.storm.thrift.protocol.TList _list888 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRING, iprot.readI32());
+        struct.users = new java.util.ArrayList<java.lang.String>(_list888.size);
+        @org.apache.storm.thrift.annotation.Nullable java.lang.String _elem889;
+        for (int _i890 = 0; _i890 < _list888.size; ++_i890)
         {
-          _elem839 = iprot.readString();
-          struct.users.add(_elem839);
+          _elem889 = iprot.readString();
+          struct.users.add(_elem889);
         }
       }
       struct.set_users_isSet(true);
       {
-        org.apache.storm.thrift.protocol.TList _list841 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRING, iprot.readI32());
-        struct.groups = new java.util.ArrayList<java.lang.String>(_list841.size);
-        @org.apache.storm.thrift.annotation.Nullable java.lang.String _elem842;
-        for (int _i843 = 0; _i843 < _list841.size; ++_i843)
+        org.apache.storm.thrift.protocol.TList _list891 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRING, iprot.readI32());
+        struct.groups = new java.util.ArrayList<java.lang.String>(_list891.size);
+        @org.apache.storm.thrift.annotation.Nullable java.lang.String _elem892;
+        for (int _i893 = 0; _i893 < _list891.size; ++_i893)
         {
-          _elem842 = iprot.readString();
-          struct.groups.add(_elem842);
+          _elem892 = iprot.readString();
+          struct.groups.add(_elem892);
         }
       }
       struct.set_groups_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/LSTopoHistoryList.java b/storm-client/src/jvm/org/apache/storm/generated/LSTopoHistoryList.java
index 17afd66..5170897 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/LSTopoHistoryList.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/LSTopoHistoryList.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class LSTopoHistoryList implements org.apache.storm.thrift.TBase<LSTopoHistoryList, LSTopoHistoryList._Fields>, java.io.Serializable, Cloneable, Comparable<LSTopoHistoryList> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("LSTopoHistoryList");
 
@@ -348,14 +348,14 @@
           case 1: // TOPO_HISTORY
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.LIST) {
               {
-                org.apache.storm.thrift.protocol.TList _list844 = iprot.readListBegin();
-                struct.topo_history = new java.util.ArrayList<LSTopoHistory>(_list844.size);
-                @org.apache.storm.thrift.annotation.Nullable LSTopoHistory _elem845;
-                for (int _i846 = 0; _i846 < _list844.size; ++_i846)
+                org.apache.storm.thrift.protocol.TList _list894 = iprot.readListBegin();
+                struct.topo_history = new java.util.ArrayList<LSTopoHistory>(_list894.size);
+                @org.apache.storm.thrift.annotation.Nullable LSTopoHistory _elem895;
+                for (int _i896 = 0; _i896 < _list894.size; ++_i896)
                 {
-                  _elem845 = new LSTopoHistory();
-                  _elem845.read(iprot);
-                  struct.topo_history.add(_elem845);
+                  _elem895 = new LSTopoHistory();
+                  _elem895.read(iprot);
+                  struct.topo_history.add(_elem895);
                 }
                 iprot.readListEnd();
               }
@@ -381,9 +381,9 @@
         oprot.writeFieldBegin(TOPO_HISTORY_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, struct.topo_history.size()));
-          for (LSTopoHistory _iter847 : struct.topo_history)
+          for (LSTopoHistory _iter897 : struct.topo_history)
           {
-            _iter847.write(oprot);
+            _iter897.write(oprot);
           }
           oprot.writeListEnd();
         }
@@ -408,9 +408,9 @@
       org.apache.storm.thrift.protocol.TTupleProtocol oprot = (org.apache.storm.thrift.protocol.TTupleProtocol) prot;
       {
         oprot.writeI32(struct.topo_history.size());
-        for (LSTopoHistory _iter848 : struct.topo_history)
+        for (LSTopoHistory _iter898 : struct.topo_history)
         {
-          _iter848.write(oprot);
+          _iter898.write(oprot);
         }
       }
     }
@@ -419,14 +419,14 @@
     public void read(org.apache.storm.thrift.protocol.TProtocol prot, LSTopoHistoryList struct) throws org.apache.storm.thrift.TException {
       org.apache.storm.thrift.protocol.TTupleProtocol iprot = (org.apache.storm.thrift.protocol.TTupleProtocol) prot;
       {
-        org.apache.storm.thrift.protocol.TList _list849 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-        struct.topo_history = new java.util.ArrayList<LSTopoHistory>(_list849.size);
-        @org.apache.storm.thrift.annotation.Nullable LSTopoHistory _elem850;
-        for (int _i851 = 0; _i851 < _list849.size; ++_i851)
+        org.apache.storm.thrift.protocol.TList _list899 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+        struct.topo_history = new java.util.ArrayList<LSTopoHistory>(_list899.size);
+        @org.apache.storm.thrift.annotation.Nullable LSTopoHistory _elem900;
+        for (int _i901 = 0; _i901 < _list899.size; ++_i901)
         {
-          _elem850 = new LSTopoHistory();
-          _elem850.read(iprot);
-          struct.topo_history.add(_elem850);
+          _elem900 = new LSTopoHistory();
+          _elem900.read(iprot);
+          struct.topo_history.add(_elem900);
         }
       }
       struct.set_topo_history_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/LSWorkerHeartbeat.java b/storm-client/src/jvm/org/apache/storm/generated/LSWorkerHeartbeat.java
index 4a01e3e..86b08c3 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/LSWorkerHeartbeat.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/LSWorkerHeartbeat.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class LSWorkerHeartbeat implements org.apache.storm.thrift.TBase<LSWorkerHeartbeat, LSWorkerHeartbeat._Fields>, java.io.Serializable, Cloneable, Comparable<LSWorkerHeartbeat> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("LSWorkerHeartbeat");
 
@@ -609,14 +609,14 @@
           case 3: // EXECUTORS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.LIST) {
               {
-                org.apache.storm.thrift.protocol.TList _list820 = iprot.readListBegin();
-                struct.executors = new java.util.ArrayList<ExecutorInfo>(_list820.size);
-                @org.apache.storm.thrift.annotation.Nullable ExecutorInfo _elem821;
-                for (int _i822 = 0; _i822 < _list820.size; ++_i822)
+                org.apache.storm.thrift.protocol.TList _list870 = iprot.readListBegin();
+                struct.executors = new java.util.ArrayList<ExecutorInfo>(_list870.size);
+                @org.apache.storm.thrift.annotation.Nullable ExecutorInfo _elem871;
+                for (int _i872 = 0; _i872 < _list870.size; ++_i872)
                 {
-                  _elem821 = new ExecutorInfo();
-                  _elem821.read(iprot);
-                  struct.executors.add(_elem821);
+                  _elem871 = new ExecutorInfo();
+                  _elem871.read(iprot);
+                  struct.executors.add(_elem871);
                 }
                 iprot.readListEnd();
               }
@@ -658,9 +658,9 @@
         oprot.writeFieldBegin(EXECUTORS_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, struct.executors.size()));
-          for (ExecutorInfo _iter823 : struct.executors)
+          for (ExecutorInfo _iter873 : struct.executors)
           {
-            _iter823.write(oprot);
+            _iter873.write(oprot);
           }
           oprot.writeListEnd();
         }
@@ -690,9 +690,9 @@
       oprot.writeString(struct.topology_id);
       {
         oprot.writeI32(struct.executors.size());
-        for (ExecutorInfo _iter824 : struct.executors)
+        for (ExecutorInfo _iter874 : struct.executors)
         {
-          _iter824.write(oprot);
+          _iter874.write(oprot);
         }
       }
       oprot.writeI32(struct.port);
@@ -706,14 +706,14 @@
       struct.topology_id = iprot.readString();
       struct.set_topology_id_isSet(true);
       {
-        org.apache.storm.thrift.protocol.TList _list825 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-        struct.executors = new java.util.ArrayList<ExecutorInfo>(_list825.size);
-        @org.apache.storm.thrift.annotation.Nullable ExecutorInfo _elem826;
-        for (int _i827 = 0; _i827 < _list825.size; ++_i827)
+        org.apache.storm.thrift.protocol.TList _list875 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+        struct.executors = new java.util.ArrayList<ExecutorInfo>(_list875.size);
+        @org.apache.storm.thrift.annotation.Nullable ExecutorInfo _elem876;
+        for (int _i877 = 0; _i877 < _list875.size; ++_i877)
         {
-          _elem826 = new ExecutorInfo();
-          _elem826.read(iprot);
-          struct.executors.add(_elem826);
+          _elem876 = new ExecutorInfo();
+          _elem876.read(iprot);
+          struct.executors.add(_elem876);
         }
       }
       struct.set_executors_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/ListBlobsResult.java b/storm-client/src/jvm/org/apache/storm/generated/ListBlobsResult.java
index 86b93e5..f653bf1 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/ListBlobsResult.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/ListBlobsResult.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class ListBlobsResult implements org.apache.storm.thrift.TBase<ListBlobsResult, ListBlobsResult._Fields>, java.io.Serializable, Cloneable, Comparable<ListBlobsResult> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("ListBlobsResult");
 
@@ -430,13 +430,13 @@
           case 1: // KEYS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.LIST) {
               {
-                org.apache.storm.thrift.protocol.TList _list614 = iprot.readListBegin();
-                struct.keys = new java.util.ArrayList<java.lang.String>(_list614.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _elem615;
-                for (int _i616 = 0; _i616 < _list614.size; ++_i616)
+                org.apache.storm.thrift.protocol.TList _list664 = iprot.readListBegin();
+                struct.keys = new java.util.ArrayList<java.lang.String>(_list664.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _elem665;
+                for (int _i666 = 0; _i666 < _list664.size; ++_i666)
                 {
-                  _elem615 = iprot.readString();
-                  struct.keys.add(_elem615);
+                  _elem665 = iprot.readString();
+                  struct.keys.add(_elem665);
                 }
                 iprot.readListEnd();
               }
@@ -470,9 +470,9 @@
         oprot.writeFieldBegin(KEYS_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRING, struct.keys.size()));
-          for (java.lang.String _iter617 : struct.keys)
+          for (java.lang.String _iter667 : struct.keys)
           {
-            oprot.writeString(_iter617);
+            oprot.writeString(_iter667);
           }
           oprot.writeListEnd();
         }
@@ -502,9 +502,9 @@
       org.apache.storm.thrift.protocol.TTupleProtocol oprot = (org.apache.storm.thrift.protocol.TTupleProtocol) prot;
       {
         oprot.writeI32(struct.keys.size());
-        for (java.lang.String _iter618 : struct.keys)
+        for (java.lang.String _iter668 : struct.keys)
         {
-          oprot.writeString(_iter618);
+          oprot.writeString(_iter668);
         }
       }
       oprot.writeString(struct.session);
@@ -514,13 +514,13 @@
     public void read(org.apache.storm.thrift.protocol.TProtocol prot, ListBlobsResult struct) throws org.apache.storm.thrift.TException {
       org.apache.storm.thrift.protocol.TTupleProtocol iprot = (org.apache.storm.thrift.protocol.TTupleProtocol) prot;
       {
-        org.apache.storm.thrift.protocol.TList _list619 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRING, iprot.readI32());
-        struct.keys = new java.util.ArrayList<java.lang.String>(_list619.size);
-        @org.apache.storm.thrift.annotation.Nullable java.lang.String _elem620;
-        for (int _i621 = 0; _i621 < _list619.size; ++_i621)
+        org.apache.storm.thrift.protocol.TList _list669 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRING, iprot.readI32());
+        struct.keys = new java.util.ArrayList<java.lang.String>(_list669.size);
+        @org.apache.storm.thrift.annotation.Nullable java.lang.String _elem670;
+        for (int _i671 = 0; _i671 < _list669.size; ++_i671)
         {
-          _elem620 = iprot.readString();
-          struct.keys.add(_elem620);
+          _elem670 = iprot.readString();
+          struct.keys.add(_elem670);
         }
       }
       struct.set_keys_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/LocalAssignment.java b/storm-client/src/jvm/org/apache/storm/generated/LocalAssignment.java
index 22095f9..a7ace14 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/LocalAssignment.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/LocalAssignment.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class LocalAssignment implements org.apache.storm.thrift.TBase<LocalAssignment, LocalAssignment._Fields>, java.io.Serializable, Cloneable, Comparable<LocalAssignment> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("LocalAssignment");
 
@@ -686,14 +686,14 @@
           case 2: // EXECUTORS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.LIST) {
               {
-                org.apache.storm.thrift.protocol.TList _list792 = iprot.readListBegin();
-                struct.executors = new java.util.ArrayList<ExecutorInfo>(_list792.size);
-                @org.apache.storm.thrift.annotation.Nullable ExecutorInfo _elem793;
-                for (int _i794 = 0; _i794 < _list792.size; ++_i794)
+                org.apache.storm.thrift.protocol.TList _list842 = iprot.readListBegin();
+                struct.executors = new java.util.ArrayList<ExecutorInfo>(_list842.size);
+                @org.apache.storm.thrift.annotation.Nullable ExecutorInfo _elem843;
+                for (int _i844 = 0; _i844 < _list842.size; ++_i844)
                 {
-                  _elem793 = new ExecutorInfo();
-                  _elem793.read(iprot);
-                  struct.executors.add(_elem793);
+                  _elem843 = new ExecutorInfo();
+                  _elem843.read(iprot);
+                  struct.executors.add(_elem843);
                 }
                 iprot.readListEnd();
               }
@@ -749,9 +749,9 @@
         oprot.writeFieldBegin(EXECUTORS_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, struct.executors.size()));
-          for (ExecutorInfo _iter795 : struct.executors)
+          for (ExecutorInfo _iter845 : struct.executors)
           {
-            _iter795.write(oprot);
+            _iter845.write(oprot);
           }
           oprot.writeListEnd();
         }
@@ -796,9 +796,9 @@
       oprot.writeString(struct.topology_id);
       {
         oprot.writeI32(struct.executors.size());
-        for (ExecutorInfo _iter796 : struct.executors)
+        for (ExecutorInfo _iter846 : struct.executors)
         {
-          _iter796.write(oprot);
+          _iter846.write(oprot);
         }
       }
       java.util.BitSet optionals = new java.util.BitSet();
@@ -829,14 +829,14 @@
       struct.topology_id = iprot.readString();
       struct.set_topology_id_isSet(true);
       {
-        org.apache.storm.thrift.protocol.TList _list797 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-        struct.executors = new java.util.ArrayList<ExecutorInfo>(_list797.size);
-        @org.apache.storm.thrift.annotation.Nullable ExecutorInfo _elem798;
-        for (int _i799 = 0; _i799 < _list797.size; ++_i799)
+        org.apache.storm.thrift.protocol.TList _list847 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+        struct.executors = new java.util.ArrayList<ExecutorInfo>(_list847.size);
+        @org.apache.storm.thrift.annotation.Nullable ExecutorInfo _elem848;
+        for (int _i849 = 0; _i849 < _list847.size; ++_i849)
         {
-          _elem798 = new ExecutorInfo();
-          _elem798.read(iprot);
-          struct.executors.add(_elem798);
+          _elem848 = new ExecutorInfo();
+          _elem848.read(iprot);
+          struct.executors.add(_elem848);
         }
       }
       struct.set_executors_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/LocalStateData.java b/storm-client/src/jvm/org/apache/storm/generated/LocalStateData.java
index 23b532a..d0b9f32 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/LocalStateData.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/LocalStateData.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class LocalStateData implements org.apache.storm.thrift.TBase<LocalStateData, LocalStateData._Fields>, java.io.Serializable, Cloneable, Comparable<LocalStateData> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("LocalStateData");
 
@@ -352,16 +352,16 @@
           case 1: // SERIALIZED_PARTS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map782 = iprot.readMapBegin();
-                struct.serialized_parts = new java.util.HashMap<java.lang.String,ThriftSerializedObject>(2*_map782.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key783;
-                @org.apache.storm.thrift.annotation.Nullable ThriftSerializedObject _val784;
-                for (int _i785 = 0; _i785 < _map782.size; ++_i785)
+                org.apache.storm.thrift.protocol.TMap _map832 = iprot.readMapBegin();
+                struct.serialized_parts = new java.util.HashMap<java.lang.String,ThriftSerializedObject>(2*_map832.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key833;
+                @org.apache.storm.thrift.annotation.Nullable ThriftSerializedObject _val834;
+                for (int _i835 = 0; _i835 < _map832.size; ++_i835)
                 {
-                  _key783 = iprot.readString();
-                  _val784 = new ThriftSerializedObject();
-                  _val784.read(iprot);
-                  struct.serialized_parts.put(_key783, _val784);
+                  _key833 = iprot.readString();
+                  _val834 = new ThriftSerializedObject();
+                  _val834.read(iprot);
+                  struct.serialized_parts.put(_key833, _val834);
                 }
                 iprot.readMapEnd();
               }
@@ -387,10 +387,10 @@
         oprot.writeFieldBegin(SERIALIZED_PARTS_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRUCT, struct.serialized_parts.size()));
-          for (java.util.Map.Entry<java.lang.String, ThriftSerializedObject> _iter786 : struct.serialized_parts.entrySet())
+          for (java.util.Map.Entry<java.lang.String, ThriftSerializedObject> _iter836 : struct.serialized_parts.entrySet())
           {
-            oprot.writeString(_iter786.getKey());
-            _iter786.getValue().write(oprot);
+            oprot.writeString(_iter836.getKey());
+            _iter836.getValue().write(oprot);
           }
           oprot.writeMapEnd();
         }
@@ -415,10 +415,10 @@
       org.apache.storm.thrift.protocol.TTupleProtocol oprot = (org.apache.storm.thrift.protocol.TTupleProtocol) prot;
       {
         oprot.writeI32(struct.serialized_parts.size());
-        for (java.util.Map.Entry<java.lang.String, ThriftSerializedObject> _iter787 : struct.serialized_parts.entrySet())
+        for (java.util.Map.Entry<java.lang.String, ThriftSerializedObject> _iter837 : struct.serialized_parts.entrySet())
         {
-          oprot.writeString(_iter787.getKey());
-          _iter787.getValue().write(oprot);
+          oprot.writeString(_iter837.getKey());
+          _iter837.getValue().write(oprot);
         }
       }
     }
@@ -427,16 +427,16 @@
     public void read(org.apache.storm.thrift.protocol.TProtocol prot, LocalStateData struct) throws org.apache.storm.thrift.TException {
       org.apache.storm.thrift.protocol.TTupleProtocol iprot = (org.apache.storm.thrift.protocol.TTupleProtocol) prot;
       {
-        org.apache.storm.thrift.protocol.TMap _map788 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-        struct.serialized_parts = new java.util.HashMap<java.lang.String,ThriftSerializedObject>(2*_map788.size);
-        @org.apache.storm.thrift.annotation.Nullable java.lang.String _key789;
-        @org.apache.storm.thrift.annotation.Nullable ThriftSerializedObject _val790;
-        for (int _i791 = 0; _i791 < _map788.size; ++_i791)
+        org.apache.storm.thrift.protocol.TMap _map838 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+        struct.serialized_parts = new java.util.HashMap<java.lang.String,ThriftSerializedObject>(2*_map838.size);
+        @org.apache.storm.thrift.annotation.Nullable java.lang.String _key839;
+        @org.apache.storm.thrift.annotation.Nullable ThriftSerializedObject _val840;
+        for (int _i841 = 0; _i841 < _map838.size; ++_i841)
         {
-          _key789 = iprot.readString();
-          _val790 = new ThriftSerializedObject();
-          _val790.read(iprot);
-          struct.serialized_parts.put(_key789, _val790);
+          _key839 = iprot.readString();
+          _val840 = new ThriftSerializedObject();
+          _val840.read(iprot);
+          struct.serialized_parts.put(_key839, _val840);
         }
       }
       struct.set_serialized_parts_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/LogConfig.java b/storm-client/src/jvm/org/apache/storm/generated/LogConfig.java
index 1b6e3b2..3759773 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/LogConfig.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/LogConfig.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class LogConfig implements org.apache.storm.thrift.TBase<LogConfig, LogConfig._Fields>, java.io.Serializable, Cloneable, Comparable<LogConfig> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("LogConfig");
 
@@ -344,16 +344,16 @@
           case 2: // NAMED_LOGGER_LEVEL
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map852 = iprot.readMapBegin();
-                struct.named_logger_level = new java.util.HashMap<java.lang.String,LogLevel>(2*_map852.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key853;
-                @org.apache.storm.thrift.annotation.Nullable LogLevel _val854;
-                for (int _i855 = 0; _i855 < _map852.size; ++_i855)
+                org.apache.storm.thrift.protocol.TMap _map902 = iprot.readMapBegin();
+                struct.named_logger_level = new java.util.HashMap<java.lang.String,LogLevel>(2*_map902.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key903;
+                @org.apache.storm.thrift.annotation.Nullable LogLevel _val904;
+                for (int _i905 = 0; _i905 < _map902.size; ++_i905)
                 {
-                  _key853 = iprot.readString();
-                  _val854 = new LogLevel();
-                  _val854.read(iprot);
-                  struct.named_logger_level.put(_key853, _val854);
+                  _key903 = iprot.readString();
+                  _val904 = new LogLevel();
+                  _val904.read(iprot);
+                  struct.named_logger_level.put(_key903, _val904);
                 }
                 iprot.readMapEnd();
               }
@@ -380,10 +380,10 @@
           oprot.writeFieldBegin(NAMED_LOGGER_LEVEL_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRUCT, struct.named_logger_level.size()));
-            for (java.util.Map.Entry<java.lang.String, LogLevel> _iter856 : struct.named_logger_level.entrySet())
+            for (java.util.Map.Entry<java.lang.String, LogLevel> _iter906 : struct.named_logger_level.entrySet())
             {
-              oprot.writeString(_iter856.getKey());
-              _iter856.getValue().write(oprot);
+              oprot.writeString(_iter906.getKey());
+              _iter906.getValue().write(oprot);
             }
             oprot.writeMapEnd();
           }
@@ -415,10 +415,10 @@
       if (struct.is_set_named_logger_level()) {
         {
           oprot.writeI32(struct.named_logger_level.size());
-          for (java.util.Map.Entry<java.lang.String, LogLevel> _iter857 : struct.named_logger_level.entrySet())
+          for (java.util.Map.Entry<java.lang.String, LogLevel> _iter907 : struct.named_logger_level.entrySet())
           {
-            oprot.writeString(_iter857.getKey());
-            _iter857.getValue().write(oprot);
+            oprot.writeString(_iter907.getKey());
+            _iter907.getValue().write(oprot);
           }
         }
       }
@@ -430,16 +430,16 @@
       java.util.BitSet incoming = iprot.readBitSet(1);
       if (incoming.get(0)) {
         {
-          org.apache.storm.thrift.protocol.TMap _map858 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.named_logger_level = new java.util.HashMap<java.lang.String,LogLevel>(2*_map858.size);
-          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key859;
-          @org.apache.storm.thrift.annotation.Nullable LogLevel _val860;
-          for (int _i861 = 0; _i861 < _map858.size; ++_i861)
+          org.apache.storm.thrift.protocol.TMap _map908 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.named_logger_level = new java.util.HashMap<java.lang.String,LogLevel>(2*_map908.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key909;
+          @org.apache.storm.thrift.annotation.Nullable LogLevel _val910;
+          for (int _i911 = 0; _i911 < _map908.size; ++_i911)
           {
-            _key859 = iprot.readString();
-            _val860 = new LogLevel();
-            _val860.read(iprot);
-            struct.named_logger_level.put(_key859, _val860);
+            _key909 = iprot.readString();
+            _val910 = new LogLevel();
+            _val910.read(iprot);
+            struct.named_logger_level.put(_key909, _val910);
           }
         }
         struct.set_named_logger_level_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/LogLevel.java b/storm-client/src/jvm/org/apache/storm/generated/LogLevel.java
index 7879c73..7b98d74 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/LogLevel.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/LogLevel.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class LogLevel implements org.apache.storm.thrift.TBase<LogLevel, LogLevel._Fields>, java.io.Serializable, Cloneable, Comparable<LogLevel> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("LogLevel");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/LogLevelAction.java b/storm-client/src/jvm/org/apache/storm/generated/LogLevelAction.java
index 887d6ae..2ec322c 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/LogLevelAction.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/LogLevelAction.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public enum LogLevelAction implements org.apache.storm.thrift.TEnum {
   UNCHANGED(1),
   UPDATE(2),
diff --git a/storm-client/src/jvm/org/apache/storm/generated/Nimbus.java b/storm-client/src/jvm/org/apache/storm/generated/Nimbus.java
index d6093a0..8a799c8 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/Nimbus.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/Nimbus.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class Nimbus {
 
   public interface Iface {
@@ -19626,14 +19626,14 @@
             case 0: // SUCCESS
               if (schemeField.type == org.apache.storm.thrift.protocol.TType.LIST) {
                 {
-                  org.apache.storm.thrift.protocol.TList _list920 = iprot.readListBegin();
-                  struct.success = new java.util.ArrayList<ProfileRequest>(_list920.size);
-                  @org.apache.storm.thrift.annotation.Nullable ProfileRequest _elem921;
-                  for (int _i922 = 0; _i922 < _list920.size; ++_i922)
+                  org.apache.storm.thrift.protocol.TList _list970 = iprot.readListBegin();
+                  struct.success = new java.util.ArrayList<ProfileRequest>(_list970.size);
+                  @org.apache.storm.thrift.annotation.Nullable ProfileRequest _elem971;
+                  for (int _i972 = 0; _i972 < _list970.size; ++_i972)
                   {
-                    _elem921 = new ProfileRequest();
-                    _elem921.read(iprot);
-                    struct.success.add(_elem921);
+                    _elem971 = new ProfileRequest();
+                    _elem971.read(iprot);
+                    struct.success.add(_elem971);
                   }
                   iprot.readListEnd();
                 }
@@ -19659,9 +19659,9 @@
           oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, struct.success.size()));
-            for (ProfileRequest _iter923 : struct.success)
+            for (ProfileRequest _iter973 : struct.success)
             {
-              _iter923.write(oprot);
+              _iter973.write(oprot);
             }
             oprot.writeListEnd();
           }
@@ -19692,9 +19692,9 @@
         if (struct.is_set_success()) {
           {
             oprot.writeI32(struct.success.size());
-            for (ProfileRequest _iter924 : struct.success)
+            for (ProfileRequest _iter974 : struct.success)
             {
-              _iter924.write(oprot);
+              _iter974.write(oprot);
             }
           }
         }
@@ -19706,14 +19706,14 @@
         java.util.BitSet incoming = iprot.readBitSet(1);
         if (incoming.get(0)) {
           {
-            org.apache.storm.thrift.protocol.TList _list925 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-            struct.success = new java.util.ArrayList<ProfileRequest>(_list925.size);
-            @org.apache.storm.thrift.annotation.Nullable ProfileRequest _elem926;
-            for (int _i927 = 0; _i927 < _list925.size; ++_i927)
+            org.apache.storm.thrift.protocol.TList _list975 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+            struct.success = new java.util.ArrayList<ProfileRequest>(_list975.size);
+            @org.apache.storm.thrift.annotation.Nullable ProfileRequest _elem976;
+            for (int _i977 = 0; _i977 < _list975.size; ++_i977)
             {
-              _elem926 = new ProfileRequest();
-              _elem926.read(iprot);
-              struct.success.add(_elem926);
+              _elem976 = new ProfileRequest();
+              _elem976.read(iprot);
+              struct.success.add(_elem976);
             }
           }
           struct.set_success_isSet(true);
@@ -49148,14 +49148,14 @@
             case 0: // SUCCESS
               if (schemeField.type == org.apache.storm.thrift.protocol.TType.LIST) {
                 {
-                  org.apache.storm.thrift.protocol.TList _list928 = iprot.readListBegin();
-                  struct.success = new java.util.ArrayList<OwnerResourceSummary>(_list928.size);
-                  @org.apache.storm.thrift.annotation.Nullable OwnerResourceSummary _elem929;
-                  for (int _i930 = 0; _i930 < _list928.size; ++_i930)
+                  org.apache.storm.thrift.protocol.TList _list978 = iprot.readListBegin();
+                  struct.success = new java.util.ArrayList<OwnerResourceSummary>(_list978.size);
+                  @org.apache.storm.thrift.annotation.Nullable OwnerResourceSummary _elem979;
+                  for (int _i980 = 0; _i980 < _list978.size; ++_i980)
                   {
-                    _elem929 = new OwnerResourceSummary();
-                    _elem929.read(iprot);
-                    struct.success.add(_elem929);
+                    _elem979 = new OwnerResourceSummary();
+                    _elem979.read(iprot);
+                    struct.success.add(_elem979);
                   }
                   iprot.readListEnd();
                 }
@@ -49190,9 +49190,9 @@
           oprot.writeFieldBegin(SUCCESS_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, struct.success.size()));
-            for (OwnerResourceSummary _iter931 : struct.success)
+            for (OwnerResourceSummary _iter981 : struct.success)
             {
-              _iter931.write(oprot);
+              _iter981.write(oprot);
             }
             oprot.writeListEnd();
           }
@@ -49231,9 +49231,9 @@
         if (struct.is_set_success()) {
           {
             oprot.writeI32(struct.success.size());
-            for (OwnerResourceSummary _iter932 : struct.success)
+            for (OwnerResourceSummary _iter982 : struct.success)
             {
-              _iter932.write(oprot);
+              _iter982.write(oprot);
             }
           }
         }
@@ -49248,14 +49248,14 @@
         java.util.BitSet incoming = iprot.readBitSet(2);
         if (incoming.get(0)) {
           {
-            org.apache.storm.thrift.protocol.TList _list933 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-            struct.success = new java.util.ArrayList<OwnerResourceSummary>(_list933.size);
-            @org.apache.storm.thrift.annotation.Nullable OwnerResourceSummary _elem934;
-            for (int _i935 = 0; _i935 < _list933.size; ++_i935)
+            org.apache.storm.thrift.protocol.TList _list983 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+            struct.success = new java.util.ArrayList<OwnerResourceSummary>(_list983.size);
+            @org.apache.storm.thrift.annotation.Nullable OwnerResourceSummary _elem984;
+            for (int _i985 = 0; _i985 < _list983.size; ++_i985)
             {
-              _elem934 = new OwnerResourceSummary();
-              _elem934.read(iprot);
-              struct.success.add(_elem934);
+              _elem984 = new OwnerResourceSummary();
+              _elem984.read(iprot);
+              struct.success.add(_elem984);
             }
           }
           struct.set_success_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/NimbusSummary.java b/storm-client/src/jvm/org/apache/storm/generated/NimbusSummary.java
index 1adbb60..9985fbb 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/NimbusSummary.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/NimbusSummary.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class NimbusSummary implements org.apache.storm.thrift.TBase<NimbusSummary, NimbusSummary._Fields>, java.io.Serializable, Cloneable, Comparable<NimbusSummary> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("NimbusSummary");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/NodeInfo.java b/storm-client/src/jvm/org/apache/storm/generated/NodeInfo.java
index 863c162..0e35e9e 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/NodeInfo.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/NodeInfo.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class NodeInfo implements org.apache.storm.thrift.TBase<NodeInfo, NodeInfo._Fields>, java.io.Serializable, Cloneable, Comparable<NodeInfo> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("NodeInfo");
 
@@ -438,13 +438,13 @@
           case 2: // PORT
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.SET) {
               {
-                org.apache.storm.thrift.protocol.TSet _set658 = iprot.readSetBegin();
-                struct.port = new java.util.HashSet<java.lang.Long>(2*_set658.size);
-                long _elem659;
-                for (int _i660 = 0; _i660 < _set658.size; ++_i660)
+                org.apache.storm.thrift.protocol.TSet _set708 = iprot.readSetBegin();
+                struct.port = new java.util.HashSet<java.lang.Long>(2*_set708.size);
+                long _elem709;
+                for (int _i710 = 0; _i710 < _set708.size; ++_i710)
                 {
-                  _elem659 = iprot.readI64();
-                  struct.port.add(_elem659);
+                  _elem709 = iprot.readI64();
+                  struct.port.add(_elem709);
                 }
                 iprot.readSetEnd();
               }
@@ -475,9 +475,9 @@
         oprot.writeFieldBegin(PORT_FIELD_DESC);
         {
           oprot.writeSetBegin(new org.apache.storm.thrift.protocol.TSet(org.apache.storm.thrift.protocol.TType.I64, struct.port.size()));
-          for (long _iter661 : struct.port)
+          for (long _iter711 : struct.port)
           {
-            oprot.writeI64(_iter661);
+            oprot.writeI64(_iter711);
           }
           oprot.writeSetEnd();
         }
@@ -503,9 +503,9 @@
       oprot.writeString(struct.node);
       {
         oprot.writeI32(struct.port.size());
-        for (long _iter662 : struct.port)
+        for (long _iter712 : struct.port)
         {
-          oprot.writeI64(_iter662);
+          oprot.writeI64(_iter712);
         }
       }
     }
@@ -516,13 +516,13 @@
       struct.node = iprot.readString();
       struct.set_node_isSet(true);
       {
-        org.apache.storm.thrift.protocol.TSet _set663 = new org.apache.storm.thrift.protocol.TSet(org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
-        struct.port = new java.util.HashSet<java.lang.Long>(2*_set663.size);
-        long _elem664;
-        for (int _i665 = 0; _i665 < _set663.size; ++_i665)
+        org.apache.storm.thrift.protocol.TSet _set713 = new org.apache.storm.thrift.protocol.TSet(org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
+        struct.port = new java.util.HashSet<java.lang.Long>(2*_set713.size);
+        long _elem714;
+        for (int _i715 = 0; _i715 < _set713.size; ++_i715)
         {
-          _elem664 = iprot.readI64();
-          struct.port.add(_elem664);
+          _elem714 = iprot.readI64();
+          struct.port.add(_elem714);
         }
       }
       struct.set_port_isSet(true);
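The `NodeInfo` hunks above are pure variable renumbering (`_set663` → `_set713`, etc.) from re-running the Thrift 0.13.0 compiler; the logic is unchanged. For readers unfamiliar with the generated `TupleScheme` code, the pattern it implements for the `port` set is: write the element count as an i32, then each element as an i64, and reverse that on read. A minimal self-contained sketch of that round-trip (using plain `DataOutputStream`/`DataInputStream` as a stand-in for the Thrift protocol objects — `TupleSchemeSketch`, `writePorts`, and `readPorts` are hypothetical names, not part of the Storm codebase):

```java
import java.io.*;
import java.util.*;

public class TupleSchemeSketch {
    // Mirrors the generated write path: size first, then each i64 element.
    static byte[] writePorts(Set<Long> ports) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(ports.size());      // analogous to oprot.writeI32(struct.port.size())
        for (long p : ports) {           // analogous to: for (long _iter711 : struct.port)
            out.writeLong(p);            // analogous to oprot.writeI64(_iter711)
        }
        return buf.toByteArray();
    }

    // Mirrors the generated read path: read size, pre-size the set, loop.
    static Set<Long> readPorts(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        int size = in.readInt();                     // analogous to _set713.size
        Set<Long> ports = new HashSet<>(2 * size);   // same 2*size pre-sizing as the generated code
        for (int i = 0; i < size; i++) {             // analogous to: for (int _i715 = 0; ...)
            ports.add(in.readLong());                // analogous to _elem714 = iprot.readI64()
        }
        return ports;
    }

    public static void main(String[] args) throws IOException {
        Set<Long> ports = new HashSet<>(Arrays.asList(6700L, 6701L, 6702L));
        System.out.println(readPorts(writePorts(ports)).equals(ports)); // true
    }
}
```

Because the wire format is defined entirely by this size-then-elements layout, renaming the loop temporaries between compiler versions cannot affect compatibility — which is why these hunks are safe to merge without protocol review.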
diff --git a/storm-client/src/jvm/org/apache/storm/generated/NotAliveException.java b/storm-client/src/jvm/org/apache/storm/generated/NotAliveException.java
index 72fbf71..5a29703 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/NotAliveException.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/NotAliveException.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class NotAliveException extends org.apache.storm.thrift.TException implements org.apache.storm.thrift.TBase<NotAliveException, NotAliveException._Fields>, java.io.Serializable, Cloneable, Comparable<NotAliveException> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("NotAliveException");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/NullStruct.java b/storm-client/src/jvm/org/apache/storm/generated/NullStruct.java
index e1b4d5d..cea8e31 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/NullStruct.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/NullStruct.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class NullStruct implements org.apache.storm.thrift.TBase<NullStruct, NullStruct._Fields>, java.io.Serializable, Cloneable, Comparable<NullStruct> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("NullStruct");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/NumErrorsChoice.java b/storm-client/src/jvm/org/apache/storm/generated/NumErrorsChoice.java
index ace4920..7140e32 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/NumErrorsChoice.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/NumErrorsChoice.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public enum NumErrorsChoice implements org.apache.storm.thrift.TEnum {
   ALL(0),
   NONE(1),
diff --git a/storm-client/src/jvm/org/apache/storm/generated/OwnerResourceSummary.java b/storm-client/src/jvm/org/apache/storm/generated/OwnerResourceSummary.java
index 63c1c15..474a7bc 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/OwnerResourceSummary.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/OwnerResourceSummary.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class OwnerResourceSummary implements org.apache.storm.thrift.TBase<OwnerResourceSummary, OwnerResourceSummary._Fields>, java.io.Serializable, Cloneable, Comparable<OwnerResourceSummary> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("OwnerResourceSummary");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/PrivateWorkerKey.java b/storm-client/src/jvm/org/apache/storm/generated/PrivateWorkerKey.java
index 3895c7c..4a96981 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/PrivateWorkerKey.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/PrivateWorkerKey.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class PrivateWorkerKey implements org.apache.storm.thrift.TBase<PrivateWorkerKey, PrivateWorkerKey._Fields>, java.io.Serializable, Cloneable, Comparable<PrivateWorkerKey> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("PrivateWorkerKey");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/ProfileAction.java b/storm-client/src/jvm/org/apache/storm/generated/ProfileAction.java
index 7253cdb..40f9892 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/ProfileAction.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/ProfileAction.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public enum ProfileAction implements org.apache.storm.thrift.TEnum {
   JPROFILE_STOP(0),
   JPROFILE_START(1),
diff --git a/storm-client/src/jvm/org/apache/storm/generated/ProfileRequest.java b/storm-client/src/jvm/org/apache/storm/generated/ProfileRequest.java
index ac0ec79..dfc0c7f 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/ProfileRequest.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/ProfileRequest.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class ProfileRequest implements org.apache.storm.thrift.TBase<ProfileRequest, ProfileRequest._Fields>, java.io.Serializable, Cloneable, Comparable<ProfileRequest> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("ProfileRequest");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/ReadableBlobMeta.java b/storm-client/src/jvm/org/apache/storm/generated/ReadableBlobMeta.java
index dfd3d33..14d5844 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/ReadableBlobMeta.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/ReadableBlobMeta.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class ReadableBlobMeta implements org.apache.storm.thrift.TBase<ReadableBlobMeta, ReadableBlobMeta._Fields>, java.io.Serializable, Cloneable, Comparable<ReadableBlobMeta> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("ReadableBlobMeta");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/RebalanceOptions.java b/storm-client/src/jvm/org/apache/storm/generated/RebalanceOptions.java
index 6cb752d..1af7c06 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/RebalanceOptions.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/RebalanceOptions.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class RebalanceOptions implements org.apache.storm.thrift.TBase<RebalanceOptions, RebalanceOptions._Fields>, java.io.Serializable, Cloneable, Comparable<RebalanceOptions> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("RebalanceOptions");
 
@@ -773,15 +773,15 @@
           case 3: // NUM_EXECUTORS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map566 = iprot.readMapBegin();
-                struct.num_executors = new java.util.HashMap<java.lang.String,java.lang.Integer>(2*_map566.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key567;
-                int _val568;
-                for (int _i569 = 0; _i569 < _map566.size; ++_i569)
+                org.apache.storm.thrift.protocol.TMap _map616 = iprot.readMapBegin();
+                struct.num_executors = new java.util.HashMap<java.lang.String,java.lang.Integer>(2*_map616.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key617;
+                int _val618;
+                for (int _i619 = 0; _i619 < _map616.size; ++_i619)
                 {
-                  _key567 = iprot.readString();
-                  _val568 = iprot.readI32();
-                  struct.num_executors.put(_key567, _val568);
+                  _key617 = iprot.readString();
+                  _val618 = iprot.readI32();
+                  struct.num_executors.put(_key617, _val618);
                 }
                 iprot.readMapEnd();
               }
@@ -793,27 +793,27 @@
           case 4: // TOPOLOGY_RESOURCES_OVERRIDES
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map570 = iprot.readMapBegin();
-                struct.topology_resources_overrides = new java.util.HashMap<java.lang.String,java.util.Map<java.lang.String,java.lang.Double>>(2*_map570.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key571;
-                @org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Double> _val572;
-                for (int _i573 = 0; _i573 < _map570.size; ++_i573)
+                org.apache.storm.thrift.protocol.TMap _map620 = iprot.readMapBegin();
+                struct.topology_resources_overrides = new java.util.HashMap<java.lang.String,java.util.Map<java.lang.String,java.lang.Double>>(2*_map620.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key621;
+                @org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Double> _val622;
+                for (int _i623 = 0; _i623 < _map620.size; ++_i623)
                 {
-                  _key571 = iprot.readString();
+                  _key621 = iprot.readString();
                   {
-                    org.apache.storm.thrift.protocol.TMap _map574 = iprot.readMapBegin();
-                    _val572 = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map574.size);
-                    @org.apache.storm.thrift.annotation.Nullable java.lang.String _key575;
-                    double _val576;
-                    for (int _i577 = 0; _i577 < _map574.size; ++_i577)
+                    org.apache.storm.thrift.protocol.TMap _map624 = iprot.readMapBegin();
+                    _val622 = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map624.size);
+                    @org.apache.storm.thrift.annotation.Nullable java.lang.String _key625;
+                    double _val626;
+                    for (int _i627 = 0; _i627 < _map624.size; ++_i627)
                     {
-                      _key575 = iprot.readString();
-                      _val576 = iprot.readDouble();
-                      _val572.put(_key575, _val576);
+                      _key625 = iprot.readString();
+                      _val626 = iprot.readDouble();
+                      _val622.put(_key625, _val626);
                     }
                     iprot.readMapEnd();
                   }
-                  struct.topology_resources_overrides.put(_key571, _val572);
+                  struct.topology_resources_overrides.put(_key621, _val622);
                 }
                 iprot.readMapEnd();
               }
@@ -866,10 +866,10 @@
           oprot.writeFieldBegin(NUM_EXECUTORS_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I32, struct.num_executors.size()));
-            for (java.util.Map.Entry<java.lang.String, java.lang.Integer> _iter578 : struct.num_executors.entrySet())
+            for (java.util.Map.Entry<java.lang.String, java.lang.Integer> _iter628 : struct.num_executors.entrySet())
             {
-              oprot.writeString(_iter578.getKey());
-              oprot.writeI32(_iter578.getValue());
+              oprot.writeString(_iter628.getKey());
+              oprot.writeI32(_iter628.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -881,15 +881,15 @@
           oprot.writeFieldBegin(TOPOLOGY_RESOURCES_OVERRIDES_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, struct.topology_resources_overrides.size()));
-            for (java.util.Map.Entry<java.lang.String, java.util.Map<java.lang.String,java.lang.Double>> _iter579 : struct.topology_resources_overrides.entrySet())
+            for (java.util.Map.Entry<java.lang.String, java.util.Map<java.lang.String,java.lang.Double>> _iter629 : struct.topology_resources_overrides.entrySet())
             {
-              oprot.writeString(_iter579.getKey());
+              oprot.writeString(_iter629.getKey());
               {
-                oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, _iter579.getValue().size()));
-                for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter580 : _iter579.getValue().entrySet())
+                oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, _iter629.getValue().size()));
+                for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter630 : _iter629.getValue().entrySet())
                 {
-                  oprot.writeString(_iter580.getKey());
-                  oprot.writeDouble(_iter580.getValue());
+                  oprot.writeString(_iter630.getKey());
+                  oprot.writeDouble(_iter630.getValue());
                 }
                 oprot.writeMapEnd();
               }
@@ -959,25 +959,25 @@
       if (struct.is_set_num_executors()) {
         {
           oprot.writeI32(struct.num_executors.size());
-          for (java.util.Map.Entry<java.lang.String, java.lang.Integer> _iter581 : struct.num_executors.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.lang.Integer> _iter631 : struct.num_executors.entrySet())
           {
-            oprot.writeString(_iter581.getKey());
-            oprot.writeI32(_iter581.getValue());
+            oprot.writeString(_iter631.getKey());
+            oprot.writeI32(_iter631.getValue());
           }
         }
       }
       if (struct.is_set_topology_resources_overrides()) {
         {
           oprot.writeI32(struct.topology_resources_overrides.size());
-          for (java.util.Map.Entry<java.lang.String, java.util.Map<java.lang.String,java.lang.Double>> _iter582 : struct.topology_resources_overrides.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.util.Map<java.lang.String,java.lang.Double>> _iter632 : struct.topology_resources_overrides.entrySet())
           {
-            oprot.writeString(_iter582.getKey());
+            oprot.writeString(_iter632.getKey());
             {
-              oprot.writeI32(_iter582.getValue().size());
-              for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter583 : _iter582.getValue().entrySet())
+              oprot.writeI32(_iter632.getValue().size());
+              for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter633 : _iter632.getValue().entrySet())
               {
-                oprot.writeString(_iter583.getKey());
-                oprot.writeDouble(_iter583.getValue());
+                oprot.writeString(_iter633.getKey());
+                oprot.writeDouble(_iter633.getValue());
               }
             }
           }
@@ -1005,41 +1005,41 @@
       }
       if (incoming.get(2)) {
         {
-          org.apache.storm.thrift.protocol.TMap _map584 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I32, iprot.readI32());
-          struct.num_executors = new java.util.HashMap<java.lang.String,java.lang.Integer>(2*_map584.size);
-          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key585;
-          int _val586;
-          for (int _i587 = 0; _i587 < _map584.size; ++_i587)
+          org.apache.storm.thrift.protocol.TMap _map634 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I32, iprot.readI32());
+          struct.num_executors = new java.util.HashMap<java.lang.String,java.lang.Integer>(2*_map634.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key635;
+          int _val636;
+          for (int _i637 = 0; _i637 < _map634.size; ++_i637)
           {
-            _key585 = iprot.readString();
-            _val586 = iprot.readI32();
-            struct.num_executors.put(_key585, _val586);
+            _key635 = iprot.readString();
+            _val636 = iprot.readI32();
+            struct.num_executors.put(_key635, _val636);
           }
         }
         struct.set_num_executors_isSet(true);
       }
       if (incoming.get(3)) {
         {
-          org.apache.storm.thrift.protocol.TMap _map588 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, iprot.readI32());
-          struct.topology_resources_overrides = new java.util.HashMap<java.lang.String,java.util.Map<java.lang.String,java.lang.Double>>(2*_map588.size);
-          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key589;
-          @org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Double> _val590;
-          for (int _i591 = 0; _i591 < _map588.size; ++_i591)
+          org.apache.storm.thrift.protocol.TMap _map638 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, iprot.readI32());
+          struct.topology_resources_overrides = new java.util.HashMap<java.lang.String,java.util.Map<java.lang.String,java.lang.Double>>(2*_map638.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key639;
+          @org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Double> _val640;
+          for (int _i641 = 0; _i641 < _map638.size; ++_i641)
           {
-            _key589 = iprot.readString();
+            _key639 = iprot.readString();
             {
-              org.apache.storm.thrift.protocol.TMap _map592 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, iprot.readI32());
-              _val590 = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map592.size);
-              @org.apache.storm.thrift.annotation.Nullable java.lang.String _key593;
-              double _val594;
-              for (int _i595 = 0; _i595 < _map592.size; ++_i595)
+              org.apache.storm.thrift.protocol.TMap _map642 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, iprot.readI32());
+              _val640 = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map642.size);
+              @org.apache.storm.thrift.annotation.Nullable java.lang.String _key643;
+              double _val644;
+              for (int _i645 = 0; _i645 < _map642.size; ++_i645)
               {
-                _key593 = iprot.readString();
-                _val594 = iprot.readDouble();
-                _val590.put(_key593, _val594);
+                _key643 = iprot.readString();
+                _val644 = iprot.readDouble();
+                _val640.put(_key643, _val644);
               }
             }
-            struct.topology_resources_overrides.put(_key589, _val590);
+            struct.topology_resources_overrides.put(_key639, _val640);
           }
         }
         struct.set_topology_resources_overrides_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/SettableBlobMeta.java b/storm-client/src/jvm/org/apache/storm/generated/SettableBlobMeta.java
index c33548e..0fb96ab 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/SettableBlobMeta.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/SettableBlobMeta.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class SettableBlobMeta implements org.apache.storm.thrift.TBase<SettableBlobMeta, SettableBlobMeta._Fields>, java.io.Serializable, Cloneable, Comparable<SettableBlobMeta> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("SettableBlobMeta");
 
@@ -428,14 +428,14 @@
           case 1: // ACL
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.LIST) {
               {
-                org.apache.storm.thrift.protocol.TList _list606 = iprot.readListBegin();
-                struct.acl = new java.util.ArrayList<AccessControl>(_list606.size);
-                @org.apache.storm.thrift.annotation.Nullable AccessControl _elem607;
-                for (int _i608 = 0; _i608 < _list606.size; ++_i608)
+                org.apache.storm.thrift.protocol.TList _list656 = iprot.readListBegin();
+                struct.acl = new java.util.ArrayList<AccessControl>(_list656.size);
+                @org.apache.storm.thrift.annotation.Nullable AccessControl _elem657;
+                for (int _i658 = 0; _i658 < _list656.size; ++_i658)
                 {
-                  _elem607 = new AccessControl();
-                  _elem607.read(iprot);
-                  struct.acl.add(_elem607);
+                  _elem657 = new AccessControl();
+                  _elem657.read(iprot);
+                  struct.acl.add(_elem657);
                 }
                 iprot.readListEnd();
               }
@@ -469,9 +469,9 @@
         oprot.writeFieldBegin(ACL_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, struct.acl.size()));
-          for (AccessControl _iter609 : struct.acl)
+          for (AccessControl _iter659 : struct.acl)
           {
-            _iter609.write(oprot);
+            _iter659.write(oprot);
           }
           oprot.writeListEnd();
         }
@@ -501,9 +501,9 @@
       org.apache.storm.thrift.protocol.TTupleProtocol oprot = (org.apache.storm.thrift.protocol.TTupleProtocol) prot;
       {
         oprot.writeI32(struct.acl.size());
-        for (AccessControl _iter610 : struct.acl)
+        for (AccessControl _iter660 : struct.acl)
         {
-          _iter610.write(oprot);
+          _iter660.write(oprot);
         }
       }
       java.util.BitSet optionals = new java.util.BitSet();
@@ -520,14 +520,14 @@
     public void read(org.apache.storm.thrift.protocol.TProtocol prot, SettableBlobMeta struct) throws org.apache.storm.thrift.TException {
       org.apache.storm.thrift.protocol.TTupleProtocol iprot = (org.apache.storm.thrift.protocol.TTupleProtocol) prot;
       {
-        org.apache.storm.thrift.protocol.TList _list611 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-        struct.acl = new java.util.ArrayList<AccessControl>(_list611.size);
-        @org.apache.storm.thrift.annotation.Nullable AccessControl _elem612;
-        for (int _i613 = 0; _i613 < _list611.size; ++_i613)
+        org.apache.storm.thrift.protocol.TList _list661 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+        struct.acl = new java.util.ArrayList<AccessControl>(_list661.size);
+        @org.apache.storm.thrift.annotation.Nullable AccessControl _elem662;
+        for (int _i663 = 0; _i663 < _list661.size; ++_i663)
         {
-          _elem612 = new AccessControl();
-          _elem612.read(iprot);
-          struct.acl.add(_elem612);
+          _elem662 = new AccessControl();
+          _elem662.read(iprot);
+          struct.acl.add(_elem662);
         }
       }
       struct.set_acl_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/SharedMemory.java b/storm-client/src/jvm/org/apache/storm/generated/SharedMemory.java
index 632f1ca..1cd1d32 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/SharedMemory.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/SharedMemory.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class SharedMemory implements org.apache.storm.thrift.TBase<SharedMemory, SharedMemory._Fields>, java.io.Serializable, Cloneable, Comparable<SharedMemory> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("SharedMemory");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/ShellComponent.java b/storm-client/src/jvm/org/apache/storm/generated/ShellComponent.java
index c58828d..a5b7c8a 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/ShellComponent.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/ShellComponent.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class ShellComponent implements org.apache.storm.thrift.TBase<ShellComponent, ShellComponent._Fields>, java.io.Serializable, Cloneable, Comparable<ShellComponent> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("ShellComponent");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/SpecificAggregateStats.java b/storm-client/src/jvm/org/apache/storm/generated/SpecificAggregateStats.java
index a58a286..4f2cb5e 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/SpecificAggregateStats.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/SpecificAggregateStats.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class SpecificAggregateStats extends org.apache.storm.thrift.TUnion<SpecificAggregateStats, SpecificAggregateStats._Fields> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("SpecificAggregateStats");
   private static final org.apache.storm.thrift.protocol.TField BOLT_FIELD_DESC = new org.apache.storm.thrift.protocol.TField("bolt", org.apache.storm.thrift.protocol.TType.STRUCT, (short)1);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/SpoutAggregateStats.java b/storm-client/src/jvm/org/apache/storm/generated/SpoutAggregateStats.java
index a8fd8cc..c93f5f4 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/SpoutAggregateStats.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/SpoutAggregateStats.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class SpoutAggregateStats implements org.apache.storm.thrift.TBase<SpoutAggregateStats, SpoutAggregateStats._Fields>, java.io.Serializable, Cloneable, Comparable<SpoutAggregateStats> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("SpoutAggregateStats");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/SpoutSpec.java b/storm-client/src/jvm/org/apache/storm/generated/SpoutSpec.java
index b80a07d..1cf7965 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/SpoutSpec.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/SpoutSpec.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class SpoutSpec implements org.apache.storm.thrift.TBase<SpoutSpec, SpoutSpec._Fields>, java.io.Serializable, Cloneable, Comparable<SpoutSpec> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("SpoutSpec");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/SpoutStats.java b/storm-client/src/jvm/org/apache/storm/generated/SpoutStats.java
index 14e3844..560d9d5 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/SpoutStats.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/SpoutStats.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class SpoutStats implements org.apache.storm.thrift.TBase<SpoutStats, SpoutStats._Fields>, java.io.Serializable, Cloneable, Comparable<SpoutStats> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("SpoutStats");
 
@@ -578,27 +578,27 @@
           case 1: // ACKED
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map260 = iprot.readMapBegin();
-                struct.acked = new java.util.HashMap<java.lang.String,java.util.Map<java.lang.String,java.lang.Long>>(2*_map260.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key261;
-                @org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Long> _val262;
-                for (int _i263 = 0; _i263 < _map260.size; ++_i263)
+                org.apache.storm.thrift.protocol.TMap _map290 = iprot.readMapBegin();
+                struct.acked = new java.util.HashMap<java.lang.String,java.util.Map<java.lang.String,java.lang.Long>>(2*_map290.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key291;
+                @org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Long> _val292;
+                for (int _i293 = 0; _i293 < _map290.size; ++_i293)
                 {
-                  _key261 = iprot.readString();
+                  _key291 = iprot.readString();
                   {
-                    org.apache.storm.thrift.protocol.TMap _map264 = iprot.readMapBegin();
-                    _val262 = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map264.size);
-                    @org.apache.storm.thrift.annotation.Nullable java.lang.String _key265;
-                    long _val266;
-                    for (int _i267 = 0; _i267 < _map264.size; ++_i267)
+                    org.apache.storm.thrift.protocol.TMap _map294 = iprot.readMapBegin();
+                    _val292 = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map294.size);
+                    @org.apache.storm.thrift.annotation.Nullable java.lang.String _key295;
+                    long _val296;
+                    for (int _i297 = 0; _i297 < _map294.size; ++_i297)
                     {
-                      _key265 = iprot.readString();
-                      _val266 = iprot.readI64();
-                      _val262.put(_key265, _val266);
+                      _key295 = iprot.readString();
+                      _val296 = iprot.readI64();
+                      _val292.put(_key295, _val296);
                     }
                     iprot.readMapEnd();
                   }
-                  struct.acked.put(_key261, _val262);
+                  struct.acked.put(_key291, _val292);
                 }
                 iprot.readMapEnd();
               }
@@ -610,27 +610,27 @@
           case 2: // FAILED
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map268 = iprot.readMapBegin();
-                struct.failed = new java.util.HashMap<java.lang.String,java.util.Map<java.lang.String,java.lang.Long>>(2*_map268.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key269;
-                @org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Long> _val270;
-                for (int _i271 = 0; _i271 < _map268.size; ++_i271)
+                org.apache.storm.thrift.protocol.TMap _map298 = iprot.readMapBegin();
+                struct.failed = new java.util.HashMap<java.lang.String,java.util.Map<java.lang.String,java.lang.Long>>(2*_map298.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key299;
+                @org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Long> _val300;
+                for (int _i301 = 0; _i301 < _map298.size; ++_i301)
                 {
-                  _key269 = iprot.readString();
+                  _key299 = iprot.readString();
                   {
-                    org.apache.storm.thrift.protocol.TMap _map272 = iprot.readMapBegin();
-                    _val270 = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map272.size);
-                    @org.apache.storm.thrift.annotation.Nullable java.lang.String _key273;
-                    long _val274;
-                    for (int _i275 = 0; _i275 < _map272.size; ++_i275)
+                    org.apache.storm.thrift.protocol.TMap _map302 = iprot.readMapBegin();
+                    _val300 = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map302.size);
+                    @org.apache.storm.thrift.annotation.Nullable java.lang.String _key303;
+                    long _val304;
+                    for (int _i305 = 0; _i305 < _map302.size; ++_i305)
                     {
-                      _key273 = iprot.readString();
-                      _val274 = iprot.readI64();
-                      _val270.put(_key273, _val274);
+                      _key303 = iprot.readString();
+                      _val304 = iprot.readI64();
+                      _val300.put(_key303, _val304);
                     }
                     iprot.readMapEnd();
                   }
-                  struct.failed.put(_key269, _val270);
+                  struct.failed.put(_key299, _val300);
                 }
                 iprot.readMapEnd();
               }
@@ -642,27 +642,27 @@
           case 3: // COMPLETE_MS_AVG
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map276 = iprot.readMapBegin();
-                struct.complete_ms_avg = new java.util.HashMap<java.lang.String,java.util.Map<java.lang.String,java.lang.Double>>(2*_map276.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key277;
-                @org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Double> _val278;
-                for (int _i279 = 0; _i279 < _map276.size; ++_i279)
+                org.apache.storm.thrift.protocol.TMap _map306 = iprot.readMapBegin();
+                struct.complete_ms_avg = new java.util.HashMap<java.lang.String,java.util.Map<java.lang.String,java.lang.Double>>(2*_map306.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key307;
+                @org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Double> _val308;
+                for (int _i309 = 0; _i309 < _map306.size; ++_i309)
                 {
-                  _key277 = iprot.readString();
+                  _key307 = iprot.readString();
                   {
-                    org.apache.storm.thrift.protocol.TMap _map280 = iprot.readMapBegin();
-                    _val278 = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map280.size);
-                    @org.apache.storm.thrift.annotation.Nullable java.lang.String _key281;
-                    double _val282;
-                    for (int _i283 = 0; _i283 < _map280.size; ++_i283)
+                    org.apache.storm.thrift.protocol.TMap _map310 = iprot.readMapBegin();
+                    _val308 = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map310.size);
+                    @org.apache.storm.thrift.annotation.Nullable java.lang.String _key311;
+                    double _val312;
+                    for (int _i313 = 0; _i313 < _map310.size; ++_i313)
                     {
-                      _key281 = iprot.readString();
-                      _val282 = iprot.readDouble();
-                      _val278.put(_key281, _val282);
+                      _key311 = iprot.readString();
+                      _val312 = iprot.readDouble();
+                      _val308.put(_key311, _val312);
                     }
                     iprot.readMapEnd();
                   }
-                  struct.complete_ms_avg.put(_key277, _val278);
+                  struct.complete_ms_avg.put(_key307, _val308);
                 }
                 iprot.readMapEnd();
               }
@@ -688,15 +688,15 @@
         oprot.writeFieldBegin(ACKED_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, struct.acked.size()));
-          for (java.util.Map.Entry<java.lang.String, java.util.Map<java.lang.String,java.lang.Long>> _iter284 : struct.acked.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.util.Map<java.lang.String,java.lang.Long>> _iter314 : struct.acked.entrySet())
           {
-            oprot.writeString(_iter284.getKey());
+            oprot.writeString(_iter314.getKey());
             {
-              oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, _iter284.getValue().size()));
-              for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter285 : _iter284.getValue().entrySet())
+              oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, _iter314.getValue().size()));
+              for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter315 : _iter314.getValue().entrySet())
               {
-                oprot.writeString(_iter285.getKey());
-                oprot.writeI64(_iter285.getValue());
+                oprot.writeString(_iter315.getKey());
+                oprot.writeI64(_iter315.getValue());
               }
               oprot.writeMapEnd();
             }
@@ -709,15 +709,15 @@
         oprot.writeFieldBegin(FAILED_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, struct.failed.size()));
-          for (java.util.Map.Entry<java.lang.String, java.util.Map<java.lang.String,java.lang.Long>> _iter286 : struct.failed.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.util.Map<java.lang.String,java.lang.Long>> _iter316 : struct.failed.entrySet())
           {
-            oprot.writeString(_iter286.getKey());
+            oprot.writeString(_iter316.getKey());
             {
-              oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, _iter286.getValue().size()));
-              for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter287 : _iter286.getValue().entrySet())
+              oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, _iter316.getValue().size()));
+              for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter317 : _iter316.getValue().entrySet())
               {
-                oprot.writeString(_iter287.getKey());
-                oprot.writeI64(_iter287.getValue());
+                oprot.writeString(_iter317.getKey());
+                oprot.writeI64(_iter317.getValue());
               }
               oprot.writeMapEnd();
             }
@@ -730,15 +730,15 @@
         oprot.writeFieldBegin(COMPLETE_MS_AVG_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, struct.complete_ms_avg.size()));
-          for (java.util.Map.Entry<java.lang.String, java.util.Map<java.lang.String,java.lang.Double>> _iter288 : struct.complete_ms_avg.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.util.Map<java.lang.String,java.lang.Double>> _iter318 : struct.complete_ms_avg.entrySet())
           {
-            oprot.writeString(_iter288.getKey());
+            oprot.writeString(_iter318.getKey());
             {
-              oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, _iter288.getValue().size()));
-              for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter289 : _iter288.getValue().entrySet())
+              oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, _iter318.getValue().size()));
+              for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter319 : _iter318.getValue().entrySet())
               {
-                oprot.writeString(_iter289.getKey());
-                oprot.writeDouble(_iter289.getValue());
+                oprot.writeString(_iter319.getKey());
+                oprot.writeDouble(_iter319.getValue());
               }
               oprot.writeMapEnd();
             }
@@ -766,45 +766,45 @@
       org.apache.storm.thrift.protocol.TTupleProtocol oprot = (org.apache.storm.thrift.protocol.TTupleProtocol) prot;
       {
         oprot.writeI32(struct.acked.size());
-        for (java.util.Map.Entry<java.lang.String, java.util.Map<java.lang.String,java.lang.Long>> _iter290 : struct.acked.entrySet())
+        for (java.util.Map.Entry<java.lang.String, java.util.Map<java.lang.String,java.lang.Long>> _iter320 : struct.acked.entrySet())
         {
-          oprot.writeString(_iter290.getKey());
+          oprot.writeString(_iter320.getKey());
           {
-            oprot.writeI32(_iter290.getValue().size());
-            for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter291 : _iter290.getValue().entrySet())
+            oprot.writeI32(_iter320.getValue().size());
+            for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter321 : _iter320.getValue().entrySet())
             {
-              oprot.writeString(_iter291.getKey());
-              oprot.writeI64(_iter291.getValue());
+              oprot.writeString(_iter321.getKey());
+              oprot.writeI64(_iter321.getValue());
             }
           }
         }
       }
       {
         oprot.writeI32(struct.failed.size());
-        for (java.util.Map.Entry<java.lang.String, java.util.Map<java.lang.String,java.lang.Long>> _iter292 : struct.failed.entrySet())
+        for (java.util.Map.Entry<java.lang.String, java.util.Map<java.lang.String,java.lang.Long>> _iter322 : struct.failed.entrySet())
         {
-          oprot.writeString(_iter292.getKey());
+          oprot.writeString(_iter322.getKey());
           {
-            oprot.writeI32(_iter292.getValue().size());
-            for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter293 : _iter292.getValue().entrySet())
+            oprot.writeI32(_iter322.getValue().size());
+            for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter323 : _iter322.getValue().entrySet())
             {
-              oprot.writeString(_iter293.getKey());
-              oprot.writeI64(_iter293.getValue());
+              oprot.writeString(_iter323.getKey());
+              oprot.writeI64(_iter323.getValue());
             }
           }
         }
       }
       {
         oprot.writeI32(struct.complete_ms_avg.size());
-        for (java.util.Map.Entry<java.lang.String, java.util.Map<java.lang.String,java.lang.Double>> _iter294 : struct.complete_ms_avg.entrySet())
+        for (java.util.Map.Entry<java.lang.String, java.util.Map<java.lang.String,java.lang.Double>> _iter324 : struct.complete_ms_avg.entrySet())
         {
-          oprot.writeString(_iter294.getKey());
+          oprot.writeString(_iter324.getKey());
           {
-            oprot.writeI32(_iter294.getValue().size());
-            for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter295 : _iter294.getValue().entrySet())
+            oprot.writeI32(_iter324.getValue().size());
+            for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter325 : _iter324.getValue().entrySet())
             {
-              oprot.writeString(_iter295.getKey());
-              oprot.writeDouble(_iter295.getValue());
+              oprot.writeString(_iter325.getKey());
+              oprot.writeDouble(_iter325.getValue());
             }
           }
         }
@@ -815,74 +815,74 @@
     public void read(org.apache.storm.thrift.protocol.TProtocol prot, SpoutStats struct) throws org.apache.storm.thrift.TException {
       org.apache.storm.thrift.protocol.TTupleProtocol iprot = (org.apache.storm.thrift.protocol.TTupleProtocol) prot;
       {
-        org.apache.storm.thrift.protocol.TMap _map296 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, iprot.readI32());
-        struct.acked = new java.util.HashMap<java.lang.String,java.util.Map<java.lang.String,java.lang.Long>>(2*_map296.size);
-        @org.apache.storm.thrift.annotation.Nullable java.lang.String _key297;
-        @org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Long> _val298;
-        for (int _i299 = 0; _i299 < _map296.size; ++_i299)
+        org.apache.storm.thrift.protocol.TMap _map326 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, iprot.readI32());
+        struct.acked = new java.util.HashMap<java.lang.String,java.util.Map<java.lang.String,java.lang.Long>>(2*_map326.size);
+        @org.apache.storm.thrift.annotation.Nullable java.lang.String _key327;
+        @org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Long> _val328;
+        for (int _i329 = 0; _i329 < _map326.size; ++_i329)
         {
-          _key297 = iprot.readString();
+          _key327 = iprot.readString();
           {
-            org.apache.storm.thrift.protocol.TMap _map300 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
-            _val298 = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map300.size);
-            @org.apache.storm.thrift.annotation.Nullable java.lang.String _key301;
-            long _val302;
-            for (int _i303 = 0; _i303 < _map300.size; ++_i303)
+            org.apache.storm.thrift.protocol.TMap _map330 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
+            _val328 = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map330.size);
+            @org.apache.storm.thrift.annotation.Nullable java.lang.String _key331;
+            long _val332;
+            for (int _i333 = 0; _i333 < _map330.size; ++_i333)
             {
-              _key301 = iprot.readString();
-              _val302 = iprot.readI64();
-              _val298.put(_key301, _val302);
+              _key331 = iprot.readString();
+              _val332 = iprot.readI64();
+              _val328.put(_key331, _val332);
             }
           }
-          struct.acked.put(_key297, _val298);
+          struct.acked.put(_key327, _val328);
         }
       }
       struct.set_acked_isSet(true);
       {
-        org.apache.storm.thrift.protocol.TMap _map304 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, iprot.readI32());
-        struct.failed = new java.util.HashMap<java.lang.String,java.util.Map<java.lang.String,java.lang.Long>>(2*_map304.size);
-        @org.apache.storm.thrift.annotation.Nullable java.lang.String _key305;
-        @org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Long> _val306;
-        for (int _i307 = 0; _i307 < _map304.size; ++_i307)
+        org.apache.storm.thrift.protocol.TMap _map334 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, iprot.readI32());
+        struct.failed = new java.util.HashMap<java.lang.String,java.util.Map<java.lang.String,java.lang.Long>>(2*_map334.size);
+        @org.apache.storm.thrift.annotation.Nullable java.lang.String _key335;
+        @org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Long> _val336;
+        for (int _i337 = 0; _i337 < _map334.size; ++_i337)
         {
-          _key305 = iprot.readString();
+          _key335 = iprot.readString();
           {
-            org.apache.storm.thrift.protocol.TMap _map308 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
-            _val306 = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map308.size);
-            @org.apache.storm.thrift.annotation.Nullable java.lang.String _key309;
-            long _val310;
-            for (int _i311 = 0; _i311 < _map308.size; ++_i311)
+            org.apache.storm.thrift.protocol.TMap _map338 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
+            _val336 = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map338.size);
+            @org.apache.storm.thrift.annotation.Nullable java.lang.String _key339;
+            long _val340;
+            for (int _i341 = 0; _i341 < _map338.size; ++_i341)
             {
-              _key309 = iprot.readString();
-              _val310 = iprot.readI64();
-              _val306.put(_key309, _val310);
+              _key339 = iprot.readString();
+              _val340 = iprot.readI64();
+              _val336.put(_key339, _val340);
             }
           }
-          struct.failed.put(_key305, _val306);
+          struct.failed.put(_key335, _val336);
         }
       }
       struct.set_failed_isSet(true);
       {
-        org.apache.storm.thrift.protocol.TMap _map312 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, iprot.readI32());
-        struct.complete_ms_avg = new java.util.HashMap<java.lang.String,java.util.Map<java.lang.String,java.lang.Double>>(2*_map312.size);
-        @org.apache.storm.thrift.annotation.Nullable java.lang.String _key313;
-        @org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Double> _val314;
-        for (int _i315 = 0; _i315 < _map312.size; ++_i315)
+        org.apache.storm.thrift.protocol.TMap _map342 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.MAP, iprot.readI32());
+        struct.complete_ms_avg = new java.util.HashMap<java.lang.String,java.util.Map<java.lang.String,java.lang.Double>>(2*_map342.size);
+        @org.apache.storm.thrift.annotation.Nullable java.lang.String _key343;
+        @org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Double> _val344;
+        for (int _i345 = 0; _i345 < _map342.size; ++_i345)
         {
-          _key313 = iprot.readString();
+          _key343 = iprot.readString();
           {
-            org.apache.storm.thrift.protocol.TMap _map316 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, iprot.readI32());
-            _val314 = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map316.size);
-            @org.apache.storm.thrift.annotation.Nullable java.lang.String _key317;
-            double _val318;
-            for (int _i319 = 0; _i319 < _map316.size; ++_i319)
+            org.apache.storm.thrift.protocol.TMap _map346 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, iprot.readI32());
+            _val344 = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map346.size);
+            @org.apache.storm.thrift.annotation.Nullable java.lang.String _key347;
+            double _val348;
+            for (int _i349 = 0; _i349 < _map346.size; ++_i349)
             {
-              _key317 = iprot.readString();
-              _val318 = iprot.readDouble();
-              _val314.put(_key317, _val318);
+              _key347 = iprot.readString();
+              _val348 = iprot.readDouble();
+              _val344.put(_key347, _val348);
             }
           }
-          struct.complete_ms_avg.put(_key313, _val314);
+          struct.complete_ms_avg.put(_key343, _val344);
         }
       }
       struct.set_complete_ms_avg_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/StateSpoutSpec.java b/storm-client/src/jvm/org/apache/storm/generated/StateSpoutSpec.java
index c387bd5..d14aff7 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/StateSpoutSpec.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/StateSpoutSpec.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class StateSpoutSpec implements org.apache.storm.thrift.TBase<StateSpoutSpec, StateSpoutSpec._Fields>, java.io.Serializable, Cloneable, Comparable<StateSpoutSpec> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("StateSpoutSpec");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/StormBase.java b/storm-client/src/jvm/org/apache/storm/generated/StormBase.java
index 7a68c20..35c29d1 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/StormBase.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/StormBase.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class StormBase implements org.apache.storm.thrift.TBase<StormBase, StormBase._Fields>, java.io.Serializable, Cloneable, Comparable<StormBase> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("StormBase");
 
@@ -1224,15 +1224,15 @@
           case 4: // COMPONENT_EXECUTORS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map752 = iprot.readMapBegin();
-                struct.component_executors = new java.util.HashMap<java.lang.String,java.lang.Integer>(2*_map752.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key753;
-                int _val754;
-                for (int _i755 = 0; _i755 < _map752.size; ++_i755)
+                org.apache.storm.thrift.protocol.TMap _map802 = iprot.readMapBegin();
+                struct.component_executors = new java.util.HashMap<java.lang.String,java.lang.Integer>(2*_map802.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key803;
+                int _val804;
+                for (int _i805 = 0; _i805 < _map802.size; ++_i805)
                 {
-                  _key753 = iprot.readString();
-                  _val754 = iprot.readI32();
-                  struct.component_executors.put(_key753, _val754);
+                  _key803 = iprot.readString();
+                  _val804 = iprot.readI32();
+                  struct.component_executors.put(_key803, _val804);
                 }
                 iprot.readMapEnd();
               }
@@ -1277,16 +1277,16 @@
           case 9: // COMPONENT_DEBUG
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map756 = iprot.readMapBegin();
-                struct.component_debug = new java.util.HashMap<java.lang.String,DebugOptions>(2*_map756.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key757;
-                @org.apache.storm.thrift.annotation.Nullable DebugOptions _val758;
-                for (int _i759 = 0; _i759 < _map756.size; ++_i759)
+                org.apache.storm.thrift.protocol.TMap _map806 = iprot.readMapBegin();
+                struct.component_debug = new java.util.HashMap<java.lang.String,DebugOptions>(2*_map806.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key807;
+                @org.apache.storm.thrift.annotation.Nullable DebugOptions _val808;
+                for (int _i809 = 0; _i809 < _map806.size; ++_i809)
                 {
-                  _key757 = iprot.readString();
-                  _val758 = new DebugOptions();
-                  _val758.read(iprot);
-                  struct.component_debug.put(_key757, _val758);
+                  _key807 = iprot.readString();
+                  _val808 = new DebugOptions();
+                  _val808.read(iprot);
+                  struct.component_debug.put(_key807, _val808);
                 }
                 iprot.readMapEnd();
               }
@@ -1342,10 +1342,10 @@
           oprot.writeFieldBegin(COMPONENT_EXECUTORS_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I32, struct.component_executors.size()));
-            for (java.util.Map.Entry<java.lang.String, java.lang.Integer> _iter760 : struct.component_executors.entrySet())
+            for (java.util.Map.Entry<java.lang.String, java.lang.Integer> _iter810 : struct.component_executors.entrySet())
             {
-              oprot.writeString(_iter760.getKey());
-              oprot.writeI32(_iter760.getValue());
+              oprot.writeString(_iter810.getKey());
+              oprot.writeI32(_iter810.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -1383,10 +1383,10 @@
           oprot.writeFieldBegin(COMPONENT_DEBUG_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRUCT, struct.component_debug.size()));
-            for (java.util.Map.Entry<java.lang.String, DebugOptions> _iter761 : struct.component_debug.entrySet())
+            for (java.util.Map.Entry<java.lang.String, DebugOptions> _iter811 : struct.component_debug.entrySet())
             {
-              oprot.writeString(_iter761.getKey());
-              _iter761.getValue().write(oprot);
+              oprot.writeString(_iter811.getKey());
+              _iter811.getValue().write(oprot);
             }
             oprot.writeMapEnd();
           }
@@ -1456,10 +1456,10 @@
       if (struct.is_set_component_executors()) {
         {
           oprot.writeI32(struct.component_executors.size());
-          for (java.util.Map.Entry<java.lang.String, java.lang.Integer> _iter762 : struct.component_executors.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.lang.Integer> _iter812 : struct.component_executors.entrySet())
           {
-            oprot.writeString(_iter762.getKey());
-            oprot.writeI32(_iter762.getValue());
+            oprot.writeString(_iter812.getKey());
+            oprot.writeI32(_iter812.getValue());
           }
         }
       }
@@ -1478,10 +1478,10 @@
       if (struct.is_set_component_debug()) {
         {
           oprot.writeI32(struct.component_debug.size());
-          for (java.util.Map.Entry<java.lang.String, DebugOptions> _iter763 : struct.component_debug.entrySet())
+          for (java.util.Map.Entry<java.lang.String, DebugOptions> _iter813 : struct.component_debug.entrySet())
           {
-            oprot.writeString(_iter763.getKey());
-            _iter763.getValue().write(oprot);
+            oprot.writeString(_iter813.getKey());
+            _iter813.getValue().write(oprot);
           }
         }
       }
@@ -1505,15 +1505,15 @@
       java.util.BitSet incoming = iprot.readBitSet(8);
       if (incoming.get(0)) {
         {
-          org.apache.storm.thrift.protocol.TMap _map764 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I32, iprot.readI32());
-          struct.component_executors = new java.util.HashMap<java.lang.String,java.lang.Integer>(2*_map764.size);
-          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key765;
-          int _val766;
-          for (int _i767 = 0; _i767 < _map764.size; ++_i767)
+          org.apache.storm.thrift.protocol.TMap _map814 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I32, iprot.readI32());
+          struct.component_executors = new java.util.HashMap<java.lang.String,java.lang.Integer>(2*_map814.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key815;
+          int _val816;
+          for (int _i817 = 0; _i817 < _map814.size; ++_i817)
           {
-            _key765 = iprot.readString();
-            _val766 = iprot.readI32();
-            struct.component_executors.put(_key765, _val766);
+            _key815 = iprot.readString();
+            _val816 = iprot.readI32();
+            struct.component_executors.put(_key815, _val816);
           }
         }
         struct.set_component_executors_isSet(true);
@@ -1537,16 +1537,16 @@
       }
       if (incoming.get(5)) {
         {
-          org.apache.storm.thrift.protocol.TMap _map768 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.component_debug = new java.util.HashMap<java.lang.String,DebugOptions>(2*_map768.size);
-          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key769;
-          @org.apache.storm.thrift.annotation.Nullable DebugOptions _val770;
-          for (int _i771 = 0; _i771 < _map768.size; ++_i771)
+          org.apache.storm.thrift.protocol.TMap _map818 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.component_debug = new java.util.HashMap<java.lang.String,DebugOptions>(2*_map818.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key819;
+          @org.apache.storm.thrift.annotation.Nullable DebugOptions _val820;
+          for (int _i821 = 0; _i821 < _map818.size; ++_i821)
           {
-            _key769 = iprot.readString();
-            _val770 = new DebugOptions();
-            _val770.read(iprot);
-            struct.component_debug.put(_key769, _val770);
+            _key819 = iprot.readString();
+            _val820 = new DebugOptions();
+            _val820.read(iprot);
+            struct.component_debug.put(_key819, _val820);
           }
         }
         struct.set_component_debug_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/StormTopology.java b/storm-client/src/jvm/org/apache/storm/generated/StormTopology.java
index dc1fd18..1bd0c25 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/StormTopology.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/StormTopology.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class StormTopology implements org.apache.storm.thrift.TBase<StormTopology, StormTopology._Fields>, java.io.Serializable, Cloneable, Comparable<StormTopology> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("StormTopology");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/StreamInfo.java b/storm-client/src/jvm/org/apache/storm/generated/StreamInfo.java
index e16f83a..4a102f8 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/StreamInfo.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/StreamInfo.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class StreamInfo implements org.apache.storm.thrift.TBase<StreamInfo, StreamInfo._Fields>, java.io.Serializable, Cloneable, Comparable<StreamInfo> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("StreamInfo");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/SubmitOptions.java b/storm-client/src/jvm/org/apache/storm/generated/SubmitOptions.java
index 6c6a21c..206d59b 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/SubmitOptions.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/SubmitOptions.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class SubmitOptions implements org.apache.storm.thrift.TBase<SubmitOptions, SubmitOptions._Fields>, java.io.Serializable, Cloneable, Comparable<SubmitOptions> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("SubmitOptions");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/Supervisor.java b/storm-client/src/jvm/org/apache/storm/generated/Supervisor.java
index c38d57c..96e3e13 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/Supervisor.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/Supervisor.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class Supervisor {
 
   public interface Iface {
diff --git a/storm-client/src/jvm/org/apache/storm/generated/SupervisorAssignments.java b/storm-client/src/jvm/org/apache/storm/generated/SupervisorAssignments.java
index 7d61955..eb7ccfd 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/SupervisorAssignments.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/SupervisorAssignments.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class SupervisorAssignments implements org.apache.storm.thrift.TBase<SupervisorAssignments, SupervisorAssignments._Fields>, java.io.Serializable, Cloneable, Comparable<SupervisorAssignments> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("SupervisorAssignments");
 
@@ -347,16 +347,16 @@
           case 1: // STORM_ASSIGNMENT
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map886 = iprot.readMapBegin();
-                struct.storm_assignment = new java.util.HashMap<java.lang.String,Assignment>(2*_map886.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key887;
-                @org.apache.storm.thrift.annotation.Nullable Assignment _val888;
-                for (int _i889 = 0; _i889 < _map886.size; ++_i889)
+                org.apache.storm.thrift.protocol.TMap _map936 = iprot.readMapBegin();
+                struct.storm_assignment = new java.util.HashMap<java.lang.String,Assignment>(2*_map936.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key937;
+                @org.apache.storm.thrift.annotation.Nullable Assignment _val938;
+                for (int _i939 = 0; _i939 < _map936.size; ++_i939)
                 {
-                  _key887 = iprot.readString();
-                  _val888 = new Assignment();
-                  _val888.read(iprot);
-                  struct.storm_assignment.put(_key887, _val888);
+                  _key937 = iprot.readString();
+                  _val938 = new Assignment();
+                  _val938.read(iprot);
+                  struct.storm_assignment.put(_key937, _val938);
                 }
                 iprot.readMapEnd();
               }
@@ -383,10 +383,10 @@
           oprot.writeFieldBegin(STORM_ASSIGNMENT_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRUCT, struct.storm_assignment.size()));
-            for (java.util.Map.Entry<java.lang.String, Assignment> _iter890 : struct.storm_assignment.entrySet())
+            for (java.util.Map.Entry<java.lang.String, Assignment> _iter940 : struct.storm_assignment.entrySet())
             {
-              oprot.writeString(_iter890.getKey());
-              _iter890.getValue().write(oprot);
+              oprot.writeString(_iter940.getKey());
+              _iter940.getValue().write(oprot);
             }
             oprot.writeMapEnd();
           }
@@ -418,10 +418,10 @@
       if (struct.is_set_storm_assignment()) {
         {
           oprot.writeI32(struct.storm_assignment.size());
-          for (java.util.Map.Entry<java.lang.String, Assignment> _iter891 : struct.storm_assignment.entrySet())
+          for (java.util.Map.Entry<java.lang.String, Assignment> _iter941 : struct.storm_assignment.entrySet())
           {
-            oprot.writeString(_iter891.getKey());
-            _iter891.getValue().write(oprot);
+            oprot.writeString(_iter941.getKey());
+            _iter941.getValue().write(oprot);
           }
         }
       }
@@ -433,16 +433,16 @@
       java.util.BitSet incoming = iprot.readBitSet(1);
       if (incoming.get(0)) {
         {
-          org.apache.storm.thrift.protocol.TMap _map892 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.storm_assignment = new java.util.HashMap<java.lang.String,Assignment>(2*_map892.size);
-          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key893;
-          @org.apache.storm.thrift.annotation.Nullable Assignment _val894;
-          for (int _i895 = 0; _i895 < _map892.size; ++_i895)
+          org.apache.storm.thrift.protocol.TMap _map942 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.storm_assignment = new java.util.HashMap<java.lang.String,Assignment>(2*_map942.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key943;
+          @org.apache.storm.thrift.annotation.Nullable Assignment _val944;
+          for (int _i945 = 0; _i945 < _map942.size; ++_i945)
           {
-            _key893 = iprot.readString();
-            _val894 = new Assignment();
-            _val894.read(iprot);
-            struct.storm_assignment.put(_key893, _val894);
+            _key943 = iprot.readString();
+            _val944 = new Assignment();
+            _val944.read(iprot);
+            struct.storm_assignment.put(_key943, _val944);
           }
         }
         struct.set_storm_assignment_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/SupervisorInfo.java b/storm-client/src/jvm/org/apache/storm/generated/SupervisorInfo.java
index 0270e5b..af4b9d1 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/SupervisorInfo.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/SupervisorInfo.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class SupervisorInfo implements org.apache.storm.thrift.TBase<SupervisorInfo, SupervisorInfo._Fields>, java.io.Serializable, Cloneable, Comparable<SupervisorInfo> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("SupervisorInfo");
 
@@ -1134,13 +1134,13 @@
           case 4: // USED_PORTS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.LIST) {
               {
-                org.apache.storm.thrift.protocol.TList _list622 = iprot.readListBegin();
-                struct.used_ports = new java.util.ArrayList<java.lang.Long>(_list622.size);
-                long _elem623;
-                for (int _i624 = 0; _i624 < _list622.size; ++_i624)
+                org.apache.storm.thrift.protocol.TList _list672 = iprot.readListBegin();
+                struct.used_ports = new java.util.ArrayList<java.lang.Long>(_list672.size);
+                long _elem673;
+                for (int _i674 = 0; _i674 < _list672.size; ++_i674)
                 {
-                  _elem623 = iprot.readI64();
-                  struct.used_ports.add(_elem623);
+                  _elem673 = iprot.readI64();
+                  struct.used_ports.add(_elem673);
                 }
                 iprot.readListEnd();
               }
@@ -1152,13 +1152,13 @@
           case 5: // META
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.LIST) {
               {
-                org.apache.storm.thrift.protocol.TList _list625 = iprot.readListBegin();
-                struct.meta = new java.util.ArrayList<java.lang.Long>(_list625.size);
-                long _elem626;
-                for (int _i627 = 0; _i627 < _list625.size; ++_i627)
+                org.apache.storm.thrift.protocol.TList _list675 = iprot.readListBegin();
+                struct.meta = new java.util.ArrayList<java.lang.Long>(_list675.size);
+                long _elem676;
+                for (int _i677 = 0; _i677 < _list675.size; ++_i677)
                 {
-                  _elem626 = iprot.readI64();
-                  struct.meta.add(_elem626);
+                  _elem676 = iprot.readI64();
+                  struct.meta.add(_elem676);
                 }
                 iprot.readListEnd();
               }
@@ -1170,15 +1170,15 @@
           case 6: // SCHEDULER_META
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map628 = iprot.readMapBegin();
-                struct.scheduler_meta = new java.util.HashMap<java.lang.String,java.lang.String>(2*_map628.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key629;
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _val630;
-                for (int _i631 = 0; _i631 < _map628.size; ++_i631)
+                org.apache.storm.thrift.protocol.TMap _map678 = iprot.readMapBegin();
+                struct.scheduler_meta = new java.util.HashMap<java.lang.String,java.lang.String>(2*_map678.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key679;
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _val680;
+                for (int _i681 = 0; _i681 < _map678.size; ++_i681)
                 {
-                  _key629 = iprot.readString();
-                  _val630 = iprot.readString();
-                  struct.scheduler_meta.put(_key629, _val630);
+                  _key679 = iprot.readString();
+                  _val680 = iprot.readString();
+                  struct.scheduler_meta.put(_key679, _val680);
                 }
                 iprot.readMapEnd();
               }
@@ -1206,15 +1206,15 @@
           case 9: // RESOURCES_MAP
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map632 = iprot.readMapBegin();
-                struct.resources_map = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map632.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key633;
-                double _val634;
-                for (int _i635 = 0; _i635 < _map632.size; ++_i635)
+                org.apache.storm.thrift.protocol.TMap _map682 = iprot.readMapBegin();
+                struct.resources_map = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map682.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key683;
+                double _val684;
+                for (int _i685 = 0; _i685 < _map682.size; ++_i685)
                 {
-                  _key633 = iprot.readString();
-                  _val634 = iprot.readDouble();
-                  struct.resources_map.put(_key633, _val634);
+                  _key683 = iprot.readString();
+                  _val684 = iprot.readDouble();
+                  struct.resources_map.put(_key683, _val684);
                 }
                 iprot.readMapEnd();
               }
@@ -1264,9 +1264,9 @@
           oprot.writeFieldBegin(USED_PORTS_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.I64, struct.used_ports.size()));
-            for (long _iter636 : struct.used_ports)
+            for (long _iter686 : struct.used_ports)
             {
-              oprot.writeI64(_iter636);
+              oprot.writeI64(_iter686);
             }
             oprot.writeListEnd();
           }
@@ -1278,9 +1278,9 @@
           oprot.writeFieldBegin(META_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.I64, struct.meta.size()));
-            for (long _iter637 : struct.meta)
+            for (long _iter687 : struct.meta)
             {
-              oprot.writeI64(_iter637);
+              oprot.writeI64(_iter687);
             }
             oprot.writeListEnd();
           }
@@ -1292,10 +1292,10 @@
           oprot.writeFieldBegin(SCHEDULER_META_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRING, struct.scheduler_meta.size()));
-            for (java.util.Map.Entry<java.lang.String, java.lang.String> _iter638 : struct.scheduler_meta.entrySet())
+            for (java.util.Map.Entry<java.lang.String, java.lang.String> _iter688 : struct.scheduler_meta.entrySet())
             {
-              oprot.writeString(_iter638.getKey());
-              oprot.writeString(_iter638.getValue());
+              oprot.writeString(_iter688.getKey());
+              oprot.writeString(_iter688.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -1319,10 +1319,10 @@
           oprot.writeFieldBegin(RESOURCES_MAP_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, struct.resources_map.size()));
-            for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter639 : struct.resources_map.entrySet())
+            for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter689 : struct.resources_map.entrySet())
             {
-              oprot.writeString(_iter639.getKey());
-              oprot.writeDouble(_iter639.getValue());
+              oprot.writeString(_iter689.getKey());
+              oprot.writeDouble(_iter689.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -1385,28 +1385,28 @@
       if (struct.is_set_used_ports()) {
         {
           oprot.writeI32(struct.used_ports.size());
-          for (long _iter640 : struct.used_ports)
+          for (long _iter690 : struct.used_ports)
           {
-            oprot.writeI64(_iter640);
+            oprot.writeI64(_iter690);
           }
         }
       }
       if (struct.is_set_meta()) {
         {
           oprot.writeI32(struct.meta.size());
-          for (long _iter641 : struct.meta)
+          for (long _iter691 : struct.meta)
           {
-            oprot.writeI64(_iter641);
+            oprot.writeI64(_iter691);
           }
         }
       }
       if (struct.is_set_scheduler_meta()) {
         {
           oprot.writeI32(struct.scheduler_meta.size());
-          for (java.util.Map.Entry<java.lang.String, java.lang.String> _iter642 : struct.scheduler_meta.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.lang.String> _iter692 : struct.scheduler_meta.entrySet())
           {
-            oprot.writeString(_iter642.getKey());
-            oprot.writeString(_iter642.getValue());
+            oprot.writeString(_iter692.getKey());
+            oprot.writeString(_iter692.getValue());
           }
         }
       }
@@ -1419,10 +1419,10 @@
       if (struct.is_set_resources_map()) {
         {
           oprot.writeI32(struct.resources_map.size());
-          for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter643 : struct.resources_map.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter693 : struct.resources_map.entrySet())
           {
-            oprot.writeString(_iter643.getKey());
-            oprot.writeDouble(_iter643.getValue());
+            oprot.writeString(_iter693.getKey());
+            oprot.writeDouble(_iter693.getValue());
           }
         }
       }
@@ -1445,41 +1445,41 @@
       }
       if (incoming.get(1)) {
         {
-          org.apache.storm.thrift.protocol.TList _list644 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
-          struct.used_ports = new java.util.ArrayList<java.lang.Long>(_list644.size);
-          long _elem645;
-          for (int _i646 = 0; _i646 < _list644.size; ++_i646)
+          org.apache.storm.thrift.protocol.TList _list694 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
+          struct.used_ports = new java.util.ArrayList<java.lang.Long>(_list694.size);
+          long _elem695;
+          for (int _i696 = 0; _i696 < _list694.size; ++_i696)
           {
-            _elem645 = iprot.readI64();
-            struct.used_ports.add(_elem645);
+            _elem695 = iprot.readI64();
+            struct.used_ports.add(_elem695);
           }
         }
         struct.set_used_ports_isSet(true);
       }
       if (incoming.get(2)) {
         {
-          org.apache.storm.thrift.protocol.TList _list647 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
-          struct.meta = new java.util.ArrayList<java.lang.Long>(_list647.size);
-          long _elem648;
-          for (int _i649 = 0; _i649 < _list647.size; ++_i649)
+          org.apache.storm.thrift.protocol.TList _list697 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
+          struct.meta = new java.util.ArrayList<java.lang.Long>(_list697.size);
+          long _elem698;
+          for (int _i699 = 0; _i699 < _list697.size; ++_i699)
           {
-            _elem648 = iprot.readI64();
-            struct.meta.add(_elem648);
+            _elem698 = iprot.readI64();
+            struct.meta.add(_elem698);
           }
         }
         struct.set_meta_isSet(true);
       }
       if (incoming.get(3)) {
         {
-          org.apache.storm.thrift.protocol.TMap _map650 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRING, iprot.readI32());
-          struct.scheduler_meta = new java.util.HashMap<java.lang.String,java.lang.String>(2*_map650.size);
-          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key651;
-          @org.apache.storm.thrift.annotation.Nullable java.lang.String _val652;
-          for (int _i653 = 0; _i653 < _map650.size; ++_i653)
+          org.apache.storm.thrift.protocol.TMap _map700 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRING, iprot.readI32());
+          struct.scheduler_meta = new java.util.HashMap<java.lang.String,java.lang.String>(2*_map700.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key701;
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _val702;
+          for (int _i703 = 0; _i703 < _map700.size; ++_i703)
           {
-            _key651 = iprot.readString();
-            _val652 = iprot.readString();
-            struct.scheduler_meta.put(_key651, _val652);
+            _key701 = iprot.readString();
+            _val702 = iprot.readString();
+            struct.scheduler_meta.put(_key701, _val702);
           }
         }
         struct.set_scheduler_meta_isSet(true);
@@ -1494,15 +1494,15 @@
       }
       if (incoming.get(6)) {
         {
-          org.apache.storm.thrift.protocol.TMap _map654 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, iprot.readI32());
-          struct.resources_map = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map654.size);
-          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key655;
-          double _val656;
-          for (int _i657 = 0; _i657 < _map654.size; ++_i657)
+          org.apache.storm.thrift.protocol.TMap _map704 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, iprot.readI32());
+          struct.resources_map = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map704.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key705;
+          double _val706;
+          for (int _i707 = 0; _i707 < _map704.size; ++_i707)
           {
-            _key655 = iprot.readString();
-            _val656 = iprot.readDouble();
-            struct.resources_map.put(_key655, _val656);
+            _key705 = iprot.readString();
+            _val706 = iprot.readDouble();
+            struct.resources_map.put(_key705, _val706);
           }
         }
         struct.set_resources_map_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/SupervisorPageInfo.java b/storm-client/src/jvm/org/apache/storm/generated/SupervisorPageInfo.java
index a77f242..9c51e56 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/SupervisorPageInfo.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/SupervisorPageInfo.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class SupervisorPageInfo implements org.apache.storm.thrift.TBase<SupervisorPageInfo, SupervisorPageInfo._Fields>, java.io.Serializable, Cloneable, Comparable<SupervisorPageInfo> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("SupervisorPageInfo");
 
@@ -442,14 +442,14 @@
           case 1: // SUPERVISOR_SUMMARIES
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.LIST) {
               {
-                org.apache.storm.thrift.protocol.TList _list466 = iprot.readListBegin();
-                struct.supervisor_summaries = new java.util.ArrayList<SupervisorSummary>(_list466.size);
-                @org.apache.storm.thrift.annotation.Nullable SupervisorSummary _elem467;
-                for (int _i468 = 0; _i468 < _list466.size; ++_i468)
+                org.apache.storm.thrift.protocol.TList _list496 = iprot.readListBegin();
+                struct.supervisor_summaries = new java.util.ArrayList<SupervisorSummary>(_list496.size);
+                @org.apache.storm.thrift.annotation.Nullable SupervisorSummary _elem497;
+                for (int _i498 = 0; _i498 < _list496.size; ++_i498)
                 {
-                  _elem467 = new SupervisorSummary();
-                  _elem467.read(iprot);
-                  struct.supervisor_summaries.add(_elem467);
+                  _elem497 = new SupervisorSummary();
+                  _elem497.read(iprot);
+                  struct.supervisor_summaries.add(_elem497);
                 }
                 iprot.readListEnd();
               }
@@ -461,14 +461,14 @@
           case 2: // WORKER_SUMMARIES
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.LIST) {
               {
-                org.apache.storm.thrift.protocol.TList _list469 = iprot.readListBegin();
-                struct.worker_summaries = new java.util.ArrayList<WorkerSummary>(_list469.size);
-                @org.apache.storm.thrift.annotation.Nullable WorkerSummary _elem470;
-                for (int _i471 = 0; _i471 < _list469.size; ++_i471)
+                org.apache.storm.thrift.protocol.TList _list499 = iprot.readListBegin();
+                struct.worker_summaries = new java.util.ArrayList<WorkerSummary>(_list499.size);
+                @org.apache.storm.thrift.annotation.Nullable WorkerSummary _elem500;
+                for (int _i501 = 0; _i501 < _list499.size; ++_i501)
                 {
-                  _elem470 = new WorkerSummary();
-                  _elem470.read(iprot);
-                  struct.worker_summaries.add(_elem470);
+                  _elem500 = new WorkerSummary();
+                  _elem500.read(iprot);
+                  struct.worker_summaries.add(_elem500);
                 }
                 iprot.readListEnd();
               }
@@ -495,9 +495,9 @@
           oprot.writeFieldBegin(SUPERVISOR_SUMMARIES_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, struct.supervisor_summaries.size()));
-            for (SupervisorSummary _iter472 : struct.supervisor_summaries)
+            for (SupervisorSummary _iter502 : struct.supervisor_summaries)
             {
-              _iter472.write(oprot);
+              _iter502.write(oprot);
             }
             oprot.writeListEnd();
           }
@@ -509,9 +509,9 @@
           oprot.writeFieldBegin(WORKER_SUMMARIES_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, struct.worker_summaries.size()));
-            for (WorkerSummary _iter473 : struct.worker_summaries)
+            for (WorkerSummary _iter503 : struct.worker_summaries)
             {
-              _iter473.write(oprot);
+              _iter503.write(oprot);
             }
             oprot.writeListEnd();
           }
@@ -546,18 +546,18 @@
       if (struct.is_set_supervisor_summaries()) {
         {
           oprot.writeI32(struct.supervisor_summaries.size());
-          for (SupervisorSummary _iter474 : struct.supervisor_summaries)
+          for (SupervisorSummary _iter504 : struct.supervisor_summaries)
           {
-            _iter474.write(oprot);
+            _iter504.write(oprot);
           }
         }
       }
       if (struct.is_set_worker_summaries()) {
         {
           oprot.writeI32(struct.worker_summaries.size());
-          for (WorkerSummary _iter475 : struct.worker_summaries)
+          for (WorkerSummary _iter505 : struct.worker_summaries)
           {
-            _iter475.write(oprot);
+            _iter505.write(oprot);
           }
         }
       }
@@ -569,28 +569,28 @@
       java.util.BitSet incoming = iprot.readBitSet(2);
       if (incoming.get(0)) {
         {
-          org.apache.storm.thrift.protocol.TList _list476 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.supervisor_summaries = new java.util.ArrayList<SupervisorSummary>(_list476.size);
-          @org.apache.storm.thrift.annotation.Nullable SupervisorSummary _elem477;
-          for (int _i478 = 0; _i478 < _list476.size; ++_i478)
+          org.apache.storm.thrift.protocol.TList _list506 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.supervisor_summaries = new java.util.ArrayList<SupervisorSummary>(_list506.size);
+          @org.apache.storm.thrift.annotation.Nullable SupervisorSummary _elem507;
+          for (int _i508 = 0; _i508 < _list506.size; ++_i508)
           {
-            _elem477 = new SupervisorSummary();
-            _elem477.read(iprot);
-            struct.supervisor_summaries.add(_elem477);
+            _elem507 = new SupervisorSummary();
+            _elem507.read(iprot);
+            struct.supervisor_summaries.add(_elem507);
           }
         }
         struct.set_supervisor_summaries_isSet(true);
       }
       if (incoming.get(1)) {
         {
-          org.apache.storm.thrift.protocol.TList _list479 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.worker_summaries = new java.util.ArrayList<WorkerSummary>(_list479.size);
-          @org.apache.storm.thrift.annotation.Nullable WorkerSummary _elem480;
-          for (int _i481 = 0; _i481 < _list479.size; ++_i481)
+          org.apache.storm.thrift.protocol.TList _list509 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.worker_summaries = new java.util.ArrayList<WorkerSummary>(_list509.size);
+          @org.apache.storm.thrift.annotation.Nullable WorkerSummary _elem510;
+          for (int _i511 = 0; _i511 < _list509.size; ++_i511)
           {
-            _elem480 = new WorkerSummary();
-            _elem480.read(iprot);
-            struct.worker_summaries.add(_elem480);
+            _elem510 = new WorkerSummary();
+            _elem510.read(iprot);
+            struct.worker_summaries.add(_elem510);
           }
         }
         struct.set_worker_summaries_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/SupervisorSummary.java b/storm-client/src/jvm/org/apache/storm/generated/SupervisorSummary.java
index 3e39bad..c6886fc 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/SupervisorSummary.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/SupervisorSummary.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class SupervisorSummary implements org.apache.storm.thrift.TBase<SupervisorSummary, SupervisorSummary._Fields>, java.io.Serializable, Cloneable, Comparable<SupervisorSummary> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("SupervisorSummary");
 
@@ -40,6 +40,7 @@
   private static final org.apache.storm.thrift.protocol.TField FRAGMENTED_MEM_FIELD_DESC = new org.apache.storm.thrift.protocol.TField("fragmented_mem", org.apache.storm.thrift.protocol.TType.DOUBLE, (short)10);
   private static final org.apache.storm.thrift.protocol.TField FRAGMENTED_CPU_FIELD_DESC = new org.apache.storm.thrift.protocol.TField("fragmented_cpu", org.apache.storm.thrift.protocol.TType.DOUBLE, (short)11);
   private static final org.apache.storm.thrift.protocol.TField BLACKLISTED_FIELD_DESC = new org.apache.storm.thrift.protocol.TField("blacklisted", org.apache.storm.thrift.protocol.TType.BOOL, (short)12);
+  private static final org.apache.storm.thrift.protocol.TField USED_GENERIC_RESOURCES_FIELD_DESC = new org.apache.storm.thrift.protocol.TField("used_generic_resources", org.apache.storm.thrift.protocol.TType.MAP, (short)13);
 
   private static final org.apache.storm.thrift.scheme.SchemeFactory STANDARD_SCHEME_FACTORY = new SupervisorSummaryStandardSchemeFactory();
   private static final org.apache.storm.thrift.scheme.SchemeFactory TUPLE_SCHEME_FACTORY = new SupervisorSummaryTupleSchemeFactory();
@@ -56,6 +57,7 @@
   private double fragmented_mem; // optional
   private double fragmented_cpu; // optional
   private boolean blacklisted; // optional
+  private @org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Double> used_generic_resources; // optional
 
   /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
   public enum _Fields implements org.apache.storm.thrift.TFieldIdEnum {
@@ -70,7 +72,8 @@
     USED_CPU((short)9, "used_cpu"),
     FRAGMENTED_MEM((short)10, "fragmented_mem"),
     FRAGMENTED_CPU((short)11, "fragmented_cpu"),
-    BLACKLISTED((short)12, "blacklisted");
+    BLACKLISTED((short)12, "blacklisted"),
+    USED_GENERIC_RESOURCES((short)13, "used_generic_resources");
 
     private static final java.util.Map<java.lang.String, _Fields> byName = new java.util.HashMap<java.lang.String, _Fields>();
 
@@ -110,6 +113,8 @@
           return FRAGMENTED_CPU;
         case 12: // BLACKLISTED
           return BLACKLISTED;
+        case 13: // USED_GENERIC_RESOURCES
+          return USED_GENERIC_RESOURCES;
         default:
           return null;
       }
@@ -160,7 +165,7 @@
   private static final int __FRAGMENTED_CPU_ISSET_ID = 6;
   private static final int __BLACKLISTED_ISSET_ID = 7;
   private byte __isset_bitfield = 0;
-  private static final _Fields optionals[] = {_Fields.VERSION,_Fields.TOTAL_RESOURCES,_Fields.USED_MEM,_Fields.USED_CPU,_Fields.FRAGMENTED_MEM,_Fields.FRAGMENTED_CPU,_Fields.BLACKLISTED};
+  private static final _Fields optionals[] = {_Fields.VERSION,_Fields.TOTAL_RESOURCES,_Fields.USED_MEM,_Fields.USED_CPU,_Fields.FRAGMENTED_MEM,_Fields.FRAGMENTED_CPU,_Fields.BLACKLISTED,_Fields.USED_GENERIC_RESOURCES};
   public static final java.util.Map<_Fields, org.apache.storm.thrift.meta_data.FieldMetaData> metaDataMap;
   static {
     java.util.Map<_Fields, org.apache.storm.thrift.meta_data.FieldMetaData> tmpMap = new java.util.EnumMap<_Fields, org.apache.storm.thrift.meta_data.FieldMetaData>(_Fields.class);
@@ -190,6 +195,10 @@
         new org.apache.storm.thrift.meta_data.FieldValueMetaData(org.apache.storm.thrift.protocol.TType.DOUBLE)));
     tmpMap.put(_Fields.BLACKLISTED, new org.apache.storm.thrift.meta_data.FieldMetaData("blacklisted", org.apache.storm.thrift.TFieldRequirementType.OPTIONAL, 
         new org.apache.storm.thrift.meta_data.FieldValueMetaData(org.apache.storm.thrift.protocol.TType.BOOL)));
+    tmpMap.put(_Fields.USED_GENERIC_RESOURCES, new org.apache.storm.thrift.meta_data.FieldMetaData("used_generic_resources", org.apache.storm.thrift.TFieldRequirementType.OPTIONAL, 
+        new org.apache.storm.thrift.meta_data.MapMetaData(org.apache.storm.thrift.protocol.TType.MAP, 
+            new org.apache.storm.thrift.meta_data.FieldValueMetaData(org.apache.storm.thrift.protocol.TType.STRING), 
+            new org.apache.storm.thrift.meta_data.FieldValueMetaData(org.apache.storm.thrift.protocol.TType.DOUBLE))));
     metaDataMap = java.util.Collections.unmodifiableMap(tmpMap);
     org.apache.storm.thrift.meta_data.FieldMetaData.addStructMetaDataMap(SupervisorSummary.class, metaDataMap);
   }
@@ -243,6 +252,10 @@
     this.fragmented_mem = other.fragmented_mem;
     this.fragmented_cpu = other.fragmented_cpu;
     this.blacklisted = other.blacklisted;
+    if (other.is_set_used_generic_resources()) {
+      java.util.Map<java.lang.String,java.lang.Double> __this__used_generic_resources = new java.util.HashMap<java.lang.String,java.lang.Double>(other.used_generic_resources);
+      this.used_generic_resources = __this__used_generic_resources;
+    }
   }
 
   public SupervisorSummary deepCopy() {
@@ -272,6 +285,7 @@
     this.fragmented_cpu = 0.0;
     set_blacklisted_isSet(false);
     this.blacklisted = false;
+    this.used_generic_resources = null;
   }
 
   @org.apache.storm.thrift.annotation.Nullable
@@ -557,6 +571,41 @@
     __isset_bitfield = org.apache.storm.thrift.EncodingUtils.setBit(__isset_bitfield, __BLACKLISTED_ISSET_ID, value);
   }
 
+  public int get_used_generic_resources_size() {
+    return (this.used_generic_resources == null) ? 0 : this.used_generic_resources.size();
+  }
+
+  public void put_to_used_generic_resources(java.lang.String key, double val) {
+    if (this.used_generic_resources == null) {
+      this.used_generic_resources = new java.util.HashMap<java.lang.String,java.lang.Double>();
+    }
+    this.used_generic_resources.put(key, val);
+  }
+
+  @org.apache.storm.thrift.annotation.Nullable
+  public java.util.Map<java.lang.String,java.lang.Double> get_used_generic_resources() {
+    return this.used_generic_resources;
+  }
+
+  public void set_used_generic_resources(@org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Double> used_generic_resources) {
+    this.used_generic_resources = used_generic_resources;
+  }
+
+  public void unset_used_generic_resources() {
+    this.used_generic_resources = null;
+  }
+
+  /** Returns true if field used_generic_resources is set (has been assigned a value) and false otherwise */
+  public boolean is_set_used_generic_resources() {
+    return this.used_generic_resources != null;
+  }
+
+  public void set_used_generic_resources_isSet(boolean value) {
+    if (!value) {
+      this.used_generic_resources = null;
+    }
+  }
+
   public void setFieldValue(_Fields field, @org.apache.storm.thrift.annotation.Nullable java.lang.Object value) {
     switch (field) {
     case HOST:
@@ -655,6 +704,14 @@
       }
       break;
 
+    case USED_GENERIC_RESOURCES:
+      if (value == null) {
+        unset_used_generic_resources();
+      } else {
+        set_used_generic_resources((java.util.Map<java.lang.String,java.lang.Double>)value);
+      }
+      break;
+
     }
   }
 
@@ -697,6 +754,9 @@
     case BLACKLISTED:
       return is_blacklisted();
 
+    case USED_GENERIC_RESOURCES:
+      return get_used_generic_resources();
+
     }
     throw new java.lang.IllegalStateException();
   }
@@ -732,6 +792,8 @@
       return is_set_fragmented_cpu();
     case BLACKLISTED:
       return is_set_blacklisted();
+    case USED_GENERIC_RESOURCES:
+      return is_set_used_generic_resources();
     }
     throw new java.lang.IllegalStateException();
   }
@@ -859,6 +921,15 @@
         return false;
     }
 
+    boolean this_present_used_generic_resources = true && this.is_set_used_generic_resources();
+    boolean that_present_used_generic_resources = true && that.is_set_used_generic_resources();
+    if (this_present_used_generic_resources || that_present_used_generic_resources) {
+      if (!(this_present_used_generic_resources && that_present_used_generic_resources))
+        return false;
+      if (!this.used_generic_resources.equals(that.used_generic_resources))
+        return false;
+    }
+
     return true;
   }
 
@@ -908,6 +979,10 @@
     if (is_set_blacklisted())
       hashCode = hashCode * 8191 + ((blacklisted) ? 131071 : 524287);
 
+    hashCode = hashCode * 8191 + ((is_set_used_generic_resources()) ? 131071 : 524287);
+    if (is_set_used_generic_resources())
+      hashCode = hashCode * 8191 + used_generic_resources.hashCode();
+
     return hashCode;
   }
 
@@ -1039,6 +1114,16 @@
         return lastComparison;
       }
     }
+    lastComparison = java.lang.Boolean.valueOf(is_set_used_generic_resources()).compareTo(other.is_set_used_generic_resources());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    if (is_set_used_generic_resources()) {
+      lastComparison = org.apache.storm.thrift.TBaseHelper.compareTo(this.used_generic_resources, other.used_generic_resources);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+    }
     return 0;
   }
 
@@ -1137,6 +1222,16 @@
       sb.append(this.blacklisted);
       first = false;
     }
+    if (is_set_used_generic_resources()) {
+      if (!first) sb.append(", ");
+      sb.append("used_generic_resources:");
+      if (this.used_generic_resources == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.used_generic_resources);
+      }
+      first = false;
+    }
     sb.append(")");
     return sb.toString();
   }
@@ -1253,15 +1348,15 @@
           case 7: // TOTAL_RESOURCES
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map126 = iprot.readMapBegin();
-                struct.total_resources = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map126.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key127;
-                double _val128;
-                for (int _i129 = 0; _i129 < _map126.size; ++_i129)
+                org.apache.storm.thrift.protocol.TMap _map146 = iprot.readMapBegin();
+                struct.total_resources = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map146.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key147;
+                double _val148;
+                for (int _i149 = 0; _i149 < _map146.size; ++_i149)
                 {
-                  _key127 = iprot.readString();
-                  _val128 = iprot.readDouble();
-                  struct.total_resources.put(_key127, _val128);
+                  _key147 = iprot.readString();
+                  _val148 = iprot.readDouble();
+                  struct.total_resources.put(_key147, _val148);
                 }
                 iprot.readMapEnd();
               }
@@ -1310,6 +1405,26 @@
               org.apache.storm.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
             }
             break;
+          case 13: // USED_GENERIC_RESOURCES
+            if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
+              {
+                org.apache.storm.thrift.protocol.TMap _map150 = iprot.readMapBegin();
+                struct.used_generic_resources = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map150.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key151;
+                double _val152;
+                for (int _i153 = 0; _i153 < _map150.size; ++_i153)
+                {
+                  _key151 = iprot.readString();
+                  _val152 = iprot.readDouble();
+                  struct.used_generic_resources.put(_key151, _val152);
+                }
+                iprot.readMapEnd();
+              }
+              struct.set_used_generic_resources_isSet(true);
+            } else { 
+              org.apache.storm.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+            }
+            break;
           default:
             org.apache.storm.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
         }
@@ -1354,10 +1469,10 @@
           oprot.writeFieldBegin(TOTAL_RESOURCES_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, struct.total_resources.size()));
-            for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter130 : struct.total_resources.entrySet())
+            for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter154 : struct.total_resources.entrySet())
             {
-              oprot.writeString(_iter130.getKey());
-              oprot.writeDouble(_iter130.getValue());
+              oprot.writeString(_iter154.getKey());
+              oprot.writeDouble(_iter154.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -1389,6 +1504,21 @@
         oprot.writeBool(struct.blacklisted);
         oprot.writeFieldEnd();
       }
+      if (struct.used_generic_resources != null) {
+        if (struct.is_set_used_generic_resources()) {
+          oprot.writeFieldBegin(USED_GENERIC_RESOURCES_FIELD_DESC);
+          {
+            oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, struct.used_generic_resources.size()));
+            for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter155 : struct.used_generic_resources.entrySet())
+            {
+              oprot.writeString(_iter155.getKey());
+              oprot.writeDouble(_iter155.getValue());
+            }
+            oprot.writeMapEnd();
+          }
+          oprot.writeFieldEnd();
+        }
+      }
       oprot.writeFieldStop();
       oprot.writeStructEnd();
     }
@@ -1433,17 +1563,20 @@
       if (struct.is_set_blacklisted()) {
         optionals.set(6);
       }
-      oprot.writeBitSet(optionals, 7);
+      if (struct.is_set_used_generic_resources()) {
+        optionals.set(7);
+      }
+      oprot.writeBitSet(optionals, 8);
       if (struct.is_set_version()) {
         oprot.writeString(struct.version);
       }
       if (struct.is_set_total_resources()) {
         {
           oprot.writeI32(struct.total_resources.size());
-          for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter131 : struct.total_resources.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter156 : struct.total_resources.entrySet())
           {
-            oprot.writeString(_iter131.getKey());
-            oprot.writeDouble(_iter131.getValue());
+            oprot.writeString(_iter156.getKey());
+            oprot.writeDouble(_iter156.getValue());
           }
         }
       }
@@ -1462,6 +1595,16 @@
       if (struct.is_set_blacklisted()) {
         oprot.writeBool(struct.blacklisted);
       }
+      if (struct.is_set_used_generic_resources()) {
+        {
+          oprot.writeI32(struct.used_generic_resources.size());
+          for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter157 : struct.used_generic_resources.entrySet())
+          {
+            oprot.writeString(_iter157.getKey());
+            oprot.writeDouble(_iter157.getValue());
+          }
+        }
+      }
     }
 
     @Override
@@ -1477,22 +1620,22 @@
       struct.set_num_used_workers_isSet(true);
       struct.supervisor_id = iprot.readString();
       struct.set_supervisor_id_isSet(true);
-      java.util.BitSet incoming = iprot.readBitSet(7);
+      java.util.BitSet incoming = iprot.readBitSet(8);
       if (incoming.get(0)) {
         struct.version = iprot.readString();
         struct.set_version_isSet(true);
       }
       if (incoming.get(1)) {
         {
-          org.apache.storm.thrift.protocol.TMap _map132 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, iprot.readI32());
-          struct.total_resources = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map132.size);
-          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key133;
-          double _val134;
-          for (int _i135 = 0; _i135 < _map132.size; ++_i135)
+          org.apache.storm.thrift.protocol.TMap _map158 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, iprot.readI32());
+          struct.total_resources = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map158.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key159;
+          double _val160;
+          for (int _i161 = 0; _i161 < _map158.size; ++_i161)
           {
-            _key133 = iprot.readString();
-            _val134 = iprot.readDouble();
-            struct.total_resources.put(_key133, _val134);
+            _key159 = iprot.readString();
+            _val160 = iprot.readDouble();
+            struct.total_resources.put(_key159, _val160);
           }
         }
         struct.set_total_resources_isSet(true);
@@ -1517,6 +1660,21 @@
         struct.blacklisted = iprot.readBool();
         struct.set_blacklisted_isSet(true);
       }
+      if (incoming.get(7)) {
+        {
+          org.apache.storm.thrift.protocol.TMap _map162 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, iprot.readI32());
+          struct.used_generic_resources = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map162.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key163;
+          double _val164;
+          for (int _i165 = 0; _i165 < _map162.size; ++_i165)
+          {
+            _key163 = iprot.readString();
+            _val164 = iprot.readDouble();
+            struct.used_generic_resources.put(_key163, _val164);
+          }
+        }
+        struct.set_used_generic_resources_isSet(true);
+      }
     }
   }
 
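The hunks above widen the TupleScheme presence vector from `writeBitSet(optionals, 7)` / `readBitSet(7)` to 8 bits because `used_generic_resources` is the struct's 8th optional field. A minimal standalone sketch (not the real Thrift runtime; class and method names here are hypothetical) of how such a presence vector packs one bit per optional field into `ceil(n/8)` bytes, LSB-first within each byte:

```java
import java.util.BitSet;

// Hypothetical sketch of a TupleScheme-style optional-field bitmap.
// Bit i == 1 means optional field i is present on the wire.
public class OptionalBitSetSketch {
    // Pack the first nbits flags into ceil(nbits/8) bytes, LSB-first.
    static byte[] toBytes(BitSet bits, int nbits) {
        byte[] out = new byte[(nbits + 7) / 8];
        for (int i = 0; i < nbits; i++) {
            if (bits.get(i)) {
                out[i / 8] |= (byte) (1 << (i % 8));
            }
        }
        return out;
    }

    // Inverse of toBytes: recover the presence flags from the wire bytes.
    static BitSet fromBytes(byte[] in, int nbits) {
        BitSet bits = new BitSet(nbits);
        for (int i = 0; i < nbits; i++) {
            if ((in[i / 8] & (1 << (i % 8))) != 0) {
                bits.set(i);
            }
        }
        return bits;
    }

    public static void main(String[] args) {
        BitSet optionals = new BitSet();
        optionals.set(1);  // e.g. total_resources is set
        optionals.set(7);  // e.g. the new used_generic_resources is set
        byte[] wire = toBytes(optionals, 8);  // 8 flags still fit in 1 byte
        BitSet decoded = fromBytes(wire, 8);
        System.out.println(wire.length + " " + decoded.get(7) + " " + decoded.get(0));
    }
}
```

Note that moving from 7 to 8 flags does not change the wire size (both fit in one byte); the width argument matters so old readers and new writers agree on how many bits are meaningful.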
diff --git a/storm-client/src/jvm/org/apache/storm/generated/SupervisorWorkerHeartbeat.java b/storm-client/src/jvm/org/apache/storm/generated/SupervisorWorkerHeartbeat.java
index 497a447..a518062 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/SupervisorWorkerHeartbeat.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/SupervisorWorkerHeartbeat.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class SupervisorWorkerHeartbeat implements org.apache.storm.thrift.TBase<SupervisorWorkerHeartbeat, SupervisorWorkerHeartbeat._Fields>, java.io.Serializable, Cloneable, Comparable<SupervisorWorkerHeartbeat> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("SupervisorWorkerHeartbeat");
 
@@ -523,14 +523,14 @@
           case 2: // EXECUTORS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.LIST) {
               {
-                org.apache.storm.thrift.protocol.TList _list870 = iprot.readListBegin();
-                struct.executors = new java.util.ArrayList<ExecutorInfo>(_list870.size);
-                @org.apache.storm.thrift.annotation.Nullable ExecutorInfo _elem871;
-                for (int _i872 = 0; _i872 < _list870.size; ++_i872)
+                org.apache.storm.thrift.protocol.TList _list920 = iprot.readListBegin();
+                struct.executors = new java.util.ArrayList<ExecutorInfo>(_list920.size);
+                @org.apache.storm.thrift.annotation.Nullable ExecutorInfo _elem921;
+                for (int _i922 = 0; _i922 < _list920.size; ++_i922)
                 {
-                  _elem871 = new ExecutorInfo();
-                  _elem871.read(iprot);
-                  struct.executors.add(_elem871);
+                  _elem921 = new ExecutorInfo();
+                  _elem921.read(iprot);
+                  struct.executors.add(_elem921);
                 }
                 iprot.readListEnd();
               }
@@ -569,9 +569,9 @@
         oprot.writeFieldBegin(EXECUTORS_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, struct.executors.size()));
-          for (ExecutorInfo _iter873 : struct.executors)
+          for (ExecutorInfo _iter923 : struct.executors)
           {
-            _iter873.write(oprot);
+            _iter923.write(oprot);
           }
           oprot.writeListEnd();
         }
@@ -600,9 +600,9 @@
       oprot.writeString(struct.storm_id);
       {
         oprot.writeI32(struct.executors.size());
-        for (ExecutorInfo _iter874 : struct.executors)
+        for (ExecutorInfo _iter924 : struct.executors)
         {
-          _iter874.write(oprot);
+          _iter924.write(oprot);
         }
       }
       oprot.writeI32(struct.time_secs);
@@ -614,14 +614,14 @@
       struct.storm_id = iprot.readString();
       struct.set_storm_id_isSet(true);
       {
-        org.apache.storm.thrift.protocol.TList _list875 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-        struct.executors = new java.util.ArrayList<ExecutorInfo>(_list875.size);
-        @org.apache.storm.thrift.annotation.Nullable ExecutorInfo _elem876;
-        for (int _i877 = 0; _i877 < _list875.size; ++_i877)
+        org.apache.storm.thrift.protocol.TList _list925 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+        struct.executors = new java.util.ArrayList<ExecutorInfo>(_list925.size);
+        @org.apache.storm.thrift.annotation.Nullable ExecutorInfo _elem926;
+        for (int _i927 = 0; _i927 < _list925.size; ++_i927)
         {
-          _elem876 = new ExecutorInfo();
-          _elem876.read(iprot);
-          struct.executors.add(_elem876);
+          _elem926 = new ExecutorInfo();
+          _elem926.read(iprot);
+          struct.executors.add(_elem926);
         }
       }
       struct.set_executors_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/SupervisorWorkerHeartbeats.java b/storm-client/src/jvm/org/apache/storm/generated/SupervisorWorkerHeartbeats.java
index 20edbfb..bb9611d 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/SupervisorWorkerHeartbeats.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/SupervisorWorkerHeartbeats.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class SupervisorWorkerHeartbeats implements org.apache.storm.thrift.TBase<SupervisorWorkerHeartbeats, SupervisorWorkerHeartbeats._Fields>, java.io.Serializable, Cloneable, Comparable<SupervisorWorkerHeartbeats> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("SupervisorWorkerHeartbeats");
 
@@ -441,14 +441,14 @@
           case 2: // WORKER_HEARTBEATS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.LIST) {
               {
-                org.apache.storm.thrift.protocol.TList _list878 = iprot.readListBegin();
-                struct.worker_heartbeats = new java.util.ArrayList<SupervisorWorkerHeartbeat>(_list878.size);
-                @org.apache.storm.thrift.annotation.Nullable SupervisorWorkerHeartbeat _elem879;
-                for (int _i880 = 0; _i880 < _list878.size; ++_i880)
+                org.apache.storm.thrift.protocol.TList _list928 = iprot.readListBegin();
+                struct.worker_heartbeats = new java.util.ArrayList<SupervisorWorkerHeartbeat>(_list928.size);
+                @org.apache.storm.thrift.annotation.Nullable SupervisorWorkerHeartbeat _elem929;
+                for (int _i930 = 0; _i930 < _list928.size; ++_i930)
                 {
-                  _elem879 = new SupervisorWorkerHeartbeat();
-                  _elem879.read(iprot);
-                  struct.worker_heartbeats.add(_elem879);
+                  _elem929 = new SupervisorWorkerHeartbeat();
+                  _elem929.read(iprot);
+                  struct.worker_heartbeats.add(_elem929);
                 }
                 iprot.readListEnd();
               }
@@ -479,9 +479,9 @@
         oprot.writeFieldBegin(WORKER_HEARTBEATS_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, struct.worker_heartbeats.size()));
-          for (SupervisorWorkerHeartbeat _iter881 : struct.worker_heartbeats)
+          for (SupervisorWorkerHeartbeat _iter931 : struct.worker_heartbeats)
           {
-            _iter881.write(oprot);
+            _iter931.write(oprot);
           }
           oprot.writeListEnd();
         }
@@ -507,9 +507,9 @@
       oprot.writeString(struct.supervisor_id);
       {
         oprot.writeI32(struct.worker_heartbeats.size());
-        for (SupervisorWorkerHeartbeat _iter882 : struct.worker_heartbeats)
+        for (SupervisorWorkerHeartbeat _iter932 : struct.worker_heartbeats)
         {
-          _iter882.write(oprot);
+          _iter932.write(oprot);
         }
       }
     }
@@ -520,14 +520,14 @@
       struct.supervisor_id = iprot.readString();
       struct.set_supervisor_id_isSet(true);
       {
-        org.apache.storm.thrift.protocol.TList _list883 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-        struct.worker_heartbeats = new java.util.ArrayList<SupervisorWorkerHeartbeat>(_list883.size);
-        @org.apache.storm.thrift.annotation.Nullable SupervisorWorkerHeartbeat _elem884;
-        for (int _i885 = 0; _i885 < _list883.size; ++_i885)
+        org.apache.storm.thrift.protocol.TList _list933 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+        struct.worker_heartbeats = new java.util.ArrayList<SupervisorWorkerHeartbeat>(_list933.size);
+        @org.apache.storm.thrift.annotation.Nullable SupervisorWorkerHeartbeat _elem934;
+        for (int _i935 = 0; _i935 < _list933.size; ++_i935)
         {
-          _elem884 = new SupervisorWorkerHeartbeat();
-          _elem884.read(iprot);
-          struct.worker_heartbeats.add(_elem884);
+          _elem934 = new SupervisorWorkerHeartbeat();
+          _elem934.read(iprot);
+          struct.worker_heartbeats.add(_elem934);
         }
       }
       struct.set_worker_heartbeats_isSet(true);
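The generated map readers and writers in these diffs all follow one pattern: emit an i32 entry count, then alternating key/value pairs, and on read preallocate a `HashMap` sized `2 * size` to avoid rehashing. A self-contained sketch of that round trip, using `DataOutputStream` as a stand-in for the Thrift protocol objects (the class and stream choice here are illustrative, not Storm or Thrift API):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the string->double map wire pattern used for
// fields like used_generic_resources.
public class ResourceMapSketch {
    static byte[] write(Map<String, Double> m) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(m.size());                // entry count, like writeI32
        for (Map.Entry<String, Double> e : m.entrySet()) {
            out.writeUTF(e.getKey());          // stand-in for writeString
            out.writeDouble(e.getValue());     // stand-in for writeDouble
        }
        return buf.toByteArray();
    }

    static Map<String, Double> read(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        int size = in.readInt();
        // Generated code preallocates 2*size buckets, as seen above.
        Map<String, Double> m = new HashMap<>(2 * size);
        for (int i = 0; i < size; i++) {
            m.put(in.readUTF(), in.readDouble());
        }
        return m;
    }

    public static void main(String[] args) throws IOException {
        Map<String, Double> res = new HashMap<>();
        res.put("gpu", 2.0);
        res.put("fpga", 1.0);
        System.out.println(read(write(res)).equals(res));
    }
}
```

The list-of-structs fields (`executors`, `worker_heartbeats`) use the same count-then-elements shape, with each element delegating to its own `read`/`write`.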
diff --git a/storm-client/src/jvm/org/apache/storm/generated/ThriftSerializedObject.java b/storm-client/src/jvm/org/apache/storm/generated/ThriftSerializedObject.java
index 8a77daa..7ec1f7b 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/ThriftSerializedObject.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/ThriftSerializedObject.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class ThriftSerializedObject implements org.apache.storm.thrift.TBase<ThriftSerializedObject, ThriftSerializedObject._Fields>, java.io.Serializable, Cloneable, Comparable<ThriftSerializedObject> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("ThriftSerializedObject");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/TopologyActionOptions.java b/storm-client/src/jvm/org/apache/storm/generated/TopologyActionOptions.java
index f0ce214..c1500bd 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/TopologyActionOptions.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/TopologyActionOptions.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class TopologyActionOptions extends org.apache.storm.thrift.TUnion<TopologyActionOptions, TopologyActionOptions._Fields> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("TopologyActionOptions");
   private static final org.apache.storm.thrift.protocol.TField KILL_OPTIONS_FIELD_DESC = new org.apache.storm.thrift.protocol.TField("kill_options", org.apache.storm.thrift.protocol.TType.STRUCT, (short)1);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/TopologyHistoryInfo.java b/storm-client/src/jvm/org/apache/storm/generated/TopologyHistoryInfo.java
index f88a7fc..6c00192 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/TopologyHistoryInfo.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/TopologyHistoryInfo.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class TopologyHistoryInfo implements org.apache.storm.thrift.TBase<TopologyHistoryInfo, TopologyHistoryInfo._Fields>, java.io.Serializable, Cloneable, Comparable<TopologyHistoryInfo> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("TopologyHistoryInfo");
 
@@ -341,13 +341,13 @@
           case 1: // TOPO_IDS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.LIST) {
               {
-                org.apache.storm.thrift.protocol.TList _list862 = iprot.readListBegin();
-                struct.topo_ids = new java.util.ArrayList<java.lang.String>(_list862.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _elem863;
-                for (int _i864 = 0; _i864 < _list862.size; ++_i864)
+                org.apache.storm.thrift.protocol.TList _list912 = iprot.readListBegin();
+                struct.topo_ids = new java.util.ArrayList<java.lang.String>(_list912.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _elem913;
+                for (int _i914 = 0; _i914 < _list912.size; ++_i914)
                 {
-                  _elem863 = iprot.readString();
-                  struct.topo_ids.add(_elem863);
+                  _elem913 = iprot.readString();
+                  struct.topo_ids.add(_elem913);
                 }
                 iprot.readListEnd();
               }
@@ -373,9 +373,9 @@
         oprot.writeFieldBegin(TOPO_IDS_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRING, struct.topo_ids.size()));
-          for (java.lang.String _iter865 : struct.topo_ids)
+          for (java.lang.String _iter915 : struct.topo_ids)
           {
-            oprot.writeString(_iter865);
+            oprot.writeString(_iter915);
           }
           oprot.writeListEnd();
         }
@@ -406,9 +406,9 @@
       if (struct.is_set_topo_ids()) {
         {
           oprot.writeI32(struct.topo_ids.size());
-          for (java.lang.String _iter866 : struct.topo_ids)
+          for (java.lang.String _iter916 : struct.topo_ids)
           {
-            oprot.writeString(_iter866);
+            oprot.writeString(_iter916);
           }
         }
       }
@@ -420,13 +420,13 @@
       java.util.BitSet incoming = iprot.readBitSet(1);
       if (incoming.get(0)) {
         {
-          org.apache.storm.thrift.protocol.TList _list867 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRING, iprot.readI32());
-          struct.topo_ids = new java.util.ArrayList<java.lang.String>(_list867.size);
-          @org.apache.storm.thrift.annotation.Nullable java.lang.String _elem868;
-          for (int _i869 = 0; _i869 < _list867.size; ++_i869)
+          org.apache.storm.thrift.protocol.TList _list917 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRING, iprot.readI32());
+          struct.topo_ids = new java.util.ArrayList<java.lang.String>(_list917.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _elem918;
+          for (int _i919 = 0; _i919 < _list917.size; ++_i919)
           {
-            _elem868 = iprot.readString();
-            struct.topo_ids.add(_elem868);
+            _elem918 = iprot.readString();
+            struct.topo_ids.add(_elem918);
           }
         }
         struct.set_topo_ids_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/TopologyInfo.java b/storm-client/src/jvm/org/apache/storm/generated/TopologyInfo.java
index 0596f79..0ff7b57 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/TopologyInfo.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/TopologyInfo.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class TopologyInfo implements org.apache.storm.thrift.TBase<TopologyInfo, TopologyInfo._Fields>, java.io.Serializable, Cloneable, Comparable<TopologyInfo> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("TopologyInfo");
 
@@ -1698,14 +1698,14 @@
           case 4: // EXECUTORS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.LIST) {
               {
-                org.apache.storm.thrift.protocol.TList _list360 = iprot.readListBegin();
-                struct.executors = new java.util.ArrayList<ExecutorSummary>(_list360.size);
-                @org.apache.storm.thrift.annotation.Nullable ExecutorSummary _elem361;
-                for (int _i362 = 0; _i362 < _list360.size; ++_i362)
+                org.apache.storm.thrift.protocol.TList _list390 = iprot.readListBegin();
+                struct.executors = new java.util.ArrayList<ExecutorSummary>(_list390.size);
+                @org.apache.storm.thrift.annotation.Nullable ExecutorSummary _elem391;
+                for (int _i392 = 0; _i392 < _list390.size; ++_i392)
                 {
-                  _elem361 = new ExecutorSummary();
-                  _elem361.read(iprot);
-                  struct.executors.add(_elem361);
+                  _elem391 = new ExecutorSummary();
+                  _elem391.read(iprot);
+                  struct.executors.add(_elem391);
                 }
                 iprot.readListEnd();
               }
@@ -1725,26 +1725,26 @@
           case 6: // ERRORS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map363 = iprot.readMapBegin();
-                struct.errors = new java.util.HashMap<java.lang.String,java.util.List<ErrorInfo>>(2*_map363.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key364;
-                @org.apache.storm.thrift.annotation.Nullable java.util.List<ErrorInfo> _val365;
-                for (int _i366 = 0; _i366 < _map363.size; ++_i366)
+                org.apache.storm.thrift.protocol.TMap _map393 = iprot.readMapBegin();
+                struct.errors = new java.util.HashMap<java.lang.String,java.util.List<ErrorInfo>>(2*_map393.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key394;
+                @org.apache.storm.thrift.annotation.Nullable java.util.List<ErrorInfo> _val395;
+                for (int _i396 = 0; _i396 < _map393.size; ++_i396)
                 {
-                  _key364 = iprot.readString();
+                  _key394 = iprot.readString();
                   {
-                    org.apache.storm.thrift.protocol.TList _list367 = iprot.readListBegin();
-                    _val365 = new java.util.ArrayList<ErrorInfo>(_list367.size);
-                    @org.apache.storm.thrift.annotation.Nullable ErrorInfo _elem368;
-                    for (int _i369 = 0; _i369 < _list367.size; ++_i369)
+                    org.apache.storm.thrift.protocol.TList _list397 = iprot.readListBegin();
+                    _val395 = new java.util.ArrayList<ErrorInfo>(_list397.size);
+                    @org.apache.storm.thrift.annotation.Nullable ErrorInfo _elem398;
+                    for (int _i399 = 0; _i399 < _list397.size; ++_i399)
                     {
-                      _elem368 = new ErrorInfo();
-                      _elem368.read(iprot);
-                      _val365.add(_elem368);
+                      _elem398 = new ErrorInfo();
+                      _elem398.read(iprot);
+                      _val395.add(_elem398);
                     }
                     iprot.readListEnd();
                   }
-                  struct.errors.put(_key364, _val365);
+                  struct.errors.put(_key394, _val395);
                 }
                 iprot.readMapEnd();
               }
@@ -1756,16 +1756,16 @@
           case 7: // COMPONENT_DEBUG
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map370 = iprot.readMapBegin();
-                struct.component_debug = new java.util.HashMap<java.lang.String,DebugOptions>(2*_map370.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key371;
-                @org.apache.storm.thrift.annotation.Nullable DebugOptions _val372;
-                for (int _i373 = 0; _i373 < _map370.size; ++_i373)
+                org.apache.storm.thrift.protocol.TMap _map400 = iprot.readMapBegin();
+                struct.component_debug = new java.util.HashMap<java.lang.String,DebugOptions>(2*_map400.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key401;
+                @org.apache.storm.thrift.annotation.Nullable DebugOptions _val402;
+                for (int _i403 = 0; _i403 < _map400.size; ++_i403)
                 {
-                  _key371 = iprot.readString();
-                  _val372 = new DebugOptions();
-                  _val372.read(iprot);
-                  struct.component_debug.put(_key371, _val372);
+                  _key401 = iprot.readString();
+                  _val402 = new DebugOptions();
+                  _val402.read(iprot);
+                  struct.component_debug.put(_key401, _val402);
                 }
                 iprot.readMapEnd();
               }
@@ -1884,9 +1884,9 @@
         oprot.writeFieldBegin(EXECUTORS_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, struct.executors.size()));
-          for (ExecutorSummary _iter374 : struct.executors)
+          for (ExecutorSummary _iter404 : struct.executors)
           {
-            _iter374.write(oprot);
+            _iter404.write(oprot);
           }
           oprot.writeListEnd();
         }
@@ -1901,14 +1901,14 @@
         oprot.writeFieldBegin(ERRORS_FIELD_DESC);
         {
           oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.LIST, struct.errors.size()));
-          for (java.util.Map.Entry<java.lang.String, java.util.List<ErrorInfo>> _iter375 : struct.errors.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.util.List<ErrorInfo>> _iter405 : struct.errors.entrySet())
           {
-            oprot.writeString(_iter375.getKey());
+            oprot.writeString(_iter405.getKey());
             {
-              oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, _iter375.getValue().size()));
-              for (ErrorInfo _iter376 : _iter375.getValue())
+              oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, _iter405.getValue().size()));
+              for (ErrorInfo _iter406 : _iter405.getValue())
               {
-                _iter376.write(oprot);
+                _iter406.write(oprot);
               }
               oprot.writeListEnd();
             }
@@ -1922,10 +1922,10 @@
           oprot.writeFieldBegin(COMPONENT_DEBUG_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRUCT, struct.component_debug.size()));
-            for (java.util.Map.Entry<java.lang.String, DebugOptions> _iter377 : struct.component_debug.entrySet())
+            for (java.util.Map.Entry<java.lang.String, DebugOptions> _iter407 : struct.component_debug.entrySet())
             {
-              oprot.writeString(_iter377.getKey());
-              _iter377.getValue().write(oprot);
+              oprot.writeString(_iter407.getKey());
+              _iter407.getValue().write(oprot);
             }
             oprot.writeMapEnd();
           }
@@ -2010,22 +2010,22 @@
       oprot.writeI32(struct.uptime_secs);
       {
         oprot.writeI32(struct.executors.size());
-        for (ExecutorSummary _iter378 : struct.executors)
+        for (ExecutorSummary _iter408 : struct.executors)
         {
-          _iter378.write(oprot);
+          _iter408.write(oprot);
         }
       }
       oprot.writeString(struct.status);
       {
         oprot.writeI32(struct.errors.size());
-        for (java.util.Map.Entry<java.lang.String, java.util.List<ErrorInfo>> _iter379 : struct.errors.entrySet())
+        for (java.util.Map.Entry<java.lang.String, java.util.List<ErrorInfo>> _iter409 : struct.errors.entrySet())
         {
-          oprot.writeString(_iter379.getKey());
+          oprot.writeString(_iter409.getKey());
           {
-            oprot.writeI32(_iter379.getValue().size());
-            for (ErrorInfo _iter380 : _iter379.getValue())
+            oprot.writeI32(_iter409.getValue().size());
+            for (ErrorInfo _iter410 : _iter409.getValue())
             {
-              _iter380.write(oprot);
+              _iter410.write(oprot);
             }
           }
         }
@@ -2068,10 +2068,10 @@
       if (struct.is_set_component_debug()) {
         {
           oprot.writeI32(struct.component_debug.size());
-          for (java.util.Map.Entry<java.lang.String, DebugOptions> _iter381 : struct.component_debug.entrySet())
+          for (java.util.Map.Entry<java.lang.String, DebugOptions> _iter411 : struct.component_debug.entrySet())
           {
-            oprot.writeString(_iter381.getKey());
-            _iter381.getValue().write(oprot);
+            oprot.writeString(_iter411.getKey());
+            _iter411.getValue().write(oprot);
           }
         }
       }
@@ -2117,55 +2117,55 @@
       struct.uptime_secs = iprot.readI32();
       struct.set_uptime_secs_isSet(true);
       {
-        org.apache.storm.thrift.protocol.TList _list382 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-        struct.executors = new java.util.ArrayList<ExecutorSummary>(_list382.size);
-        @org.apache.storm.thrift.annotation.Nullable ExecutorSummary _elem383;
-        for (int _i384 = 0; _i384 < _list382.size; ++_i384)
+        org.apache.storm.thrift.protocol.TList _list412 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+        struct.executors = new java.util.ArrayList<ExecutorSummary>(_list412.size);
+        @org.apache.storm.thrift.annotation.Nullable ExecutorSummary _elem413;
+        for (int _i414 = 0; _i414 < _list412.size; ++_i414)
         {
-          _elem383 = new ExecutorSummary();
-          _elem383.read(iprot);
-          struct.executors.add(_elem383);
+          _elem413 = new ExecutorSummary();
+          _elem413.read(iprot);
+          struct.executors.add(_elem413);
         }
       }
       struct.set_executors_isSet(true);
       struct.status = iprot.readString();
       struct.set_status_isSet(true);
       {
-        org.apache.storm.thrift.protocol.TMap _map385 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.LIST, iprot.readI32());
-        struct.errors = new java.util.HashMap<java.lang.String,java.util.List<ErrorInfo>>(2*_map385.size);
-        @org.apache.storm.thrift.annotation.Nullable java.lang.String _key386;
-        @org.apache.storm.thrift.annotation.Nullable java.util.List<ErrorInfo> _val387;
-        for (int _i388 = 0; _i388 < _map385.size; ++_i388)
+        org.apache.storm.thrift.protocol.TMap _map415 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.LIST, iprot.readI32());
+        struct.errors = new java.util.HashMap<java.lang.String,java.util.List<ErrorInfo>>(2*_map415.size);
+        @org.apache.storm.thrift.annotation.Nullable java.lang.String _key416;
+        @org.apache.storm.thrift.annotation.Nullable java.util.List<ErrorInfo> _val417;
+        for (int _i418 = 0; _i418 < _map415.size; ++_i418)
         {
-          _key386 = iprot.readString();
+          _key416 = iprot.readString();
           {
-            org.apache.storm.thrift.protocol.TList _list389 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-            _val387 = new java.util.ArrayList<ErrorInfo>(_list389.size);
-            @org.apache.storm.thrift.annotation.Nullable ErrorInfo _elem390;
-            for (int _i391 = 0; _i391 < _list389.size; ++_i391)
+            org.apache.storm.thrift.protocol.TList _list419 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+            _val417 = new java.util.ArrayList<ErrorInfo>(_list419.size);
+            @org.apache.storm.thrift.annotation.Nullable ErrorInfo _elem420;
+            for (int _i421 = 0; _i421 < _list419.size; ++_i421)
             {
-              _elem390 = new ErrorInfo();
-              _elem390.read(iprot);
-              _val387.add(_elem390);
+              _elem420 = new ErrorInfo();
+              _elem420.read(iprot);
+              _val417.add(_elem420);
             }
           }
-          struct.errors.put(_key386, _val387);
+          struct.errors.put(_key416, _val417);
         }
       }
       struct.set_errors_isSet(true);
       java.util.BitSet incoming = iprot.readBitSet(11);
       if (incoming.get(0)) {
         {
-          org.apache.storm.thrift.protocol.TMap _map392 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.component_debug = new java.util.HashMap<java.lang.String,DebugOptions>(2*_map392.size);
-          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key393;
-          @org.apache.storm.thrift.annotation.Nullable DebugOptions _val394;
-          for (int _i395 = 0; _i395 < _map392.size; ++_i395)
+          org.apache.storm.thrift.protocol.TMap _map422 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.component_debug = new java.util.HashMap<java.lang.String,DebugOptions>(2*_map422.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key423;
+          @org.apache.storm.thrift.annotation.Nullable DebugOptions _val424;
+          for (int _i425 = 0; _i425 < _map422.size; ++_i425)
           {
-            _key393 = iprot.readString();
-            _val394 = new DebugOptions();
-            _val394.read(iprot);
-            struct.component_debug.put(_key393, _val394);
+            _key423 = iprot.readString();
+            _val424 = new DebugOptions();
+            _val424.read(iprot);
+            struct.component_debug.put(_key423, _val424);
           }
         }
         struct.set_component_debug_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/TopologyInitialStatus.java b/storm-client/src/jvm/org/apache/storm/generated/TopologyInitialStatus.java
index e55e7fa..114d0f4 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/TopologyInitialStatus.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/TopologyInitialStatus.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public enum TopologyInitialStatus implements org.apache.storm.thrift.TEnum {
   ACTIVE(1),
   INACTIVE(2);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/TopologyPageInfo.java b/storm-client/src/jvm/org/apache/storm/generated/TopologyPageInfo.java
index 90d950a..5c028e0 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/TopologyPageInfo.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/TopologyPageInfo.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class TopologyPageInfo implements org.apache.storm.thrift.TBase<TopologyPageInfo, TopologyPageInfo._Fields>, java.io.Serializable, Cloneable, Comparable<TopologyPageInfo> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("TopologyPageInfo");
 
@@ -60,6 +60,8 @@
   private static final org.apache.storm.thrift.protocol.TField ASSIGNED_SHARED_ON_HEAP_MEMORY_FIELD_DESC = new org.apache.storm.thrift.protocol.TField("assigned_shared_on_heap_memory", org.apache.storm.thrift.protocol.TType.DOUBLE, (short)532);
   private static final org.apache.storm.thrift.protocol.TField ASSIGNED_REGULAR_OFF_HEAP_MEMORY_FIELD_DESC = new org.apache.storm.thrift.protocol.TField("assigned_regular_off_heap_memory", org.apache.storm.thrift.protocol.TType.DOUBLE, (short)533);
   private static final org.apache.storm.thrift.protocol.TField ASSIGNED_SHARED_OFF_HEAP_MEMORY_FIELD_DESC = new org.apache.storm.thrift.protocol.TField("assigned_shared_off_heap_memory", org.apache.storm.thrift.protocol.TType.DOUBLE, (short)534);
+  private static final org.apache.storm.thrift.protocol.TField REQUESTED_GENERIC_RESOURCES_FIELD_DESC = new org.apache.storm.thrift.protocol.TField("requested_generic_resources", org.apache.storm.thrift.protocol.TType.MAP, (short)535);
+  private static final org.apache.storm.thrift.protocol.TField ASSIGNED_GENERIC_RESOURCES_FIELD_DESC = new org.apache.storm.thrift.protocol.TField("assigned_generic_resources", org.apache.storm.thrift.protocol.TType.MAP, (short)536);
 
   private static final org.apache.storm.thrift.scheme.SchemeFactory STANDARD_SCHEME_FACTORY = new TopologyPageInfoStandardSchemeFactory();
   private static final org.apache.storm.thrift.scheme.SchemeFactory TUPLE_SCHEME_FACTORY = new TopologyPageInfoTupleSchemeFactory();
@@ -96,6 +98,8 @@
   private double assigned_shared_on_heap_memory; // optional
   private double assigned_regular_off_heap_memory; // optional
   private double assigned_shared_off_heap_memory; // optional
+  private @org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Double> requested_generic_resources; // optional
+  private @org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Double> assigned_generic_resources; // optional
 
   /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
   public enum _Fields implements org.apache.storm.thrift.TFieldIdEnum {
@@ -130,7 +134,9 @@
     ASSIGNED_REGULAR_ON_HEAP_MEMORY((short)531, "assigned_regular_on_heap_memory"),
     ASSIGNED_SHARED_ON_HEAP_MEMORY((short)532, "assigned_shared_on_heap_memory"),
     ASSIGNED_REGULAR_OFF_HEAP_MEMORY((short)533, "assigned_regular_off_heap_memory"),
-    ASSIGNED_SHARED_OFF_HEAP_MEMORY((short)534, "assigned_shared_off_heap_memory");
+    ASSIGNED_SHARED_OFF_HEAP_MEMORY((short)534, "assigned_shared_off_heap_memory"),
+    REQUESTED_GENERIC_RESOURCES((short)535, "requested_generic_resources"),
+    ASSIGNED_GENERIC_RESOURCES((short)536, "assigned_generic_resources");
 
     private static final java.util.Map<java.lang.String, _Fields> byName = new java.util.HashMap<java.lang.String, _Fields>();
 
@@ -210,6 +216,10 @@
           return ASSIGNED_REGULAR_OFF_HEAP_MEMORY;
         case 534: // ASSIGNED_SHARED_OFF_HEAP_MEMORY
           return ASSIGNED_SHARED_OFF_HEAP_MEMORY;
+        case 535: // REQUESTED_GENERIC_RESOURCES
+          return REQUESTED_GENERIC_RESOURCES;
+        case 536: // ASSIGNED_GENERIC_RESOURCES
+          return ASSIGNED_GENERIC_RESOURCES;
         default:
           return null;
       }
@@ -271,7 +281,7 @@
   private static final int __ASSIGNED_REGULAR_OFF_HEAP_MEMORY_ISSET_ID = 17;
   private static final int __ASSIGNED_SHARED_OFF_HEAP_MEMORY_ISSET_ID = 18;
   private int __isset_bitfield = 0;
-  private static final _Fields optionals[] = {_Fields.NAME,_Fields.UPTIME_SECS,_Fields.STATUS,_Fields.NUM_TASKS,_Fields.NUM_WORKERS,_Fields.NUM_EXECUTORS,_Fields.TOPOLOGY_CONF,_Fields.ID_TO_SPOUT_AGG_STATS,_Fields.ID_TO_BOLT_AGG_STATS,_Fields.SCHED_STATUS,_Fields.TOPOLOGY_STATS,_Fields.OWNER,_Fields.DEBUG_OPTIONS,_Fields.REPLICATION_COUNT,_Fields.WORKERS,_Fields.STORM_VERSION,_Fields.TOPOLOGY_VERSION,_Fields.REQUESTED_MEMONHEAP,_Fields.REQUESTED_MEMOFFHEAP,_Fields.REQUESTED_CPU,_Fields.ASSIGNED_MEMONHEAP,_Fields.ASSIGNED_MEMOFFHEAP,_Fields.ASSIGNED_CPU,_Fields.REQUESTED_REGULAR_ON_HEAP_MEMORY,_Fields.REQUESTED_SHARED_ON_HEAP_MEMORY,_Fields.REQUESTED_REGULAR_OFF_HEAP_MEMORY,_Fields.REQUESTED_SHARED_OFF_HEAP_MEMORY,_Fields.ASSIGNED_REGULAR_ON_HEAP_MEMORY,_Fields.ASSIGNED_SHARED_ON_HEAP_MEMORY,_Fields.ASSIGNED_REGULAR_OFF_HEAP_MEMORY,_Fields.ASSIGNED_SHARED_OFF_HEAP_MEMORY};
+  private static final _Fields optionals[] = {_Fields.NAME,_Fields.UPTIME_SECS,_Fields.STATUS,_Fields.NUM_TASKS,_Fields.NUM_WORKERS,_Fields.NUM_EXECUTORS,_Fields.TOPOLOGY_CONF,_Fields.ID_TO_SPOUT_AGG_STATS,_Fields.ID_TO_BOLT_AGG_STATS,_Fields.SCHED_STATUS,_Fields.TOPOLOGY_STATS,_Fields.OWNER,_Fields.DEBUG_OPTIONS,_Fields.REPLICATION_COUNT,_Fields.WORKERS,_Fields.STORM_VERSION,_Fields.TOPOLOGY_VERSION,_Fields.REQUESTED_MEMONHEAP,_Fields.REQUESTED_MEMOFFHEAP,_Fields.REQUESTED_CPU,_Fields.ASSIGNED_MEMONHEAP,_Fields.ASSIGNED_MEMOFFHEAP,_Fields.ASSIGNED_CPU,_Fields.REQUESTED_REGULAR_ON_HEAP_MEMORY,_Fields.REQUESTED_SHARED_ON_HEAP_MEMORY,_Fields.REQUESTED_REGULAR_OFF_HEAP_MEMORY,_Fields.REQUESTED_SHARED_OFF_HEAP_MEMORY,_Fields.ASSIGNED_REGULAR_ON_HEAP_MEMORY,_Fields.ASSIGNED_SHARED_ON_HEAP_MEMORY,_Fields.ASSIGNED_REGULAR_OFF_HEAP_MEMORY,_Fields.ASSIGNED_SHARED_OFF_HEAP_MEMORY,_Fields.REQUESTED_GENERIC_RESOURCES,_Fields.ASSIGNED_GENERIC_RESOURCES};
   public static final java.util.Map<_Fields, org.apache.storm.thrift.meta_data.FieldMetaData> metaDataMap;
   static {
     java.util.Map<_Fields, org.apache.storm.thrift.meta_data.FieldMetaData> tmpMap = new java.util.EnumMap<_Fields, org.apache.storm.thrift.meta_data.FieldMetaData>(_Fields.class);
@@ -344,6 +354,14 @@
         new org.apache.storm.thrift.meta_data.FieldValueMetaData(org.apache.storm.thrift.protocol.TType.DOUBLE)));
     tmpMap.put(_Fields.ASSIGNED_SHARED_OFF_HEAP_MEMORY, new org.apache.storm.thrift.meta_data.FieldMetaData("assigned_shared_off_heap_memory", org.apache.storm.thrift.TFieldRequirementType.OPTIONAL, 
         new org.apache.storm.thrift.meta_data.FieldValueMetaData(org.apache.storm.thrift.protocol.TType.DOUBLE)));
+    tmpMap.put(_Fields.REQUESTED_GENERIC_RESOURCES, new org.apache.storm.thrift.meta_data.FieldMetaData("requested_generic_resources", org.apache.storm.thrift.TFieldRequirementType.OPTIONAL, 
+        new org.apache.storm.thrift.meta_data.MapMetaData(org.apache.storm.thrift.protocol.TType.MAP, 
+            new org.apache.storm.thrift.meta_data.FieldValueMetaData(org.apache.storm.thrift.protocol.TType.STRING), 
+            new org.apache.storm.thrift.meta_data.FieldValueMetaData(org.apache.storm.thrift.protocol.TType.DOUBLE))));
+    tmpMap.put(_Fields.ASSIGNED_GENERIC_RESOURCES, new org.apache.storm.thrift.meta_data.FieldMetaData("assigned_generic_resources", org.apache.storm.thrift.TFieldRequirementType.OPTIONAL, 
+        new org.apache.storm.thrift.meta_data.MapMetaData(org.apache.storm.thrift.protocol.TType.MAP, 
+            new org.apache.storm.thrift.meta_data.FieldValueMetaData(org.apache.storm.thrift.protocol.TType.STRING), 
+            new org.apache.storm.thrift.meta_data.FieldValueMetaData(org.apache.storm.thrift.protocol.TType.DOUBLE))));
     metaDataMap = java.util.Collections.unmodifiableMap(tmpMap);
     org.apache.storm.thrift.meta_data.FieldMetaData.addStructMetaDataMap(TopologyPageInfo.class, metaDataMap);
   }
@@ -449,6 +467,14 @@
     this.assigned_shared_on_heap_memory = other.assigned_shared_on_heap_memory;
     this.assigned_regular_off_heap_memory = other.assigned_regular_off_heap_memory;
     this.assigned_shared_off_heap_memory = other.assigned_shared_off_heap_memory;
+    if (other.is_set_requested_generic_resources()) {
+      java.util.Map<java.lang.String,java.lang.Double> __this__requested_generic_resources = new java.util.HashMap<java.lang.String,java.lang.Double>(other.requested_generic_resources);
+      this.requested_generic_resources = __this__requested_generic_resources;
+    }
+    if (other.is_set_assigned_generic_resources()) {
+      java.util.Map<java.lang.String,java.lang.Double> __this__assigned_generic_resources = new java.util.HashMap<java.lang.String,java.lang.Double>(other.assigned_generic_resources);
+      this.assigned_generic_resources = __this__assigned_generic_resources;
+    }
   }
 
   public TopologyPageInfo deepCopy() {
@@ -508,6 +534,8 @@
     this.assigned_regular_off_heap_memory = 0.0;
     set_assigned_shared_off_heap_memory_isSet(false);
     this.assigned_shared_off_heap_memory = 0.0;
+    this.requested_generic_resources = null;
+    this.assigned_generic_resources = null;
   }
 
   @org.apache.storm.thrift.annotation.Nullable
@@ -1278,6 +1306,76 @@
     __isset_bitfield = org.apache.storm.thrift.EncodingUtils.setBit(__isset_bitfield, __ASSIGNED_SHARED_OFF_HEAP_MEMORY_ISSET_ID, value);
   }
 
+  public int get_requested_generic_resources_size() {
+    return (this.requested_generic_resources == null) ? 0 : this.requested_generic_resources.size();
+  }
+
+  public void put_to_requested_generic_resources(java.lang.String key, double val) {
+    if (this.requested_generic_resources == null) {
+      this.requested_generic_resources = new java.util.HashMap<java.lang.String,java.lang.Double>();
+    }
+    this.requested_generic_resources.put(key, val);
+  }
+
+  @org.apache.storm.thrift.annotation.Nullable
+  public java.util.Map<java.lang.String,java.lang.Double> get_requested_generic_resources() {
+    return this.requested_generic_resources;
+  }
+
+  public void set_requested_generic_resources(@org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Double> requested_generic_resources) {
+    this.requested_generic_resources = requested_generic_resources;
+  }
+
+  public void unset_requested_generic_resources() {
+    this.requested_generic_resources = null;
+  }
+
+  /** Returns true if field requested_generic_resources is set (has been assigned a value) and false otherwise */
+  public boolean is_set_requested_generic_resources() {
+    return this.requested_generic_resources != null;
+  }
+
+  public void set_requested_generic_resources_isSet(boolean value) {
+    if (!value) {
+      this.requested_generic_resources = null;
+    }
+  }
+
+  public int get_assigned_generic_resources_size() {
+    return (this.assigned_generic_resources == null) ? 0 : this.assigned_generic_resources.size();
+  }
+
+  public void put_to_assigned_generic_resources(java.lang.String key, double val) {
+    if (this.assigned_generic_resources == null) {
+      this.assigned_generic_resources = new java.util.HashMap<java.lang.String,java.lang.Double>();
+    }
+    this.assigned_generic_resources.put(key, val);
+  }
+
+  @org.apache.storm.thrift.annotation.Nullable
+  public java.util.Map<java.lang.String,java.lang.Double> get_assigned_generic_resources() {
+    return this.assigned_generic_resources;
+  }
+
+  public void set_assigned_generic_resources(@org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Double> assigned_generic_resources) {
+    this.assigned_generic_resources = assigned_generic_resources;
+  }
+
+  public void unset_assigned_generic_resources() {
+    this.assigned_generic_resources = null;
+  }
+
+  /** Returns true if field assigned_generic_resources is set (has been assigned a value) and false otherwise */
+  public boolean is_set_assigned_generic_resources() {
+    return this.assigned_generic_resources != null;
+  }
+
+  public void set_assigned_generic_resources_isSet(boolean value) {
+    if (!value) {
+      this.assigned_generic_resources = null;
+    }
+  }
+
   public void setFieldValue(_Fields field, @org.apache.storm.thrift.annotation.Nullable java.lang.Object value) {
     switch (field) {
     case ID:
@@ -1536,6 +1634,22 @@
       }
       break;
 
+    case REQUESTED_GENERIC_RESOURCES:
+      if (value == null) {
+        unset_requested_generic_resources();
+      } else {
+        set_requested_generic_resources((java.util.Map<java.lang.String,java.lang.Double>)value);
+      }
+      break;
+
+    case ASSIGNED_GENERIC_RESOURCES:
+      if (value == null) {
+        unset_assigned_generic_resources();
+      } else {
+        set_assigned_generic_resources((java.util.Map<java.lang.String,java.lang.Double>)value);
+      }
+      break;
+
     }
   }
 
@@ -1638,6 +1752,12 @@
     case ASSIGNED_SHARED_OFF_HEAP_MEMORY:
       return get_assigned_shared_off_heap_memory();
 
+    case REQUESTED_GENERIC_RESOURCES:
+      return get_requested_generic_resources();
+
+    case ASSIGNED_GENERIC_RESOURCES:
+      return get_assigned_generic_resources();
+
     }
     throw new java.lang.IllegalStateException();
   }
@@ -1713,6 +1833,10 @@
       return is_set_assigned_regular_off_heap_memory();
     case ASSIGNED_SHARED_OFF_HEAP_MEMORY:
       return is_set_assigned_shared_off_heap_memory();
+    case REQUESTED_GENERIC_RESOURCES:
+      return is_set_requested_generic_resources();
+    case ASSIGNED_GENERIC_RESOURCES:
+      return is_set_assigned_generic_resources();
     }
     throw new java.lang.IllegalStateException();
   }
@@ -2020,6 +2144,24 @@
         return false;
     }
 
+    boolean this_present_requested_generic_resources = true && this.is_set_requested_generic_resources();
+    boolean that_present_requested_generic_resources = true && that.is_set_requested_generic_resources();
+    if (this_present_requested_generic_resources || that_present_requested_generic_resources) {
+      if (!(this_present_requested_generic_resources && that_present_requested_generic_resources))
+        return false;
+      if (!this.requested_generic_resources.equals(that.requested_generic_resources))
+        return false;
+    }
+
+    boolean this_present_assigned_generic_resources = true && this.is_set_assigned_generic_resources();
+    boolean that_present_assigned_generic_resources = true && that.is_set_assigned_generic_resources();
+    if (this_present_assigned_generic_resources || that_present_assigned_generic_resources) {
+      if (!(this_present_assigned_generic_resources && that_present_assigned_generic_resources))
+        return false;
+      if (!this.assigned_generic_resources.equals(that.assigned_generic_resources))
+        return false;
+    }
+
     return true;
   }
 
@@ -2155,6 +2297,14 @@
     if (is_set_assigned_shared_off_heap_memory())
       hashCode = hashCode * 8191 + org.apache.storm.thrift.TBaseHelper.hashCode(assigned_shared_off_heap_memory);
 
+    hashCode = hashCode * 8191 + ((is_set_requested_generic_resources()) ? 131071 : 524287);
+    if (is_set_requested_generic_resources())
+      hashCode = hashCode * 8191 + requested_generic_resources.hashCode();
+
+    hashCode = hashCode * 8191 + ((is_set_assigned_generic_resources()) ? 131071 : 524287);
+    if (is_set_assigned_generic_resources())
+      hashCode = hashCode * 8191 + assigned_generic_resources.hashCode();
+
     return hashCode;
   }
 
@@ -2486,6 +2636,26 @@
         return lastComparison;
       }
     }
+    lastComparison = java.lang.Boolean.valueOf(is_set_requested_generic_resources()).compareTo(other.is_set_requested_generic_resources());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    if (is_set_requested_generic_resources()) {
+      lastComparison = org.apache.storm.thrift.TBaseHelper.compareTo(this.requested_generic_resources, other.requested_generic_resources);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+    }
+    lastComparison = java.lang.Boolean.valueOf(is_set_assigned_generic_resources()).compareTo(other.is_set_assigned_generic_resources());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    if (is_set_assigned_generic_resources()) {
+      lastComparison = org.apache.storm.thrift.TBaseHelper.compareTo(this.assigned_generic_resources, other.assigned_generic_resources);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+    }
     return 0;
   }
 
@@ -2748,6 +2918,26 @@
       sb.append(this.assigned_shared_off_heap_memory);
       first = false;
     }
+    if (is_set_requested_generic_resources()) {
+      if (!first) sb.append(", ");
+      sb.append("requested_generic_resources:");
+      if (this.requested_generic_resources == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.requested_generic_resources);
+      }
+      first = false;
+    }
+    if (is_set_assigned_generic_resources()) {
+      if (!first) sb.append(", ");
+      sb.append("assigned_generic_resources:");
+      if (this.assigned_generic_resources == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.assigned_generic_resources);
+      }
+      first = false;
+    }
     sb.append(")");
     return sb.toString();
   }
@@ -2870,16 +3060,16 @@
           case 9: // ID_TO_SPOUT_AGG_STATS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map482 = iprot.readMapBegin();
-                struct.id_to_spout_agg_stats = new java.util.HashMap<java.lang.String,ComponentAggregateStats>(2*_map482.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key483;
-                @org.apache.storm.thrift.annotation.Nullable ComponentAggregateStats _val484;
-                for (int _i485 = 0; _i485 < _map482.size; ++_i485)
+                org.apache.storm.thrift.protocol.TMap _map512 = iprot.readMapBegin();
+                struct.id_to_spout_agg_stats = new java.util.HashMap<java.lang.String,ComponentAggregateStats>(2*_map512.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key513;
+                @org.apache.storm.thrift.annotation.Nullable ComponentAggregateStats _val514;
+                for (int _i515 = 0; _i515 < _map512.size; ++_i515)
                 {
-                  _key483 = iprot.readString();
-                  _val484 = new ComponentAggregateStats();
-                  _val484.read(iprot);
-                  struct.id_to_spout_agg_stats.put(_key483, _val484);
+                  _key513 = iprot.readString();
+                  _val514 = new ComponentAggregateStats();
+                  _val514.read(iprot);
+                  struct.id_to_spout_agg_stats.put(_key513, _val514);
                 }
                 iprot.readMapEnd();
               }
@@ -2891,16 +3081,16 @@
           case 10: // ID_TO_BOLT_AGG_STATS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map486 = iprot.readMapBegin();
-                struct.id_to_bolt_agg_stats = new java.util.HashMap<java.lang.String,ComponentAggregateStats>(2*_map486.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key487;
-                @org.apache.storm.thrift.annotation.Nullable ComponentAggregateStats _val488;
-                for (int _i489 = 0; _i489 < _map486.size; ++_i489)
+                org.apache.storm.thrift.protocol.TMap _map516 = iprot.readMapBegin();
+                struct.id_to_bolt_agg_stats = new java.util.HashMap<java.lang.String,ComponentAggregateStats>(2*_map516.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key517;
+                @org.apache.storm.thrift.annotation.Nullable ComponentAggregateStats _val518;
+                for (int _i519 = 0; _i519 < _map516.size; ++_i519)
                 {
-                  _key487 = iprot.readString();
-                  _val488 = new ComponentAggregateStats();
-                  _val488.read(iprot);
-                  struct.id_to_bolt_agg_stats.put(_key487, _val488);
+                  _key517 = iprot.readString();
+                  _val518 = new ComponentAggregateStats();
+                  _val518.read(iprot);
+                  struct.id_to_bolt_agg_stats.put(_key517, _val518);
                 }
                 iprot.readMapEnd();
               }
@@ -2954,14 +3144,14 @@
           case 16: // WORKERS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.LIST) {
               {
-                org.apache.storm.thrift.protocol.TList _list490 = iprot.readListBegin();
-                struct.workers = new java.util.ArrayList<WorkerSummary>(_list490.size);
-                @org.apache.storm.thrift.annotation.Nullable WorkerSummary _elem491;
-                for (int _i492 = 0; _i492 < _list490.size; ++_i492)
+                org.apache.storm.thrift.protocol.TList _list520 = iprot.readListBegin();
+                struct.workers = new java.util.ArrayList<WorkerSummary>(_list520.size);
+                @org.apache.storm.thrift.annotation.Nullable WorkerSummary _elem521;
+                for (int _i522 = 0; _i522 < _list520.size; ++_i522)
                 {
-                  _elem491 = new WorkerSummary();
-                  _elem491.read(iprot);
-                  struct.workers.add(_elem491);
+                  _elem521 = new WorkerSummary();
+                  _elem521.read(iprot);
+                  struct.workers.add(_elem521);
                 }
                 iprot.readListEnd();
               }
@@ -3098,6 +3288,46 @@
               org.apache.storm.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
             }
             break;
+          case 535: // REQUESTED_GENERIC_RESOURCES
+            if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
+              {
+                org.apache.storm.thrift.protocol.TMap _map523 = iprot.readMapBegin();
+                struct.requested_generic_resources = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map523.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key524;
+                double _val525;
+                for (int _i526 = 0; _i526 < _map523.size; ++_i526)
+                {
+                  _key524 = iprot.readString();
+                  _val525 = iprot.readDouble();
+                  struct.requested_generic_resources.put(_key524, _val525);
+                }
+                iprot.readMapEnd();
+              }
+              struct.set_requested_generic_resources_isSet(true);
+            } else { 
+              org.apache.storm.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+            }
+            break;
+          case 536: // ASSIGNED_GENERIC_RESOURCES
+            if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
+              {
+                org.apache.storm.thrift.protocol.TMap _map527 = iprot.readMapBegin();
+                struct.assigned_generic_resources = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map527.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key528;
+                double _val529;
+                for (int _i530 = 0; _i530 < _map527.size; ++_i530)
+                {
+                  _key528 = iprot.readString();
+                  _val529 = iprot.readDouble();
+                  struct.assigned_generic_resources.put(_key528, _val529);
+                }
+                iprot.readMapEnd();
+              }
+              struct.set_assigned_generic_resources_isSet(true);
+            } else { 
+              org.apache.storm.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+            }
+            break;
           default:
             org.apache.storm.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
         }
@@ -3162,10 +3392,10 @@
           oprot.writeFieldBegin(ID_TO_SPOUT_AGG_STATS_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRUCT, struct.id_to_spout_agg_stats.size()));
-            for (java.util.Map.Entry<java.lang.String, ComponentAggregateStats> _iter493 : struct.id_to_spout_agg_stats.entrySet())
+            for (java.util.Map.Entry<java.lang.String, ComponentAggregateStats> _iter531 : struct.id_to_spout_agg_stats.entrySet())
             {
-              oprot.writeString(_iter493.getKey());
-              _iter493.getValue().write(oprot);
+              oprot.writeString(_iter531.getKey());
+              _iter531.getValue().write(oprot);
             }
             oprot.writeMapEnd();
           }
@@ -3177,10 +3407,10 @@
           oprot.writeFieldBegin(ID_TO_BOLT_AGG_STATS_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRUCT, struct.id_to_bolt_agg_stats.size()));
-            for (java.util.Map.Entry<java.lang.String, ComponentAggregateStats> _iter494 : struct.id_to_bolt_agg_stats.entrySet())
+            for (java.util.Map.Entry<java.lang.String, ComponentAggregateStats> _iter532 : struct.id_to_bolt_agg_stats.entrySet())
             {
-              oprot.writeString(_iter494.getKey());
-              _iter494.getValue().write(oprot);
+              oprot.writeString(_iter532.getKey());
+              _iter532.getValue().write(oprot);
             }
             oprot.writeMapEnd();
           }
@@ -3225,9 +3455,9 @@
           oprot.writeFieldBegin(WORKERS_FIELD_DESC);
           {
             oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, struct.workers.size()));
-            for (WorkerSummary _iter495 : struct.workers)
+            for (WorkerSummary _iter533 : struct.workers)
             {
-              _iter495.write(oprot);
+              _iter533.write(oprot);
             }
             oprot.writeListEnd();
           }
@@ -3318,6 +3548,36 @@
         oprot.writeDouble(struct.assigned_shared_off_heap_memory);
         oprot.writeFieldEnd();
       }
+      if (struct.requested_generic_resources != null) {
+        if (struct.is_set_requested_generic_resources()) {
+          oprot.writeFieldBegin(REQUESTED_GENERIC_RESOURCES_FIELD_DESC);
+          {
+            oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, struct.requested_generic_resources.size()));
+            for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter534 : struct.requested_generic_resources.entrySet())
+            {
+              oprot.writeString(_iter534.getKey());
+              oprot.writeDouble(_iter534.getValue());
+            }
+            oprot.writeMapEnd();
+          }
+          oprot.writeFieldEnd();
+        }
+      }
+      if (struct.assigned_generic_resources != null) {
+        if (struct.is_set_assigned_generic_resources()) {
+          oprot.writeFieldBegin(ASSIGNED_GENERIC_RESOURCES_FIELD_DESC);
+          {
+            oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, struct.assigned_generic_resources.size()));
+            for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter535 : struct.assigned_generic_resources.entrySet())
+            {
+              oprot.writeString(_iter535.getKey());
+              oprot.writeDouble(_iter535.getValue());
+            }
+            oprot.writeMapEnd();
+          }
+          oprot.writeFieldEnd();
+        }
+      }
       oprot.writeFieldStop();
       oprot.writeStructEnd();
     }
@@ -3430,7 +3690,13 @@
       if (struct.is_set_assigned_shared_off_heap_memory()) {
         optionals.set(30);
       }
-      oprot.writeBitSet(optionals, 31);
+      if (struct.is_set_requested_generic_resources()) {
+        optionals.set(31);
+      }
+      if (struct.is_set_assigned_generic_resources()) {
+        optionals.set(32);
+      }
+      oprot.writeBitSet(optionals, 33);
       if (struct.is_set_name()) {
         oprot.writeString(struct.name);
       }
@@ -3455,20 +3721,20 @@
       if (struct.is_set_id_to_spout_agg_stats()) {
         {
           oprot.writeI32(struct.id_to_spout_agg_stats.size());
-          for (java.util.Map.Entry<java.lang.String, ComponentAggregateStats> _iter496 : struct.id_to_spout_agg_stats.entrySet())
+          for (java.util.Map.Entry<java.lang.String, ComponentAggregateStats> _iter536 : struct.id_to_spout_agg_stats.entrySet())
           {
-            oprot.writeString(_iter496.getKey());
-            _iter496.getValue().write(oprot);
+            oprot.writeString(_iter536.getKey());
+            _iter536.getValue().write(oprot);
           }
         }
       }
       if (struct.is_set_id_to_bolt_agg_stats()) {
         {
           oprot.writeI32(struct.id_to_bolt_agg_stats.size());
-          for (java.util.Map.Entry<java.lang.String, ComponentAggregateStats> _iter497 : struct.id_to_bolt_agg_stats.entrySet())
+          for (java.util.Map.Entry<java.lang.String, ComponentAggregateStats> _iter537 : struct.id_to_bolt_agg_stats.entrySet())
           {
-            oprot.writeString(_iter497.getKey());
-            _iter497.getValue().write(oprot);
+            oprot.writeString(_iter537.getKey());
+            _iter537.getValue().write(oprot);
           }
         }
       }
@@ -3490,9 +3756,9 @@
       if (struct.is_set_workers()) {
         {
           oprot.writeI32(struct.workers.size());
-          for (WorkerSummary _iter498 : struct.workers)
+          for (WorkerSummary _iter538 : struct.workers)
           {
-            _iter498.write(oprot);
+            _iter538.write(oprot);
           }
         }
       }
@@ -3544,6 +3810,26 @@
       if (struct.is_set_assigned_shared_off_heap_memory()) {
         oprot.writeDouble(struct.assigned_shared_off_heap_memory);
       }
+      if (struct.is_set_requested_generic_resources()) {
+        {
+          oprot.writeI32(struct.requested_generic_resources.size());
+          for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter539 : struct.requested_generic_resources.entrySet())
+          {
+            oprot.writeString(_iter539.getKey());
+            oprot.writeDouble(_iter539.getValue());
+          }
+        }
+      }
+      if (struct.is_set_assigned_generic_resources()) {
+        {
+          oprot.writeI32(struct.assigned_generic_resources.size());
+          for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter540 : struct.assigned_generic_resources.entrySet())
+          {
+            oprot.writeString(_iter540.getKey());
+            oprot.writeDouble(_iter540.getValue());
+          }
+        }
+      }
     }
 
     @Override
@@ -3551,7 +3837,7 @@
       org.apache.storm.thrift.protocol.TTupleProtocol iprot = (org.apache.storm.thrift.protocol.TTupleProtocol) prot;
       struct.id = iprot.readString();
       struct.set_id_isSet(true);
-      java.util.BitSet incoming = iprot.readBitSet(31);
+      java.util.BitSet incoming = iprot.readBitSet(33);
       if (incoming.get(0)) {
         struct.name = iprot.readString();
         struct.set_name_isSet(true);
@@ -3582,32 +3868,32 @@
       }
       if (incoming.get(7)) {
         {
-          org.apache.storm.thrift.protocol.TMap _map499 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.id_to_spout_agg_stats = new java.util.HashMap<java.lang.String,ComponentAggregateStats>(2*_map499.size);
-          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key500;
-          @org.apache.storm.thrift.annotation.Nullable ComponentAggregateStats _val501;
-          for (int _i502 = 0; _i502 < _map499.size; ++_i502)
+          org.apache.storm.thrift.protocol.TMap _map541 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.id_to_spout_agg_stats = new java.util.HashMap<java.lang.String,ComponentAggregateStats>(2*_map541.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key542;
+          @org.apache.storm.thrift.annotation.Nullable ComponentAggregateStats _val543;
+          for (int _i544 = 0; _i544 < _map541.size; ++_i544)
           {
-            _key500 = iprot.readString();
-            _val501 = new ComponentAggregateStats();
-            _val501.read(iprot);
-            struct.id_to_spout_agg_stats.put(_key500, _val501);
+            _key542 = iprot.readString();
+            _val543 = new ComponentAggregateStats();
+            _val543.read(iprot);
+            struct.id_to_spout_agg_stats.put(_key542, _val543);
           }
         }
         struct.set_id_to_spout_agg_stats_isSet(true);
       }
       if (incoming.get(8)) {
         {
-          org.apache.storm.thrift.protocol.TMap _map503 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.id_to_bolt_agg_stats = new java.util.HashMap<java.lang.String,ComponentAggregateStats>(2*_map503.size);
-          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key504;
-          @org.apache.storm.thrift.annotation.Nullable ComponentAggregateStats _val505;
-          for (int _i506 = 0; _i506 < _map503.size; ++_i506)
+          org.apache.storm.thrift.protocol.TMap _map545 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.id_to_bolt_agg_stats = new java.util.HashMap<java.lang.String,ComponentAggregateStats>(2*_map545.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key546;
+          @org.apache.storm.thrift.annotation.Nullable ComponentAggregateStats _val547;
+          for (int _i548 = 0; _i548 < _map545.size; ++_i548)
           {
-            _key504 = iprot.readString();
-            _val505 = new ComponentAggregateStats();
-            _val505.read(iprot);
-            struct.id_to_bolt_agg_stats.put(_key504, _val505);
+            _key546 = iprot.readString();
+            _val547 = new ComponentAggregateStats();
+            _val547.read(iprot);
+            struct.id_to_bolt_agg_stats.put(_key546, _val547);
           }
         }
         struct.set_id_to_bolt_agg_stats_isSet(true);
@@ -3636,14 +3922,14 @@
       }
       if (incoming.get(14)) {
         {
-          org.apache.storm.thrift.protocol.TList _list507 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.workers = new java.util.ArrayList<WorkerSummary>(_list507.size);
-          @org.apache.storm.thrift.annotation.Nullable WorkerSummary _elem508;
-          for (int _i509 = 0; _i509 < _list507.size; ++_i509)
+          org.apache.storm.thrift.protocol.TList _list549 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.workers = new java.util.ArrayList<WorkerSummary>(_list549.size);
+          @org.apache.storm.thrift.annotation.Nullable WorkerSummary _elem550;
+          for (int _i551 = 0; _i551 < _list549.size; ++_i551)
           {
-            _elem508 = new WorkerSummary();
-            _elem508.read(iprot);
-            struct.workers.add(_elem508);
+            _elem550 = new WorkerSummary();
+            _elem550.read(iprot);
+            struct.workers.add(_elem550);
           }
         }
         struct.set_workers_isSet(true);
@@ -3712,6 +3998,36 @@
         struct.assigned_shared_off_heap_memory = iprot.readDouble();
         struct.set_assigned_shared_off_heap_memory_isSet(true);
       }
+      if (incoming.get(31)) {
+        {
+          org.apache.storm.thrift.protocol.TMap _map552 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, iprot.readI32());
+          struct.requested_generic_resources = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map552.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key553;
+          double _val554;
+          for (int _i555 = 0; _i555 < _map552.size; ++_i555)
+          {
+            _key553 = iprot.readString();
+            _val554 = iprot.readDouble();
+            struct.requested_generic_resources.put(_key553, _val554);
+          }
+        }
+        struct.set_requested_generic_resources_isSet(true);
+      }
+      if (incoming.get(32)) {
+        {
+          org.apache.storm.thrift.protocol.TMap _map556 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, iprot.readI32());
+          struct.assigned_generic_resources = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map556.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key557;
+          double _val558;
+          for (int _i559 = 0; _i559 < _map556.size; ++_i559)
+          {
+            _key557 = iprot.readString();
+            _val558 = iprot.readDouble();
+            struct.assigned_generic_resources.put(_key557, _val558);
+          }
+        }
+        struct.set_assigned_generic_resources_isSet(true);
+      }
     }
   }
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/TopologyStats.java b/storm-client/src/jvm/org/apache/storm/generated/TopologyStats.java
index de7306e..d0f399f 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/TopologyStats.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/TopologyStats.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class TopologyStats implements org.apache.storm.thrift.TBase<TopologyStats, TopologyStats._Fields>, java.io.Serializable, Cloneable, Comparable<TopologyStats> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("TopologyStats");
 
@@ -713,15 +713,15 @@
           case 1: // WINDOW_TO_EMITTED
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map406 = iprot.readMapBegin();
-                struct.window_to_emitted = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map406.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key407;
-                long _val408;
-                for (int _i409 = 0; _i409 < _map406.size; ++_i409)
+                org.apache.storm.thrift.protocol.TMap _map436 = iprot.readMapBegin();
+                struct.window_to_emitted = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map436.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key437;
+                long _val438;
+                for (int _i439 = 0; _i439 < _map436.size; ++_i439)
                 {
-                  _key407 = iprot.readString();
-                  _val408 = iprot.readI64();
-                  struct.window_to_emitted.put(_key407, _val408);
+                  _key437 = iprot.readString();
+                  _val438 = iprot.readI64();
+                  struct.window_to_emitted.put(_key437, _val438);
                 }
                 iprot.readMapEnd();
               }
@@ -733,15 +733,15 @@
           case 2: // WINDOW_TO_TRANSFERRED
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map410 = iprot.readMapBegin();
-                struct.window_to_transferred = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map410.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key411;
-                long _val412;
-                for (int _i413 = 0; _i413 < _map410.size; ++_i413)
+                org.apache.storm.thrift.protocol.TMap _map440 = iprot.readMapBegin();
+                struct.window_to_transferred = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map440.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key441;
+                long _val442;
+                for (int _i443 = 0; _i443 < _map440.size; ++_i443)
                 {
-                  _key411 = iprot.readString();
-                  _val412 = iprot.readI64();
-                  struct.window_to_transferred.put(_key411, _val412);
+                  _key441 = iprot.readString();
+                  _val442 = iprot.readI64();
+                  struct.window_to_transferred.put(_key441, _val442);
                 }
                 iprot.readMapEnd();
               }
@@ -753,15 +753,15 @@
           case 3: // WINDOW_TO_COMPLETE_LATENCIES_MS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map414 = iprot.readMapBegin();
-                struct.window_to_complete_latencies_ms = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map414.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key415;
-                double _val416;
-                for (int _i417 = 0; _i417 < _map414.size; ++_i417)
+                org.apache.storm.thrift.protocol.TMap _map444 = iprot.readMapBegin();
+                struct.window_to_complete_latencies_ms = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map444.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key445;
+                double _val446;
+                for (int _i447 = 0; _i447 < _map444.size; ++_i447)
                 {
-                  _key415 = iprot.readString();
-                  _val416 = iprot.readDouble();
-                  struct.window_to_complete_latencies_ms.put(_key415, _val416);
+                  _key445 = iprot.readString();
+                  _val446 = iprot.readDouble();
+                  struct.window_to_complete_latencies_ms.put(_key445, _val446);
                 }
                 iprot.readMapEnd();
               }
@@ -773,15 +773,15 @@
           case 4: // WINDOW_TO_ACKED
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map418 = iprot.readMapBegin();
-                struct.window_to_acked = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map418.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key419;
-                long _val420;
-                for (int _i421 = 0; _i421 < _map418.size; ++_i421)
+                org.apache.storm.thrift.protocol.TMap _map448 = iprot.readMapBegin();
+                struct.window_to_acked = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map448.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key449;
+                long _val450;
+                for (int _i451 = 0; _i451 < _map448.size; ++_i451)
                 {
-                  _key419 = iprot.readString();
-                  _val420 = iprot.readI64();
-                  struct.window_to_acked.put(_key419, _val420);
+                  _key449 = iprot.readString();
+                  _val450 = iprot.readI64();
+                  struct.window_to_acked.put(_key449, _val450);
                 }
                 iprot.readMapEnd();
               }
@@ -793,15 +793,15 @@
           case 5: // WINDOW_TO_FAILED
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map422 = iprot.readMapBegin();
-                struct.window_to_failed = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map422.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key423;
-                long _val424;
-                for (int _i425 = 0; _i425 < _map422.size; ++_i425)
+                org.apache.storm.thrift.protocol.TMap _map452 = iprot.readMapBegin();
+                struct.window_to_failed = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map452.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key453;
+                long _val454;
+                for (int _i455 = 0; _i455 < _map452.size; ++_i455)
                 {
-                  _key423 = iprot.readString();
-                  _val424 = iprot.readI64();
-                  struct.window_to_failed.put(_key423, _val424);
+                  _key453 = iprot.readString();
+                  _val454 = iprot.readI64();
+                  struct.window_to_failed.put(_key453, _val454);
                 }
                 iprot.readMapEnd();
               }
@@ -828,10 +828,10 @@
           oprot.writeFieldBegin(WINDOW_TO_EMITTED_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, struct.window_to_emitted.size()));
-            for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter426 : struct.window_to_emitted.entrySet())
+            for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter456 : struct.window_to_emitted.entrySet())
             {
-              oprot.writeString(_iter426.getKey());
-              oprot.writeI64(_iter426.getValue());
+              oprot.writeString(_iter456.getKey());
+              oprot.writeI64(_iter456.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -843,10 +843,10 @@
           oprot.writeFieldBegin(WINDOW_TO_TRANSFERRED_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, struct.window_to_transferred.size()));
-            for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter427 : struct.window_to_transferred.entrySet())
+            for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter457 : struct.window_to_transferred.entrySet())
             {
-              oprot.writeString(_iter427.getKey());
-              oprot.writeI64(_iter427.getValue());
+              oprot.writeString(_iter457.getKey());
+              oprot.writeI64(_iter457.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -858,10 +858,10 @@
           oprot.writeFieldBegin(WINDOW_TO_COMPLETE_LATENCIES_MS_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, struct.window_to_complete_latencies_ms.size()));
-            for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter428 : struct.window_to_complete_latencies_ms.entrySet())
+            for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter458 : struct.window_to_complete_latencies_ms.entrySet())
             {
-              oprot.writeString(_iter428.getKey());
-              oprot.writeDouble(_iter428.getValue());
+              oprot.writeString(_iter458.getKey());
+              oprot.writeDouble(_iter458.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -873,10 +873,10 @@
           oprot.writeFieldBegin(WINDOW_TO_ACKED_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, struct.window_to_acked.size()));
-            for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter429 : struct.window_to_acked.entrySet())
+            for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter459 : struct.window_to_acked.entrySet())
             {
-              oprot.writeString(_iter429.getKey());
-              oprot.writeI64(_iter429.getValue());
+              oprot.writeString(_iter459.getKey());
+              oprot.writeI64(_iter459.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -888,10 +888,10 @@
           oprot.writeFieldBegin(WINDOW_TO_FAILED_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, struct.window_to_failed.size()));
-            for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter430 : struct.window_to_failed.entrySet())
+            for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter460 : struct.window_to_failed.entrySet())
             {
-              oprot.writeString(_iter430.getKey());
-              oprot.writeI64(_iter430.getValue());
+              oprot.writeString(_iter460.getKey());
+              oprot.writeI64(_iter460.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -935,50 +935,50 @@
       if (struct.is_set_window_to_emitted()) {
         {
           oprot.writeI32(struct.window_to_emitted.size());
-          for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter431 : struct.window_to_emitted.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter461 : struct.window_to_emitted.entrySet())
           {
-            oprot.writeString(_iter431.getKey());
-            oprot.writeI64(_iter431.getValue());
+            oprot.writeString(_iter461.getKey());
+            oprot.writeI64(_iter461.getValue());
           }
         }
       }
       if (struct.is_set_window_to_transferred()) {
         {
           oprot.writeI32(struct.window_to_transferred.size());
-          for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter432 : struct.window_to_transferred.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter462 : struct.window_to_transferred.entrySet())
           {
-            oprot.writeString(_iter432.getKey());
-            oprot.writeI64(_iter432.getValue());
+            oprot.writeString(_iter462.getKey());
+            oprot.writeI64(_iter462.getValue());
           }
         }
       }
       if (struct.is_set_window_to_complete_latencies_ms()) {
         {
           oprot.writeI32(struct.window_to_complete_latencies_ms.size());
-          for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter433 : struct.window_to_complete_latencies_ms.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter463 : struct.window_to_complete_latencies_ms.entrySet())
           {
-            oprot.writeString(_iter433.getKey());
-            oprot.writeDouble(_iter433.getValue());
+            oprot.writeString(_iter463.getKey());
+            oprot.writeDouble(_iter463.getValue());
           }
         }
       }
       if (struct.is_set_window_to_acked()) {
         {
           oprot.writeI32(struct.window_to_acked.size());
-          for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter434 : struct.window_to_acked.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter464 : struct.window_to_acked.entrySet())
           {
-            oprot.writeString(_iter434.getKey());
-            oprot.writeI64(_iter434.getValue());
+            oprot.writeString(_iter464.getKey());
+            oprot.writeI64(_iter464.getValue());
           }
         }
       }
       if (struct.is_set_window_to_failed()) {
         {
           oprot.writeI32(struct.window_to_failed.size());
-          for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter435 : struct.window_to_failed.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter465 : struct.window_to_failed.entrySet())
           {
-            oprot.writeString(_iter435.getKey());
-            oprot.writeI64(_iter435.getValue());
+            oprot.writeString(_iter465.getKey());
+            oprot.writeI64(_iter465.getValue());
           }
         }
       }
@@ -990,75 +990,75 @@
       java.util.BitSet incoming = iprot.readBitSet(5);
       if (incoming.get(0)) {
         {
-          org.apache.storm.thrift.protocol.TMap _map436 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
-          struct.window_to_emitted = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map436.size);
-          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key437;
-          long _val438;
-          for (int _i439 = 0; _i439 < _map436.size; ++_i439)
+          org.apache.storm.thrift.protocol.TMap _map466 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
+          struct.window_to_emitted = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map466.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key467;
+          long _val468;
+          for (int _i469 = 0; _i469 < _map466.size; ++_i469)
           {
-            _key437 = iprot.readString();
-            _val438 = iprot.readI64();
-            struct.window_to_emitted.put(_key437, _val438);
+            _key467 = iprot.readString();
+            _val468 = iprot.readI64();
+            struct.window_to_emitted.put(_key467, _val468);
           }
         }
         struct.set_window_to_emitted_isSet(true);
       }
       if (incoming.get(1)) {
         {
-          org.apache.storm.thrift.protocol.TMap _map440 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
-          struct.window_to_transferred = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map440.size);
-          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key441;
-          long _val442;
-          for (int _i443 = 0; _i443 < _map440.size; ++_i443)
+          org.apache.storm.thrift.protocol.TMap _map470 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
+          struct.window_to_transferred = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map470.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key471;
+          long _val472;
+          for (int _i473 = 0; _i473 < _map470.size; ++_i473)
           {
-            _key441 = iprot.readString();
-            _val442 = iprot.readI64();
-            struct.window_to_transferred.put(_key441, _val442);
+            _key471 = iprot.readString();
+            _val472 = iprot.readI64();
+            struct.window_to_transferred.put(_key471, _val472);
           }
         }
         struct.set_window_to_transferred_isSet(true);
       }
       if (incoming.get(2)) {
         {
-          org.apache.storm.thrift.protocol.TMap _map444 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, iprot.readI32());
-          struct.window_to_complete_latencies_ms = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map444.size);
-          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key445;
-          double _val446;
-          for (int _i447 = 0; _i447 < _map444.size; ++_i447)
+          org.apache.storm.thrift.protocol.TMap _map474 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, iprot.readI32());
+          struct.window_to_complete_latencies_ms = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map474.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key475;
+          double _val476;
+          for (int _i477 = 0; _i477 < _map474.size; ++_i477)
           {
-            _key445 = iprot.readString();
-            _val446 = iprot.readDouble();
-            struct.window_to_complete_latencies_ms.put(_key445, _val446);
+            _key475 = iprot.readString();
+            _val476 = iprot.readDouble();
+            struct.window_to_complete_latencies_ms.put(_key475, _val476);
           }
         }
         struct.set_window_to_complete_latencies_ms_isSet(true);
       }
       if (incoming.get(3)) {
         {
-          org.apache.storm.thrift.protocol.TMap _map448 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
-          struct.window_to_acked = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map448.size);
-          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key449;
-          long _val450;
-          for (int _i451 = 0; _i451 < _map448.size; ++_i451)
+          org.apache.storm.thrift.protocol.TMap _map478 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
+          struct.window_to_acked = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map478.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key479;
+          long _val480;
+          for (int _i481 = 0; _i481 < _map478.size; ++_i481)
           {
-            _key449 = iprot.readString();
-            _val450 = iprot.readI64();
-            struct.window_to_acked.put(_key449, _val450);
+            _key479 = iprot.readString();
+            _val480 = iprot.readI64();
+            struct.window_to_acked.put(_key479, _val480);
           }
         }
         struct.set_window_to_acked_isSet(true);
       }
       if (incoming.get(4)) {
         {
-          org.apache.storm.thrift.protocol.TMap _map452 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
-          struct.window_to_failed = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map452.size);
-          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key453;
-          long _val454;
-          for (int _i455 = 0; _i455 < _map452.size; ++_i455)
+          org.apache.storm.thrift.protocol.TMap _map482 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
+          struct.window_to_failed = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map482.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key483;
+          long _val484;
+          for (int _i485 = 0; _i485 < _map482.size; ++_i485)
           {
-            _key453 = iprot.readString();
-            _val454 = iprot.readI64();
-            struct.window_to_failed.put(_key453, _val454);
+            _key483 = iprot.readString();
+            _val484 = iprot.readI64();
+            struct.window_to_failed.put(_key483, _val484);
           }
         }
         struct.set_window_to_failed_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/TopologyStatus.java b/storm-client/src/jvm/org/apache/storm/generated/TopologyStatus.java
index b138865..4563334 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/TopologyStatus.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/TopologyStatus.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public enum TopologyStatus implements org.apache.storm.thrift.TEnum {
   ACTIVE(1),
   INACTIVE(2),
diff --git a/storm-client/src/jvm/org/apache/storm/generated/TopologySummary.java b/storm-client/src/jvm/org/apache/storm/generated/TopologySummary.java
index 656c8e1..e76e4a8 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/TopologySummary.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/TopologySummary.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class TopologySummary implements org.apache.storm.thrift.TBase<TopologySummary, TopologySummary._Fields>, java.io.Serializable, Cloneable, Comparable<TopologySummary> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("TopologySummary");
 
@@ -46,6 +46,8 @@
   private static final org.apache.storm.thrift.protocol.TField ASSIGNED_MEMONHEAP_FIELD_DESC = new org.apache.storm.thrift.protocol.TField("assigned_memonheap", org.apache.storm.thrift.protocol.TType.DOUBLE, (short)524);
   private static final org.apache.storm.thrift.protocol.TField ASSIGNED_MEMOFFHEAP_FIELD_DESC = new org.apache.storm.thrift.protocol.TField("assigned_memoffheap", org.apache.storm.thrift.protocol.TType.DOUBLE, (short)525);
   private static final org.apache.storm.thrift.protocol.TField ASSIGNED_CPU_FIELD_DESC = new org.apache.storm.thrift.protocol.TField("assigned_cpu", org.apache.storm.thrift.protocol.TType.DOUBLE, (short)526);
+  private static final org.apache.storm.thrift.protocol.TField REQUESTED_GENERIC_RESOURCES_FIELD_DESC = new org.apache.storm.thrift.protocol.TField("requested_generic_resources", org.apache.storm.thrift.protocol.TType.MAP, (short)527);
+  private static final org.apache.storm.thrift.protocol.TField ASSIGNED_GENERIC_RESOURCES_FIELD_DESC = new org.apache.storm.thrift.protocol.TField("assigned_generic_resources", org.apache.storm.thrift.protocol.TType.MAP, (short)528);
 
   private static final org.apache.storm.thrift.scheme.SchemeFactory STANDARD_SCHEME_FACTORY = new TopologySummaryStandardSchemeFactory();
   private static final org.apache.storm.thrift.scheme.SchemeFactory TUPLE_SCHEME_FACTORY = new TopologySummaryTupleSchemeFactory();
@@ -68,6 +70,8 @@
   private double assigned_memonheap; // optional
   private double assigned_memoffheap; // optional
   private double assigned_cpu; // optional
+  private @org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Double> requested_generic_resources; // optional
+  private @org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Double> assigned_generic_resources; // optional
 
   /** The set of fields this struct contains, along with convenience methods for finding and manipulating them. */
   public enum _Fields implements org.apache.storm.thrift.TFieldIdEnum {
@@ -88,7 +92,9 @@
     REQUESTED_CPU((short)523, "requested_cpu"),
     ASSIGNED_MEMONHEAP((short)524, "assigned_memonheap"),
     ASSIGNED_MEMOFFHEAP((short)525, "assigned_memoffheap"),
-    ASSIGNED_CPU((short)526, "assigned_cpu");
+    ASSIGNED_CPU((short)526, "assigned_cpu"),
+    REQUESTED_GENERIC_RESOURCES((short)527, "requested_generic_resources"),
+    ASSIGNED_GENERIC_RESOURCES((short)528, "assigned_generic_resources");
 
     private static final java.util.Map<java.lang.String, _Fields> byName = new java.util.HashMap<java.lang.String, _Fields>();
 
@@ -140,6 +146,10 @@
           return ASSIGNED_MEMOFFHEAP;
         case 526: // ASSIGNED_CPU
           return ASSIGNED_CPU;
+        case 527: // REQUESTED_GENERIC_RESOURCES
+          return REQUESTED_GENERIC_RESOURCES;
+        case 528: // ASSIGNED_GENERIC_RESOURCES
+          return ASSIGNED_GENERIC_RESOURCES;
         default:
           return null;
       }
@@ -193,7 +203,7 @@
   private static final int __ASSIGNED_MEMOFFHEAP_ISSET_ID = 9;
   private static final int __ASSIGNED_CPU_ISSET_ID = 10;
   private short __isset_bitfield = 0;
-  private static final _Fields optionals[] = {_Fields.STORM_VERSION,_Fields.TOPOLOGY_VERSION,_Fields.SCHED_STATUS,_Fields.OWNER,_Fields.REPLICATION_COUNT,_Fields.REQUESTED_MEMONHEAP,_Fields.REQUESTED_MEMOFFHEAP,_Fields.REQUESTED_CPU,_Fields.ASSIGNED_MEMONHEAP,_Fields.ASSIGNED_MEMOFFHEAP,_Fields.ASSIGNED_CPU};
+  private static final _Fields optionals[] = {_Fields.STORM_VERSION,_Fields.TOPOLOGY_VERSION,_Fields.SCHED_STATUS,_Fields.OWNER,_Fields.REPLICATION_COUNT,_Fields.REQUESTED_MEMONHEAP,_Fields.REQUESTED_MEMOFFHEAP,_Fields.REQUESTED_CPU,_Fields.ASSIGNED_MEMONHEAP,_Fields.ASSIGNED_MEMOFFHEAP,_Fields.ASSIGNED_CPU,_Fields.REQUESTED_GENERIC_RESOURCES,_Fields.ASSIGNED_GENERIC_RESOURCES};
   public static final java.util.Map<_Fields, org.apache.storm.thrift.meta_data.FieldMetaData> metaDataMap;
   static {
     java.util.Map<_Fields, org.apache.storm.thrift.meta_data.FieldMetaData> tmpMap = new java.util.EnumMap<_Fields, org.apache.storm.thrift.meta_data.FieldMetaData>(_Fields.class);
@@ -233,6 +243,14 @@
         new org.apache.storm.thrift.meta_data.FieldValueMetaData(org.apache.storm.thrift.protocol.TType.DOUBLE)));
     tmpMap.put(_Fields.ASSIGNED_CPU, new org.apache.storm.thrift.meta_data.FieldMetaData("assigned_cpu", org.apache.storm.thrift.TFieldRequirementType.OPTIONAL, 
         new org.apache.storm.thrift.meta_data.FieldValueMetaData(org.apache.storm.thrift.protocol.TType.DOUBLE)));
+    tmpMap.put(_Fields.REQUESTED_GENERIC_RESOURCES, new org.apache.storm.thrift.meta_data.FieldMetaData("requested_generic_resources", org.apache.storm.thrift.TFieldRequirementType.OPTIONAL, 
+        new org.apache.storm.thrift.meta_data.MapMetaData(org.apache.storm.thrift.protocol.TType.MAP, 
+            new org.apache.storm.thrift.meta_data.FieldValueMetaData(org.apache.storm.thrift.protocol.TType.STRING), 
+            new org.apache.storm.thrift.meta_data.FieldValueMetaData(org.apache.storm.thrift.protocol.TType.DOUBLE))));
+    tmpMap.put(_Fields.ASSIGNED_GENERIC_RESOURCES, new org.apache.storm.thrift.meta_data.FieldMetaData("assigned_generic_resources", org.apache.storm.thrift.TFieldRequirementType.OPTIONAL, 
+        new org.apache.storm.thrift.meta_data.MapMetaData(org.apache.storm.thrift.protocol.TType.MAP, 
+            new org.apache.storm.thrift.meta_data.FieldValueMetaData(org.apache.storm.thrift.protocol.TType.STRING), 
+            new org.apache.storm.thrift.meta_data.FieldValueMetaData(org.apache.storm.thrift.protocol.TType.DOUBLE))));
     metaDataMap = java.util.Collections.unmodifiableMap(tmpMap);
     org.apache.storm.thrift.meta_data.FieldMetaData.addStructMetaDataMap(TopologySummary.class, metaDataMap);
   }
@@ -300,6 +318,14 @@
     this.assigned_memonheap = other.assigned_memonheap;
     this.assigned_memoffheap = other.assigned_memoffheap;
     this.assigned_cpu = other.assigned_cpu;
+    if (other.is_set_requested_generic_resources()) {
+      java.util.Map<java.lang.String,java.lang.Double> __this__requested_generic_resources = new java.util.HashMap<java.lang.String,java.lang.Double>(other.requested_generic_resources);
+      this.requested_generic_resources = __this__requested_generic_resources;
+    }
+    if (other.is_set_assigned_generic_resources()) {
+      java.util.Map<java.lang.String,java.lang.Double> __this__assigned_generic_resources = new java.util.HashMap<java.lang.String,java.lang.Double>(other.assigned_generic_resources);
+      this.assigned_generic_resources = __this__assigned_generic_resources;
+    }
   }
 
   public TopologySummary deepCopy() {
@@ -337,6 +363,8 @@
     this.assigned_memoffheap = 0.0;
     set_assigned_cpu_isSet(false);
     this.assigned_cpu = 0.0;
+    this.requested_generic_resources = null;
+    this.assigned_generic_resources = null;
   }
 
   @org.apache.storm.thrift.annotation.Nullable
@@ -749,6 +777,76 @@
     __isset_bitfield = org.apache.storm.thrift.EncodingUtils.setBit(__isset_bitfield, __ASSIGNED_CPU_ISSET_ID, value);
   }
 
+  public int get_requested_generic_resources_size() {
+    return (this.requested_generic_resources == null) ? 0 : this.requested_generic_resources.size();
+  }
+
+  public void put_to_requested_generic_resources(java.lang.String key, double val) {
+    if (this.requested_generic_resources == null) {
+      this.requested_generic_resources = new java.util.HashMap<java.lang.String,java.lang.Double>();
+    }
+    this.requested_generic_resources.put(key, val);
+  }
+
+  @org.apache.storm.thrift.annotation.Nullable
+  public java.util.Map<java.lang.String,java.lang.Double> get_requested_generic_resources() {
+    return this.requested_generic_resources;
+  }
+
+  public void set_requested_generic_resources(@org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Double> requested_generic_resources) {
+    this.requested_generic_resources = requested_generic_resources;
+  }
+
+  public void unset_requested_generic_resources() {
+    this.requested_generic_resources = null;
+  }
+
+  /** Returns true if field requested_generic_resources is set (has been assigned a value) and false otherwise */
+  public boolean is_set_requested_generic_resources() {
+    return this.requested_generic_resources != null;
+  }
+
+  public void set_requested_generic_resources_isSet(boolean value) {
+    if (!value) {
+      this.requested_generic_resources = null;
+    }
+  }
+
+  public int get_assigned_generic_resources_size() {
+    return (this.assigned_generic_resources == null) ? 0 : this.assigned_generic_resources.size();
+  }
+
+  public void put_to_assigned_generic_resources(java.lang.String key, double val) {
+    if (this.assigned_generic_resources == null) {
+      this.assigned_generic_resources = new java.util.HashMap<java.lang.String,java.lang.Double>();
+    }
+    this.assigned_generic_resources.put(key, val);
+  }
+
+  @org.apache.storm.thrift.annotation.Nullable
+  public java.util.Map<java.lang.String,java.lang.Double> get_assigned_generic_resources() {
+    return this.assigned_generic_resources;
+  }
+
+  public void set_assigned_generic_resources(@org.apache.storm.thrift.annotation.Nullable java.util.Map<java.lang.String,java.lang.Double> assigned_generic_resources) {
+    this.assigned_generic_resources = assigned_generic_resources;
+  }
+
+  public void unset_assigned_generic_resources() {
+    this.assigned_generic_resources = null;
+  }
+
+  /** Returns true if field assigned_generic_resources is set (has been assigned a value) and false otherwise */
+  public boolean is_set_assigned_generic_resources() {
+    return this.assigned_generic_resources != null;
+  }
+
+  public void set_assigned_generic_resources_isSet(boolean value) {
+    if (!value) {
+      this.assigned_generic_resources = null;
+    }
+  }
+
   public void setFieldValue(_Fields field, @org.apache.storm.thrift.annotation.Nullable java.lang.Object value) {
     switch (field) {
     case ID:
@@ -895,6 +993,22 @@
       }
       break;
 
+    case REQUESTED_GENERIC_RESOURCES:
+      if (value == null) {
+        unset_requested_generic_resources();
+      } else {
+        set_requested_generic_resources((java.util.Map<java.lang.String,java.lang.Double>)value);
+      }
+      break;
+
+    case ASSIGNED_GENERIC_RESOURCES:
+      if (value == null) {
+        unset_assigned_generic_resources();
+      } else {
+        set_assigned_generic_resources((java.util.Map<java.lang.String,java.lang.Double>)value);
+      }
+      break;
+
     }
   }
 
@@ -955,6 +1069,12 @@
     case ASSIGNED_CPU:
       return get_assigned_cpu();
 
+    case REQUESTED_GENERIC_RESOURCES:
+      return get_requested_generic_resources();
+
+    case ASSIGNED_GENERIC_RESOURCES:
+      return get_assigned_generic_resources();
+
     }
     throw new java.lang.IllegalStateException();
   }
@@ -1002,6 +1122,10 @@
       return is_set_assigned_memoffheap();
     case ASSIGNED_CPU:
       return is_set_assigned_cpu();
+    case REQUESTED_GENERIC_RESOURCES:
+      return is_set_requested_generic_resources();
+    case ASSIGNED_GENERIC_RESOURCES:
+      return is_set_assigned_generic_resources();
     }
     throw new java.lang.IllegalStateException();
   }
@@ -1183,6 +1307,24 @@
         return false;
     }
 
+    boolean this_present_requested_generic_resources = true && this.is_set_requested_generic_resources();
+    boolean that_present_requested_generic_resources = true && that.is_set_requested_generic_resources();
+    if (this_present_requested_generic_resources || that_present_requested_generic_resources) {
+      if (!(this_present_requested_generic_resources && that_present_requested_generic_resources))
+        return false;
+      if (!this.requested_generic_resources.equals(that.requested_generic_resources))
+        return false;
+    }
+
+    boolean this_present_assigned_generic_resources = true && this.is_set_assigned_generic_resources();
+    boolean that_present_assigned_generic_resources = true && that.is_set_assigned_generic_resources();
+    if (this_present_assigned_generic_resources || that_present_assigned_generic_resources) {
+      if (!(this_present_assigned_generic_resources && that_present_assigned_generic_resources))
+        return false;
+      if (!this.assigned_generic_resources.equals(that.assigned_generic_resources))
+        return false;
+    }
+
     return true;
   }
 
@@ -1254,6 +1396,14 @@
     if (is_set_assigned_cpu())
       hashCode = hashCode * 8191 + org.apache.storm.thrift.TBaseHelper.hashCode(assigned_cpu);
 
+    hashCode = hashCode * 8191 + ((is_set_requested_generic_resources()) ? 131071 : 524287);
+    if (is_set_requested_generic_resources())
+      hashCode = hashCode * 8191 + requested_generic_resources.hashCode();
+
+    hashCode = hashCode * 8191 + ((is_set_assigned_generic_resources()) ? 131071 : 524287);
+    if (is_set_assigned_generic_resources())
+      hashCode = hashCode * 8191 + assigned_generic_resources.hashCode();
+
     return hashCode;
   }
 
@@ -1445,6 +1595,26 @@
         return lastComparison;
       }
     }
+    lastComparison = java.lang.Boolean.valueOf(is_set_requested_generic_resources()).compareTo(other.is_set_requested_generic_resources());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    if (is_set_requested_generic_resources()) {
+      lastComparison = org.apache.storm.thrift.TBaseHelper.compareTo(this.requested_generic_resources, other.requested_generic_resources);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+    }
+    lastComparison = java.lang.Boolean.valueOf(is_set_assigned_generic_resources()).compareTo(other.is_set_assigned_generic_resources());
+    if (lastComparison != 0) {
+      return lastComparison;
+    }
+    if (is_set_assigned_generic_resources()) {
+      lastComparison = org.apache.storm.thrift.TBaseHelper.compareTo(this.assigned_generic_resources, other.assigned_generic_resources);
+      if (lastComparison != 0) {
+        return lastComparison;
+      }
+    }
     return 0;
   }
 
@@ -1587,6 +1757,26 @@
       sb.append(this.assigned_cpu);
       first = false;
     }
+    if (is_set_requested_generic_resources()) {
+      if (!first) sb.append(", ");
+      sb.append("requested_generic_resources:");
+      if (this.requested_generic_resources == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.requested_generic_resources);
+      }
+      first = false;
+    }
+    if (is_set_assigned_generic_resources()) {
+      if (!first) sb.append(", ");
+      sb.append("assigned_generic_resources:");
+      if (this.assigned_generic_resources == null) {
+        sb.append("null");
+      } else {
+        sb.append(this.assigned_generic_resources);
+      }
+      first = false;
+    }
     sb.append(")");
     return sb.toString();
   }
@@ -1804,6 +1994,46 @@
               org.apache.storm.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
             }
             break;
+          case 527: // REQUESTED_GENERIC_RESOURCES
+            if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
+              {
+                org.apache.storm.thrift.protocol.TMap _map126 = iprot.readMapBegin();
+                struct.requested_generic_resources = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map126.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key127;
+                double _val128;
+                for (int _i129 = 0; _i129 < _map126.size; ++_i129)
+                {
+                  _key127 = iprot.readString();
+                  _val128 = iprot.readDouble();
+                  struct.requested_generic_resources.put(_key127, _val128);
+                }
+                iprot.readMapEnd();
+              }
+              struct.set_requested_generic_resources_isSet(true);
+            } else { 
+              org.apache.storm.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+            }
+            break;
+          case 528: // ASSIGNED_GENERIC_RESOURCES
+            if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
+              {
+                org.apache.storm.thrift.protocol.TMap _map130 = iprot.readMapBegin();
+                struct.assigned_generic_resources = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map130.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key131;
+                double _val132;
+                for (int _i133 = 0; _i133 < _map130.size; ++_i133)
+                {
+                  _key131 = iprot.readString();
+                  _val132 = iprot.readDouble();
+                  struct.assigned_generic_resources.put(_key131, _val132);
+                }
+                iprot.readMapEnd();
+              }
+              struct.set_assigned_generic_resources_isSet(true);
+            } else { 
+              org.apache.storm.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
+            }
+            break;
           default:
             org.apache.storm.thrift.protocol.TProtocolUtil.skip(iprot, schemeField.type);
         }
@@ -1907,6 +2137,36 @@
         oprot.writeDouble(struct.assigned_cpu);
         oprot.writeFieldEnd();
       }
+      if (struct.requested_generic_resources != null) {
+        if (struct.is_set_requested_generic_resources()) {
+          oprot.writeFieldBegin(REQUESTED_GENERIC_RESOURCES_FIELD_DESC);
+          {
+            oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, struct.requested_generic_resources.size()));
+            for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter134 : struct.requested_generic_resources.entrySet())
+            {
+              oprot.writeString(_iter134.getKey());
+              oprot.writeDouble(_iter134.getValue());
+            }
+            oprot.writeMapEnd();
+          }
+          oprot.writeFieldEnd();
+        }
+      }
+      if (struct.assigned_generic_resources != null) {
+        if (struct.is_set_assigned_generic_resources()) {
+          oprot.writeFieldBegin(ASSIGNED_GENERIC_RESOURCES_FIELD_DESC);
+          {
+            oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, struct.assigned_generic_resources.size()));
+            for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter135 : struct.assigned_generic_resources.entrySet())
+            {
+              oprot.writeString(_iter135.getKey());
+              oprot.writeDouble(_iter135.getValue());
+            }
+            oprot.writeMapEnd();
+          }
+          oprot.writeFieldEnd();
+        }
+      }
       oprot.writeFieldStop();
       oprot.writeStructEnd();
     }
@@ -1965,7 +2225,13 @@
       if (struct.is_set_assigned_cpu()) {
         optionals.set(10);
       }
-      oprot.writeBitSet(optionals, 11);
+      if (struct.is_set_requested_generic_resources()) {
+        optionals.set(11);
+      }
+      if (struct.is_set_assigned_generic_resources()) {
+        optionals.set(12);
+      }
+      oprot.writeBitSet(optionals, 13);
       if (struct.is_set_storm_version()) {
         oprot.writeString(struct.storm_version);
       }
@@ -1999,6 +2265,26 @@
       if (struct.is_set_assigned_cpu()) {
         oprot.writeDouble(struct.assigned_cpu);
       }
+      if (struct.is_set_requested_generic_resources()) {
+        {
+          oprot.writeI32(struct.requested_generic_resources.size());
+          for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter136 : struct.requested_generic_resources.entrySet())
+          {
+            oprot.writeString(_iter136.getKey());
+            oprot.writeDouble(_iter136.getValue());
+          }
+        }
+      }
+      if (struct.is_set_assigned_generic_resources()) {
+        {
+          oprot.writeI32(struct.assigned_generic_resources.size());
+          for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter137 : struct.assigned_generic_resources.entrySet())
+          {
+            oprot.writeString(_iter137.getKey());
+            oprot.writeDouble(_iter137.getValue());
+          }
+        }
+      }
     }
 
     @Override
@@ -2018,7 +2304,7 @@
       struct.set_uptime_secs_isSet(true);
       struct.status = iprot.readString();
       struct.set_status_isSet(true);
-      java.util.BitSet incoming = iprot.readBitSet(11);
+      java.util.BitSet incoming = iprot.readBitSet(13);
       if (incoming.get(0)) {
         struct.storm_version = iprot.readString();
         struct.set_storm_version_isSet(true);
@@ -2063,6 +2349,36 @@
         struct.assigned_cpu = iprot.readDouble();
         struct.set_assigned_cpu_isSet(true);
       }
+      if (incoming.get(11)) {
+        {
+          org.apache.storm.thrift.protocol.TMap _map138 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, iprot.readI32());
+          struct.requested_generic_resources = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map138.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key139;
+          double _val140;
+          for (int _i141 = 0; _i141 < _map138.size; ++_i141)
+          {
+            _key139 = iprot.readString();
+            _val140 = iprot.readDouble();
+            struct.requested_generic_resources.put(_key139, _val140);
+          }
+        }
+        struct.set_requested_generic_resources_isSet(true);
+      }
+      if (incoming.get(12)) {
+        {
+          org.apache.storm.thrift.protocol.TMap _map142 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, iprot.readI32());
+          struct.assigned_generic_resources = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map142.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key143;
+          double _val144;
+          for (int _i145 = 0; _i145 < _map142.size; ++_i145)
+          {
+            _key143 = iprot.readString();
+            _val144 = iprot.readDouble();
+            struct.assigned_generic_resources.put(_key143, _val144);
+          }
+        }
+        struct.set_assigned_generic_resources_isSet(true);
+      }
     }
   }
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/WorkerMetricList.java b/storm-client/src/jvm/org/apache/storm/generated/WorkerMetricList.java
index 6ca8470..ef9aca4 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/WorkerMetricList.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/WorkerMetricList.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class WorkerMetricList implements org.apache.storm.thrift.TBase<WorkerMetricList, WorkerMetricList._Fields>, java.io.Serializable, Cloneable, Comparable<WorkerMetricList> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("WorkerMetricList");
 
@@ -344,14 +344,14 @@
           case 1: // METRICS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.LIST) {
               {
-                org.apache.storm.thrift.protocol.TList _list896 = iprot.readListBegin();
-                struct.metrics = new java.util.ArrayList<WorkerMetricPoint>(_list896.size);
-                @org.apache.storm.thrift.annotation.Nullable WorkerMetricPoint _elem897;
-                for (int _i898 = 0; _i898 < _list896.size; ++_i898)
+                org.apache.storm.thrift.protocol.TList _list946 = iprot.readListBegin();
+                struct.metrics = new java.util.ArrayList<WorkerMetricPoint>(_list946.size);
+                @org.apache.storm.thrift.annotation.Nullable WorkerMetricPoint _elem947;
+                for (int _i948 = 0; _i948 < _list946.size; ++_i948)
                 {
-                  _elem897 = new WorkerMetricPoint();
-                  _elem897.read(iprot);
-                  struct.metrics.add(_elem897);
+                  _elem947 = new WorkerMetricPoint();
+                  _elem947.read(iprot);
+                  struct.metrics.add(_elem947);
                 }
                 iprot.readListEnd();
               }
@@ -377,9 +377,9 @@
         oprot.writeFieldBegin(METRICS_FIELD_DESC);
         {
           oprot.writeListBegin(new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, struct.metrics.size()));
-          for (WorkerMetricPoint _iter899 : struct.metrics)
+          for (WorkerMetricPoint _iter949 : struct.metrics)
           {
-            _iter899.write(oprot);
+            _iter949.write(oprot);
           }
           oprot.writeListEnd();
         }
@@ -410,9 +410,9 @@
       if (struct.is_set_metrics()) {
         {
           oprot.writeI32(struct.metrics.size());
-          for (WorkerMetricPoint _iter900 : struct.metrics)
+          for (WorkerMetricPoint _iter950 : struct.metrics)
           {
-            _iter900.write(oprot);
+            _iter950.write(oprot);
           }
         }
       }
@@ -424,14 +424,14 @@
       java.util.BitSet incoming = iprot.readBitSet(1);
       if (incoming.get(0)) {
         {
-          org.apache.storm.thrift.protocol.TList _list901 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
-          struct.metrics = new java.util.ArrayList<WorkerMetricPoint>(_list901.size);
-          @org.apache.storm.thrift.annotation.Nullable WorkerMetricPoint _elem902;
-          for (int _i903 = 0; _i903 < _list901.size; ++_i903)
+          org.apache.storm.thrift.protocol.TList _list951 = new org.apache.storm.thrift.protocol.TList(org.apache.storm.thrift.protocol.TType.STRUCT, iprot.readI32());
+          struct.metrics = new java.util.ArrayList<WorkerMetricPoint>(_list951.size);
+          @org.apache.storm.thrift.annotation.Nullable WorkerMetricPoint _elem952;
+          for (int _i953 = 0; _i953 < _list951.size; ++_i953)
           {
-            _elem902 = new WorkerMetricPoint();
-            _elem902.read(iprot);
-            struct.metrics.add(_elem902);
+            _elem952 = new WorkerMetricPoint();
+            _elem952.read(iprot);
+            struct.metrics.add(_elem952);
           }
         }
         struct.set_metrics_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/WorkerMetricPoint.java b/storm-client/src/jvm/org/apache/storm/generated/WorkerMetricPoint.java
index ca3b14e..879f686 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/WorkerMetricPoint.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/WorkerMetricPoint.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class WorkerMetricPoint implements org.apache.storm.thrift.TBase<WorkerMetricPoint, WorkerMetricPoint._Fields>, java.io.Serializable, Cloneable, Comparable<WorkerMetricPoint> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("WorkerMetricPoint");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/WorkerMetrics.java b/storm-client/src/jvm/org/apache/storm/generated/WorkerMetrics.java
index 935cb66..dcdbdb5 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/WorkerMetrics.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/WorkerMetrics.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class WorkerMetrics implements org.apache.storm.thrift.TBase<WorkerMetrics, WorkerMetrics._Fields>, java.io.Serializable, Cloneable, Comparable<WorkerMetrics> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("WorkerMetrics");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/WorkerResources.java b/storm-client/src/jvm/org/apache/storm/generated/WorkerResources.java
index 8f08b33..26a3175 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/WorkerResources.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/WorkerResources.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class WorkerResources implements org.apache.storm.thrift.TBase<WorkerResources, WorkerResources._Fields>, java.io.Serializable, Cloneable, Comparable<WorkerResources> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("WorkerResources");
 
@@ -847,15 +847,15 @@
           case 6: // RESOURCES
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map666 = iprot.readMapBegin();
-                struct.resources = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map666.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key667;
-                double _val668;
-                for (int _i669 = 0; _i669 < _map666.size; ++_i669)
+                org.apache.storm.thrift.protocol.TMap _map716 = iprot.readMapBegin();
+                struct.resources = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map716.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key717;
+                double _val718;
+                for (int _i719 = 0; _i719 < _map716.size; ++_i719)
                 {
-                  _key667 = iprot.readString();
-                  _val668 = iprot.readDouble();
-                  struct.resources.put(_key667, _val668);
+                  _key717 = iprot.readString();
+                  _val718 = iprot.readDouble();
+                  struct.resources.put(_key717, _val718);
                 }
                 iprot.readMapEnd();
               }
@@ -867,15 +867,15 @@
           case 7: // SHARED_RESOURCES
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map670 = iprot.readMapBegin();
-                struct.shared_resources = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map670.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key671;
-                double _val672;
-                for (int _i673 = 0; _i673 < _map670.size; ++_i673)
+                org.apache.storm.thrift.protocol.TMap _map720 = iprot.readMapBegin();
+                struct.shared_resources = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map720.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key721;
+                double _val722;
+                for (int _i723 = 0; _i723 < _map720.size; ++_i723)
                 {
-                  _key671 = iprot.readString();
-                  _val672 = iprot.readDouble();
-                  struct.shared_resources.put(_key671, _val672);
+                  _key721 = iprot.readString();
+                  _val722 = iprot.readDouble();
+                  struct.shared_resources.put(_key721, _val722);
                 }
                 iprot.readMapEnd();
               }
@@ -927,10 +927,10 @@
           oprot.writeFieldBegin(RESOURCES_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, struct.resources.size()));
-            for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter674 : struct.resources.entrySet())
+            for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter724 : struct.resources.entrySet())
             {
-              oprot.writeString(_iter674.getKey());
-              oprot.writeDouble(_iter674.getValue());
+              oprot.writeString(_iter724.getKey());
+              oprot.writeDouble(_iter724.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -942,10 +942,10 @@
           oprot.writeFieldBegin(SHARED_RESOURCES_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, struct.shared_resources.size()));
-            for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter675 : struct.shared_resources.entrySet())
+            for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter725 : struct.shared_resources.entrySet())
             {
-              oprot.writeString(_iter675.getKey());
-              oprot.writeDouble(_iter675.getValue());
+              oprot.writeString(_iter725.getKey());
+              oprot.writeDouble(_iter725.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -1010,20 +1010,20 @@
       if (struct.is_set_resources()) {
         {
           oprot.writeI32(struct.resources.size());
-          for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter676 : struct.resources.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter726 : struct.resources.entrySet())
           {
-            oprot.writeString(_iter676.getKey());
-            oprot.writeDouble(_iter676.getValue());
+            oprot.writeString(_iter726.getKey());
+            oprot.writeDouble(_iter726.getValue());
           }
         }
       }
       if (struct.is_set_shared_resources()) {
         {
           oprot.writeI32(struct.shared_resources.size());
-          for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter677 : struct.shared_resources.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.lang.Double> _iter727 : struct.shared_resources.entrySet())
           {
-            oprot.writeString(_iter677.getKey());
-            oprot.writeDouble(_iter677.getValue());
+            oprot.writeString(_iter727.getKey());
+            oprot.writeDouble(_iter727.getValue());
           }
         }
       }
@@ -1055,30 +1055,30 @@
       }
       if (incoming.get(5)) {
         {
-          org.apache.storm.thrift.protocol.TMap _map678 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, iprot.readI32());
-          struct.resources = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map678.size);
-          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key679;
-          double _val680;
-          for (int _i681 = 0; _i681 < _map678.size; ++_i681)
+          org.apache.storm.thrift.protocol.TMap _map728 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, iprot.readI32());
+          struct.resources = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map728.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key729;
+          double _val730;
+          for (int _i731 = 0; _i731 < _map728.size; ++_i731)
           {
-            _key679 = iprot.readString();
-            _val680 = iprot.readDouble();
-            struct.resources.put(_key679, _val680);
+            _key729 = iprot.readString();
+            _val730 = iprot.readDouble();
+            struct.resources.put(_key729, _val730);
           }
         }
         struct.set_resources_isSet(true);
       }
       if (incoming.get(6)) {
         {
-          org.apache.storm.thrift.protocol.TMap _map682 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, iprot.readI32());
-          struct.shared_resources = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map682.size);
-          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key683;
-          double _val684;
-          for (int _i685 = 0; _i685 < _map682.size; ++_i685)
+          org.apache.storm.thrift.protocol.TMap _map732 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.DOUBLE, iprot.readI32());
+          struct.shared_resources = new java.util.HashMap<java.lang.String,java.lang.Double>(2*_map732.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key733;
+          double _val734;
+          for (int _i735 = 0; _i735 < _map732.size; ++_i735)
           {
-            _key683 = iprot.readString();
-            _val684 = iprot.readDouble();
-            struct.shared_resources.put(_key683, _val684);
+            _key733 = iprot.readString();
+            _val734 = iprot.readDouble();
+            struct.shared_resources.put(_key733, _val734);
           }
         }
         struct.set_shared_resources_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/WorkerSummary.java b/storm-client/src/jvm/org/apache/storm/generated/WorkerSummary.java
index dbecba0..b27bf16 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/WorkerSummary.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/WorkerSummary.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class WorkerSummary implements org.apache.storm.thrift.TBase<WorkerSummary, WorkerSummary._Fields>, java.io.Serializable, Cloneable, Comparable<WorkerSummary> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("WorkerSummary");
 
@@ -1540,15 +1540,15 @@
           case 7: // COMPONENT_TO_NUM_TASKS
             if (schemeField.type == org.apache.storm.thrift.protocol.TType.MAP) {
               {
-                org.apache.storm.thrift.protocol.TMap _map456 = iprot.readMapBegin();
-                struct.component_to_num_tasks = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map456.size);
-                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key457;
-                long _val458;
-                for (int _i459 = 0; _i459 < _map456.size; ++_i459)
+                org.apache.storm.thrift.protocol.TMap _map486 = iprot.readMapBegin();
+                struct.component_to_num_tasks = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map486.size);
+                @org.apache.storm.thrift.annotation.Nullable java.lang.String _key487;
+                long _val488;
+                for (int _i489 = 0; _i489 < _map486.size; ++_i489)
                 {
-                  _key457 = iprot.readString();
-                  _val458 = iprot.readI64();
-                  struct.component_to_num_tasks.put(_key457, _val458);
+                  _key487 = iprot.readString();
+                  _val488 = iprot.readI64();
+                  struct.component_to_num_tasks.put(_key487, _val488);
                 }
                 iprot.readMapEnd();
               }
@@ -1685,10 +1685,10 @@
           oprot.writeFieldBegin(COMPONENT_TO_NUM_TASKS_FIELD_DESC);
           {
             oprot.writeMapBegin(new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, struct.component_to_num_tasks.size()));
-            for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter460 : struct.component_to_num_tasks.entrySet())
+            for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter490 : struct.component_to_num_tasks.entrySet())
             {
-              oprot.writeString(_iter460.getKey());
-              oprot.writeI64(_iter460.getValue());
+              oprot.writeString(_iter490.getKey());
+              oprot.writeI64(_iter490.getValue());
             }
             oprot.writeMapEnd();
           }
@@ -1830,10 +1830,10 @@
       if (struct.is_set_component_to_num_tasks()) {
         {
           oprot.writeI32(struct.component_to_num_tasks.size());
-          for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter461 : struct.component_to_num_tasks.entrySet())
+          for (java.util.Map.Entry<java.lang.String, java.lang.Long> _iter491 : struct.component_to_num_tasks.entrySet())
           {
-            oprot.writeString(_iter461.getKey());
-            oprot.writeI64(_iter461.getValue());
+            oprot.writeString(_iter491.getKey());
+            oprot.writeI64(_iter491.getValue());
           }
         }
       }
@@ -1896,15 +1896,15 @@
       }
       if (incoming.get(6)) {
         {
-          org.apache.storm.thrift.protocol.TMap _map462 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
-          struct.component_to_num_tasks = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map462.size);
-          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key463;
-          long _val464;
-          for (int _i465 = 0; _i465 < _map462.size; ++_i465)
+          org.apache.storm.thrift.protocol.TMap _map492 = new org.apache.storm.thrift.protocol.TMap(org.apache.storm.thrift.protocol.TType.STRING, org.apache.storm.thrift.protocol.TType.I64, iprot.readI32());
+          struct.component_to_num_tasks = new java.util.HashMap<java.lang.String,java.lang.Long>(2*_map492.size);
+          @org.apache.storm.thrift.annotation.Nullable java.lang.String _key493;
+          long _val494;
+          for (int _i495 = 0; _i495 < _map492.size; ++_i495)
           {
-            _key463 = iprot.readString();
-            _val464 = iprot.readI64();
-            struct.component_to_num_tasks.put(_key463, _val464);
+            _key493 = iprot.readString();
+            _val494 = iprot.readI64();
+            struct.component_to_num_tasks.put(_key493, _val494);
           }
         }
         struct.set_component_to_num_tasks_isSet(true);
diff --git a/storm-client/src/jvm/org/apache/storm/generated/WorkerToken.java b/storm-client/src/jvm/org/apache/storm/generated/WorkerToken.java
index efeb1d4..8bdbdca 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/WorkerToken.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/WorkerToken.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class WorkerToken implements org.apache.storm.thrift.TBase<WorkerToken, WorkerToken._Fields>, java.io.Serializable, Cloneable, Comparable<WorkerToken> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("WorkerToken");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/WorkerTokenInfo.java b/storm-client/src/jvm/org/apache/storm/generated/WorkerTokenInfo.java
index 4736b65..b58b240 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/WorkerTokenInfo.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/WorkerTokenInfo.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 @SuppressWarnings({"cast", "rawtypes", "serial", "unchecked", "unused"})
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public class WorkerTokenInfo implements org.apache.storm.thrift.TBase<WorkerTokenInfo, WorkerTokenInfo._Fields>, java.io.Serializable, Cloneable, Comparable<WorkerTokenInfo> {
   private static final org.apache.storm.thrift.protocol.TStruct STRUCT_DESC = new org.apache.storm.thrift.protocol.TStruct("WorkerTokenInfo");
 
diff --git a/storm-client/src/jvm/org/apache/storm/generated/WorkerTokenServiceType.java b/storm-client/src/jvm/org/apache/storm/generated/WorkerTokenServiceType.java
index 5d834d0..420e571 100644
--- a/storm-client/src/jvm/org/apache/storm/generated/WorkerTokenServiceType.java
+++ b/storm-client/src/jvm/org/apache/storm/generated/WorkerTokenServiceType.java
@@ -16,7 +16,7 @@
  * limitations under the License.
  */
 /**
- * Autogenerated by Thrift Compiler (0.12.0)
+ * Autogenerated by Thrift Compiler (0.13.0)
  *
  * DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
  *  @generated
@@ -24,7 +24,7 @@
 package org.apache.storm.generated;
 
 
-@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.12.0)")
+@javax.annotation.Generated(value = "Autogenerated by Thrift Compiler (0.13.0)")
 public enum WorkerTokenServiceType implements org.apache.storm.thrift.TEnum {
   NIMBUS(0),
   DRPC(1),
diff --git a/storm-client/src/jvm/org/apache/storm/hooks/info/BoltAckInfo.java b/storm-client/src/jvm/org/apache/storm/hooks/info/BoltAckInfo.java
index 76f4314..3bfd501 100644
--- a/storm-client/src/jvm/org/apache/storm/hooks/info/BoltAckInfo.java
+++ b/storm-client/src/jvm/org/apache/storm/hooks/info/BoltAckInfo.java
@@ -12,6 +12,8 @@
 
 package org.apache.storm.hooks.info;
 
+import java.util.List;
+
 import org.apache.storm.hooks.ITaskHook;
 import org.apache.storm.task.TopologyContext;
 import org.apache.storm.tuple.Tuple;
@@ -28,8 +30,9 @@
     }
 
     public void applyOn(TopologyContext topologyContext) {
-        for (ITaskHook hook : topologyContext.getHooks()) {
-            hook.boltAck(this);
+        List<ITaskHook> hooks = topologyContext.getHooks();
+        for (int i = 0; i < hooks.size(); i++) {
+            hooks.get(i).boltAck(this);
         }
     }
 }
diff --git a/storm-client/src/jvm/org/apache/storm/hooks/info/BoltExecuteInfo.java b/storm-client/src/jvm/org/apache/storm/hooks/info/BoltExecuteInfo.java
index 8593527..9c5659e 100644
--- a/storm-client/src/jvm/org/apache/storm/hooks/info/BoltExecuteInfo.java
+++ b/storm-client/src/jvm/org/apache/storm/hooks/info/BoltExecuteInfo.java
@@ -12,6 +12,8 @@
 
 package org.apache.storm.hooks.info;
 
+import java.util.List;
+
 import org.apache.storm.hooks.ITaskHook;
 import org.apache.storm.task.TopologyContext;
 import org.apache.storm.tuple.Tuple;
@@ -28,9 +30,9 @@
     }
 
     public void applyOn(TopologyContext topologyContext) {
-        for (int i = 0; i < topologyContext.getHooks().size(); i++) { // perf critical loop. dont use iterators
-            ITaskHook hook = topologyContext.getHooks().get(i);
-            hook.boltExecute(this);
+        List<ITaskHook> hooks = topologyContext.getHooks();
+        for (int i = 0; i < hooks.size(); i++) {
+            hooks.get(i).boltExecute(this);
         }
     }
 }
diff --git a/storm-client/src/jvm/org/apache/storm/hooks/info/BoltFailInfo.java b/storm-client/src/jvm/org/apache/storm/hooks/info/BoltFailInfo.java
index e9a0233..59c7e19 100644
--- a/storm-client/src/jvm/org/apache/storm/hooks/info/BoltFailInfo.java
+++ b/storm-client/src/jvm/org/apache/storm/hooks/info/BoltFailInfo.java
@@ -12,6 +12,8 @@
 
 package org.apache.storm.hooks.info;
 
+import java.util.List;
+
 import org.apache.storm.hooks.ITaskHook;
 import org.apache.storm.task.TopologyContext;
 import org.apache.storm.tuple.Tuple;
@@ -28,8 +30,9 @@
     }
 
     public void applyOn(TopologyContext topologyContext) {
-        for (ITaskHook hook : topologyContext.getHooks()) {
-            hook.boltFail(this);
+        List<ITaskHook> hooks = topologyContext.getHooks();
+        for (int i = 0; i < hooks.size(); i++) {
+            hooks.get(i).boltFail(this);
         }
     }
 }
diff --git a/storm-client/src/jvm/org/apache/storm/hooks/info/EmitInfo.java b/storm-client/src/jvm/org/apache/storm/hooks/info/EmitInfo.java
index 9f4b156..35d2256 100644
--- a/storm-client/src/jvm/org/apache/storm/hooks/info/EmitInfo.java
+++ b/storm-client/src/jvm/org/apache/storm/hooks/info/EmitInfo.java
@@ -31,8 +31,9 @@
     }
 
     public void applyOn(TopologyContext topologyContext) {
-        for (ITaskHook hook : topologyContext.getHooks()) {
-            hook.emit(this);
+        List<ITaskHook> hooks = topologyContext.getHooks();
+        for (int i = 0; i < hooks.size(); i++) {
+            hooks.get(i).emit(this);
         }
     }
 }
diff --git a/storm-client/src/jvm/org/apache/storm/hooks/info/SpoutAckInfo.java b/storm-client/src/jvm/org/apache/storm/hooks/info/SpoutAckInfo.java
index 9c39ba1..ac053be 100644
--- a/storm-client/src/jvm/org/apache/storm/hooks/info/SpoutAckInfo.java
+++ b/storm-client/src/jvm/org/apache/storm/hooks/info/SpoutAckInfo.java
@@ -12,6 +12,8 @@
 
 package org.apache.storm.hooks.info;
 
+import java.util.List;
+
 import org.apache.storm.hooks.ITaskHook;
 import org.apache.storm.task.TopologyContext;
 
@@ -27,8 +29,9 @@
     }
 
     public void applyOn(TopologyContext topologyContext) {
-        for (ITaskHook hook : topologyContext.getHooks()) {
-            hook.spoutAck(this);
+        List<ITaskHook> hooks = topologyContext.getHooks();
+        for (int i = 0; i < hooks.size(); i++) {
+            hooks.get(i).spoutAck(this);
         }
     }
 }
diff --git a/storm-client/src/jvm/org/apache/storm/hooks/info/SpoutFailInfo.java b/storm-client/src/jvm/org/apache/storm/hooks/info/SpoutFailInfo.java
index 80a2742..df7ef29 100644
--- a/storm-client/src/jvm/org/apache/storm/hooks/info/SpoutFailInfo.java
+++ b/storm-client/src/jvm/org/apache/storm/hooks/info/SpoutFailInfo.java
@@ -12,6 +12,8 @@
 
 package org.apache.storm.hooks.info;
 
+import java.util.List;
+
 import org.apache.storm.hooks.ITaskHook;
 import org.apache.storm.task.TopologyContext;
 
@@ -27,8 +29,9 @@
     }
 
     public void applyOn(TopologyContext topologyContext) {
-        for (ITaskHook hook : topologyContext.getHooks()) {
-            hook.spoutFail(this);
+        List<ITaskHook> hooks = topologyContext.getHooks();
+        for (int i = 0; i < hooks.size(); i++) {
+            hooks.get(i).spoutFail(this);
         }
     }
 }
diff --git a/storm-client/src/jvm/org/apache/storm/messaging/netty/KerberosSaslNettyClient.java b/storm-client/src/jvm/org/apache/storm/messaging/netty/KerberosSaslNettyClient.java
index c4d15fc..31425d4 100644
--- a/storm-client/src/jvm/org/apache/storm/messaging/netty/KerberosSaslNettyClient.java
+++ b/storm-client/src/jvm/org/apache/storm/messaging/netty/KerberosSaslNettyClient.java
@@ -23,7 +23,6 @@
 import javax.security.auth.callback.CallbackHandler;
 import javax.security.auth.callback.UnsupportedCallbackException;
 import javax.security.auth.kerberos.KerberosTicket;
-import javax.security.auth.login.Configuration;
 import javax.security.auth.login.LoginException;
 import javax.security.sasl.Sasl;
 import javax.security.sasl.SaslClient;
@@ -55,26 +54,16 @@
                   SaslUtils.KERBEROS);
 
         LOG.info("Creating Kerberos Client.");
-
-        Configuration loginConf;
-        try {
-            loginConf = ClientAuthUtils.getConfiguration(topoConf);
-        } catch (Throwable t) {
-            LOG.error("Failed to get loginConf: ", t);
-            throw t;
-        }
         LOG.debug("KerberosSaslNettyClient: authmethod {}", SaslUtils.KERBEROS);
 
         SaslClientCallbackHandler ch = new SaslClientCallbackHandler();
 
+        String jaasConfFile = ClientAuthUtils.getJaasConf(topoConf);
+
         subject = null;
         try {
-            LOG.debug("Setting Configuration to login_config: {}", loginConf);
-            //specify a configuration object to be used
-            Configuration.setConfiguration(loginConf);
-            //now login
-            LOG.debug("Trying to login.");
-            Login login = new Login(jaasSection, ch);
+            LOG.debug("Trying to login using {}.", jaasConfFile);
+            Login login = new Login(jaasSection, ch, jaasConfFile);
             subject = login.getSubject();
             LOG.debug("Got Subject: {}", subject.toString());
         } catch (LoginException ex) {
@@ -88,12 +77,12 @@
             throw new RuntimeException("Fail to verify user principal with section \""
                     + jaasSection
                     + "\" in login configuration file "
-                    + loginConf);
+                    + jaasConfFile);
         }
 
         String serviceName = null;
         try {
-            serviceName = ClientAuthUtils.get(loginConf, jaasSection, "serviceName");
+            serviceName = ClientAuthUtils.get(topoConf, jaasSection, "serviceName");
         } catch (IOException e) {
             LOG.error("Failed to get service name.", e);
             throw new RuntimeException(e);
diff --git a/storm-client/src/jvm/org/apache/storm/messaging/netty/KerberosSaslNettyServer.java b/storm-client/src/jvm/org/apache/storm/messaging/netty/KerberosSaslNettyServer.java
index ee410d6..fefcdc6 100644
--- a/storm-client/src/jvm/org/apache/storm/messaging/netty/KerberosSaslNettyServer.java
+++ b/storm-client/src/jvm/org/apache/storm/messaging/netty/KerberosSaslNettyServer.java
@@ -25,7 +25,6 @@
 import javax.security.auth.callback.UnsupportedCallbackException;
 import javax.security.auth.kerberos.KerberosPrincipal;
 import javax.security.auth.kerberos.KerberosTicket;
-import javax.security.auth.login.Configuration;
 import javax.security.auth.login.LoginException;
 import javax.security.sasl.AuthorizeCallback;
 import javax.security.sasl.Sasl;
@@ -49,28 +48,17 @@
 
     KerberosSaslNettyServer(Map<String, Object> topoConf, String jaasSection, List<String> authorizedUsers) {
         this.authorizedUsers = authorizedUsers;
-        LOG.debug("Getting Configuration.");
-        Configuration loginConf;
-        try {
-            loginConf = ClientAuthUtils.getConfiguration(topoConf);
-        } catch (Throwable t) {
-            LOG.error("Failed to get loginConf: ", t);
-            throw t;
-        }
 
         LOG.debug("KerberosSaslNettyServer: authmethod {}", SaslUtils.KERBEROS);
 
         KerberosSaslCallbackHandler ch = new KerberosSaslNettyServer.KerberosSaslCallbackHandler(authorizedUsers);
+        String jaasConfFile = ClientAuthUtils.getJaasConf(topoConf);
 
         //login our principal
         subject = null;
         try {
-            LOG.debug("Setting Configuration to login_config: {}", loginConf);
-            //specify a configuration object to be used
-            Configuration.setConfiguration(loginConf);
-            //now login
-            LOG.debug("Trying to login.");
-            Login login = new Login(jaasSection, ch);
+            LOG.debug("Trying to login using {}.", jaasConfFile);
+            Login login = new Login(jaasSection, ch, jaasConfFile);
             subject = login.getSubject();
             LOG.debug("Got Subject: {}", subject.toString());
         } catch (LoginException ex) {
@@ -84,7 +72,7 @@
             throw new RuntimeException("Fail to verify user principal with section \""
                                        + jaasSection
                                        + "\" in login configuration file "
-                                       + loginConf);
+                                       + jaasConfFile);
         }
 
         try {
diff --git a/storm-client/src/jvm/org/apache/storm/messaging/netty/Login.java b/storm-client/src/jvm/org/apache/storm/messaging/netty/Login.java
index 6a50d84..d6a345e 100644
--- a/storm-client/src/jvm/org/apache/storm/messaging/netty/Login.java
+++ b/storm-client/src/jvm/org/apache/storm/messaging/netty/Login.java
@@ -19,6 +19,9 @@
  * does not die.
  */
 
+import java.io.File;
+import java.net.URI;
+import java.security.URIParameter;
 import java.util.Date;
 import java.util.Random;
 import java.util.Set;
@@ -31,6 +34,7 @@
 import javax.security.auth.login.LoginContext;
 import javax.security.auth.login.LoginException;
 import org.apache.log4j.Logger;
+import org.apache.storm.security.auth.ClientAuthUtils;
 import org.apache.storm.shade.org.apache.zookeeper.Shell;
 import org.apache.storm.shade.org.apache.zookeeper.client.ZooKeeperSaslClient;
 
@@ -57,12 +61,9 @@
     private Thread thread = null;
     private boolean isKrbTicket = false;
     private boolean isUsingTicketCache = false;
-    private boolean isUsingKeytab = false;
     private LoginContext login = null;
     private String loginContextName = null;
-    private String keytabFile = null;
     private String principal = null;
-
     private long lastLogin = 0;
 
     /**
@@ -77,14 +78,14 @@
      * @throws javax.security.auth.login.LoginException
      *               Thrown if authentication fails.
      */
-    public Login(final String loginContextName, CallbackHandler callbackHandler)
+    public Login(final String loginContextName, CallbackHandler callbackHandler, String jaasConfFile)
         throws LoginException {
         this.callbackHandler = callbackHandler;
-        login = login(loginContextName);
+        login = login(loginContextName, jaasConfFile);
         this.loginContextName = loginContextName;
         subject = login.getSubject();
         isKrbTicket = !subject.getPrivateCredentials(KerberosTicket.class).isEmpty();
-        AppConfigurationEntry[] entries = Configuration.getConfiguration().getAppConfigurationEntry(loginContextName);
+        AppConfigurationEntry[] entries = this.getConfiguration(jaasConfFile).getAppConfigurationEntry(loginContextName);
         for (AppConfigurationEntry entry : entries) {
             // there will only be a single entry, so this for() loop will only be iterated through once.
             if (entry.getOptions().get("useTicketCache") != null) {
@@ -93,10 +94,6 @@
                     isUsingTicketCache = true;
                 }
             }
-            if (entry.getOptions().get("keyTab") != null) {
-                keytabFile = (String) entry.getOptions().get("keyTab");
-                isUsingKeytab = true;
-            }
             if (entry.getOptions().get("principal") != null) {
                 principal = (String) entry.getOptions().get("principal");
             }
@@ -251,6 +248,19 @@
         thread.setDaemon(true);
     }
 
+    private Configuration getConfiguration(String jaasConfFile) {
+        File configFile = new File(jaasConfFile);
+        if (!configFile.canRead()) {
+            throw new RuntimeException("File " + jaasConfFile + " cannot be read.");
+        }
+        try {
+            URI configUri = configFile.toURI();
+            return Configuration.getInstance("JavaLoginConfig", new URIParameter(configUri));
+        } catch (Exception ex) {
+            throw new RuntimeException("Failed to get configuration for " + jaasConfFile, ex);
+        }
+    }
+
     public void startThreadIfNeeded() {
         // thread object 'thread' will be null if a refresh thread is not needed.
         if (thread != null) {
@@ -277,7 +287,7 @@
         return loginContextName;
     }
 
-    private synchronized LoginContext login(final String loginContextName) throws LoginException {
+    private synchronized LoginContext login(final String loginContextName, String jaasConfFile) throws LoginException {
         if (loginContextName == null) {
             throw new LoginException("loginContext name (JAAS file section header) was null. "
                     + "Please check your java.security.login.auth.config (="
@@ -285,9 +295,10 @@
                     + ") and your " + ZooKeeperSaslClient.LOGIN_CONTEXT_NAME_KEY + "(="
                     + System.getProperty(ZooKeeperSaslClient.LOGIN_CONTEXT_NAME_KEY, "Client") + ")");
         }
-        LoginContext loginContext = new LoginContext(loginContextName, callbackHandler);
+        Configuration configuration = this.getConfiguration(jaasConfFile);
+        LoginContext loginContext = new LoginContext(loginContextName, null, callbackHandler, configuration);
         loginContext.login();
-        LOG.info("successfully logged in.");
+        LOG.info("Successfully logged in to context " + loginContextName + " using " + jaasConfFile);
         return loginContext;
     }
 
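The new `Login#getConfiguration` helper above replaces the process-wide `Configuration.setConfiguration(...)` singleton with a per-instance load through the standard `JavaLoginConfig` provider. A minimal, self-contained sketch of that pattern (the `StormClient` section name and the temp-file setup are illustrative, not from the patch):

```java
import java.io.File;
import java.nio.file.Files;
import java.security.NoSuchAlgorithmException;
import java.security.URIParameter;
import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.Configuration;

public class JaasConfigDemo {
    // Load a JAAS Configuration from an explicit file rather than mutating
    // the JVM-global Configuration via setConfiguration(...).
    static Configuration load(String jaasConfFile) throws NoSuchAlgorithmException {
        File configFile = new File(jaasConfFile);
        if (!configFile.canRead()) {
            throw new RuntimeException("File " + jaasConfFile + " cannot be read.");
        }
        return Configuration.getInstance("JavaLoginConfig", new URIParameter(configFile.toURI()));
    }

    public static void main(String[] args) throws Exception {
        // Illustrative JAAS section written to a temp file for the demo.
        File f = File.createTempFile("demo_jaas", ".conf");
        f.deleteOnExit();
        Files.write(f.toPath(), ("StormClient {\n"
                + "    com.sun.security.auth.module.Krb5LoginModule required\n"
                + "    useTicketCache=true;\n"
                + "};\n").getBytes("UTF-8"));

        AppConfigurationEntry[] entries = load(f.getAbsolutePath())
                .getAppConfigurationEntry("StormClient");
        System.out.println(entries.length);                                // 1
        System.out.println(entries[0].getOptions().get("useTicketCache")); // true
    }
}
```

Because each `Login` instance carries its own `Configuration`, concurrent logins against different JAAS files no longer race on global state.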
diff --git a/storm-client/src/jvm/org/apache/storm/pacemaker/PacemakerClient.java b/storm-client/src/jvm/org/apache/storm/pacemaker/PacemakerClient.java
index b77e003..30faf1f 100644
--- a/storm-client/src/jvm/org/apache/storm/pacemaker/PacemakerClient.java
+++ b/storm-client/src/jvm/org/apache/storm/pacemaker/PacemakerClient.java
@@ -73,9 +73,8 @@
         switch (auth) {
 
             case "DIGEST":
-                Configuration loginConf = ClientAuthUtils.getConfiguration(config);
                 authMethod = ThriftNettyClientCodec.AuthMethod.DIGEST;
-                secret = ClientAuthUtils.makeDigestPayload(loginConf, ClientAuthUtils.LOGIN_CONTEXT_PACEMAKER_DIGEST);
+                secret = ClientAuthUtils.makeDigestPayload(config, ClientAuthUtils.LOGIN_CONTEXT_PACEMAKER_DIGEST);
                 if (secret == null) {
                     LOG.error("Can't start pacemaker server without digest secret.");
                     throw new RuntimeException("Can't start pacemaker server without digest secret.");
diff --git a/storm-client/src/jvm/org/apache/storm/pacemaker/PacemakerClientHandler.java b/storm-client/src/jvm/org/apache/storm/pacemaker/PacemakerClientHandler.java
index b81f02d..31fabc1 100644
--- a/storm-client/src/jvm/org/apache/storm/pacemaker/PacemakerClientHandler.java
+++ b/storm-client/src/jvm/org/apache/storm/pacemaker/PacemakerClientHandler.java
@@ -36,6 +36,7 @@
         Channel channel = ctx.channel();
         LOG.info("Connection established from {} to {}",
                  channel.localAddress(), channel.remoteAddress());
+        client.channelReady(channel);
     }
 
     @Override
@@ -57,7 +58,7 @@
         if (cause instanceof ConnectException) {
             LOG.warn("Connection to pacemaker failed. Trying to reconnect {}", cause.getMessage());
         } else {
-            LOG.error("Exception occurred in Pacemaker.", cause);
+            LOG.error("Exception occurred in Pacemaker: {}", cause.toString());
         }
         client.reconnect();
     }
diff --git a/storm-client/src/jvm/org/apache/storm/pacemaker/codec/ThriftNettyClientCodec.java b/storm-client/src/jvm/org/apache/storm/pacemaker/codec/ThriftNettyClientCodec.java
index 8b9b0a2..b208834 100644
--- a/storm-client/src/jvm/org/apache/storm/pacemaker/codec/ThriftNettyClientCodec.java
+++ b/storm-client/src/jvm/org/apache/storm/pacemaker/codec/ThriftNettyClientCodec.java
@@ -72,7 +72,7 @@
                 throw new RuntimeException(e);
             }
         } else {
-            client.channelReady(ch);
+            // no work for AuthMethod.NONE
         }
 
         pipeline.addLast("PacemakerClientHandler", new PacemakerClientHandler(client));
diff --git a/storm-client/src/jvm/org/apache/storm/security/auth/ClientAuthUtils.java b/storm-client/src/jvm/org/apache/storm/security/auth/ClientAuthUtils.java
index 9fb0e4b..6a9b703 100644
--- a/storm-client/src/jvm/org/apache/storm/security/auth/ClientAuthUtils.java
+++ b/storm-client/src/jvm/org/apache/storm/security/auth/ClientAuthUtils.java
@@ -60,6 +60,10 @@
     private static final String USERNAME = "username";
     private static final String PASSWORD = "password";
 
+    public static String getJaasConf(Map<String, Object> topoConf) {
+        return (String) topoConf.get("java.security.auth.login.config");
+    }
+
     /**
      * Construct a JAAS configuration object per storm configuration file.
      *
@@ -70,7 +74,7 @@
         Configuration loginConf = null;
 
         //find login file configuration from Storm configuration
-        String loginConfigurationFile = (String) topoConf.get("java.security.auth.login.config");
+        String loginConfigurationFile = getJaasConf(topoConf);
         if ((loginConfigurationFile != null) && (loginConfigurationFile.length() > 0)) {
             File configFile = new File(loginConfigurationFile);
             if (!configFile.canRead()) {
@@ -111,12 +115,13 @@
     /**
      * Pull a set of keys out of a Configuration.
      *
-     * @param configuration The config to pull the key/value pairs out of.
+     * @param topoConf  The config containing the jaas conf file.
      * @param section       The app configuration entry name to get stuff from.
      * @return Return a map of the configs in conf.
      */
-    public static SortedMap<String, ?> pullConfig(Configuration configuration,
+    public static SortedMap<String, ?> pullConfig(Map<String, Object> topoConf,
                                                   String section) throws IOException {
+        Configuration configuration = ClientAuthUtils.getConfiguration(topoConf);
         AppConfigurationEntry[] configurationEntries = ClientAuthUtils.getEntries(configuration, section);
 
         if (configurationEntries == null) {
@@ -138,12 +143,17 @@
     /**
     * Pull the value for the given section and key from the Configuration.
      *
-     * @param configuration The config to pull the key/value pairs out of.
+     * @param topoConf   The config containing the jaas conf file.
      * @param section       The app configuration entry name to get stuff from.
      * @param key           The key to look up inside of the section
     * @return the String value for the given section and key
      */
-    public static String get(Configuration configuration, String section, String key) throws IOException {
+    public static String get(Map<String, Object> topoConf, String section, String key) throws IOException {
+        Configuration configuration = ClientAuthUtils.getConfiguration(topoConf);
+        return get(configuration, section, key);
+    }
+
+    static String get(Configuration configuration, String section, String key) throws IOException {
         AppConfigurationEntry[] configurationEntries = ClientAuthUtils.getEntries(configuration, section);
 
         if (configurationEntries == null) {
@@ -456,22 +466,22 @@
     /**
      * Construct a transport plugin per storm configuration.
      */
-    public static ITransportPlugin getTransportPlugin(ThriftConnectionType type, Map<String, Object> topoConf, Configuration loginConf) {
+    public static ITransportPlugin getTransportPlugin(ThriftConnectionType type, Map<String, Object> topoConf) {
         try {
             String transportPluginClassName = type.getTransportPlugin(topoConf);
             ITransportPlugin transportPlugin = ReflectionUtils.newInstance(transportPluginClassName);
-            transportPlugin.prepare(type, topoConf, loginConf);
+            transportPlugin.prepare(type, topoConf);
             return transportPlugin;
         } catch (Exception e) {
             throw new RuntimeException(e);
         }
     }
 
-    public static String makeDigestPayload(Configuration loginConfig, String configSection) {
+    public static String makeDigestPayload(Map<String, Object> topoConf, String configSection) {
         String username = null;
         String password = null;
         try {
-            Map<String, ?> results = ClientAuthUtils.pullConfig(loginConfig, configSection);
+            Map<String, ?> results = ClientAuthUtils.pullConfig(topoConf, configSection);
             username = (String) results.get(USERNAME);
             password = (String) results.get(PASSWORD);
         } catch (Exception e) {
@@ -492,6 +502,8 @@
         }
     }
 
+
+
     public static byte[] serializeKerberosTicket(KerberosTicket tgt) throws Exception {
         ByteArrayOutputStream bao = new ByteArrayOutputStream();
         ObjectOutputStream out = new ObjectOutputStream(bao);
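After this refactor the Storm config map itself is the handle to JAAS settings; helpers such as `pullConfig`, `get`, and `makeDigestPayload` derive the `Configuration` internally. The key lookup is just a map read, sketched below (the path value is illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class GetJaasConfDemo {
    // Mirrors the new ClientAuthUtils.getJaasConf(...): the JAAS file path
    // rides inside the Storm config map under the standard JVM property key.
    static String getJaasConf(Map<String, Object> topoConf) {
        return (String) topoConf.get("java.security.auth.login.config");
    }

    public static void main(String[] args) {
        Map<String, Object> topoConf = new HashMap<>();
        topoConf.put("java.security.auth.login.config", "/etc/storm/jaas.conf");
        System.out.println(getJaasConf(topoConf)); // /etc/storm/jaas.conf
    }
}
```

Passing `topoConf` instead of a pre-built `Configuration` keeps every call site from having to remember the `getConfiguration` step.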
diff --git a/storm-client/src/jvm/org/apache/storm/security/auth/ITransportPlugin.java b/storm-client/src/jvm/org/apache/storm/security/auth/ITransportPlugin.java
index 6bf3b0b..19ec374 100644
--- a/storm-client/src/jvm/org/apache/storm/security/auth/ITransportPlugin.java
+++ b/storm-client/src/jvm/org/apache/storm/security/auth/ITransportPlugin.java
@@ -29,9 +29,8 @@
      *
      * @param type      the type of connection this will process.
      * @param topoConf  Storm configuration
-     * @param loginConf login configuration
      */
-    void prepare(ThriftConnectionType type, Map<String, Object> topoConf, Configuration loginConf);
+    void prepare(ThriftConnectionType type, Map<String, Object> topoConf);
 
     /**
      * Create a server associated with a given port, service handler, and purpose.
diff --git a/storm-client/src/jvm/org/apache/storm/security/auth/SimpleTransportPlugin.java b/storm-client/src/jvm/org/apache/storm/security/auth/SimpleTransportPlugin.java
index 21a6ce1..577f9fd 100644
--- a/storm-client/src/jvm/org/apache/storm/security/auth/SimpleTransportPlugin.java
+++ b/storm-client/src/jvm/org/apache/storm/security/auth/SimpleTransportPlugin.java
@@ -48,14 +48,12 @@
     private static final Logger LOG = LoggerFactory.getLogger(SimpleTransportPlugin.class);
     protected ThriftConnectionType type;
     protected Map<String, Object> topoConf;
-    protected Configuration loginConf;
     private int port;
 
     @Override
-    public void prepare(ThriftConnectionType type, Map<String, Object> topoConf, Configuration loginConf) {
+    public void prepare(ThriftConnectionType type, Map<String, Object> topoConf) {
         this.type = type;
         this.topoConf = topoConf;
-        this.loginConf = loginConf;
     }
 
     @Override
@@ -130,7 +128,7 @@
         }
 
         @Override
-        public boolean process(final TProtocol inProt, final TProtocol outProt) throws TException {
+        public void process(final TProtocol inProt, final TProtocol outProt) throws TException {
             //populating request context 
             ReqContext reqContext = ReqContext.context();
 
@@ -171,7 +169,7 @@
             reqContext.setSubject(s);
 
             //invoke service handler
-            return wrapped.process(inProt, outProt);
+            wrapped.process(inProt, outProt);
         }
     }
 }
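The `process` signature change above tracks the Thrift 0.13 upgrade, where `TProcessor.process` returns `void` instead of a success `boolean`, so wrapping processors now delegate without forwarding a result. A dependency-free sketch of that shape (the `Processor` interface here is a stand-in, not the real Thrift type):

```java
public class VoidProcessDemo {
    // Stand-in for the Thrift TProcessor after the 0.13 upgrade:
    // process(...) no longer returns a success boolean.
    interface Processor {
        void process(String in, StringBuilder out) throws Exception;
    }

    // Wrapper in the style of the wrapping processor above: set up the
    // request context, then simply delegate -- nothing left to return.
    static Processor wrap(Processor wrapped) {
        return (in, out) -> {
            // (request-context population would happen here)
            wrapped.process(in, out);
        };
    }

    public static void main(String[] args) throws Exception {
        StringBuilder out = new StringBuilder();
        wrap((in, o) -> o.append("handled:").append(in)).process("ping", out);
        System.out.println(out); // handled:ping
    }
}
```

Callers that previously branched on the boolean must now rely on exceptions (or early `return`, as in the `NoOpTTrasport` check) to signal failure.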
diff --git a/storm-client/src/jvm/org/apache/storm/security/auth/ThriftClient.java b/storm-client/src/jvm/org/apache/storm/security/auth/ThriftClient.java
index a9d9129..88d70ec 100644
--- a/storm-client/src/jvm/org/apache/storm/security/auth/ThriftClient.java
+++ b/storm-client/src/jvm/org/apache/storm/security/auth/ThriftClient.java
@@ -83,11 +83,8 @@
                 socket.setTimeout(timeout);
             }
 
-            //locate login configuration 
-            Configuration loginConf = ClientAuthUtils.getConfiguration(conf);
-
             //construct a transport plugin
-            ITransportPlugin transportPlugin = ClientAuthUtils.getTransportPlugin(type, conf, loginConf);
+            ITransportPlugin transportPlugin = ClientAuthUtils.getTransportPlugin(type, conf);
 
             //TODO get this from type instead of hardcoding to Nimbus.
             //establish client-server transport via plugin
diff --git a/storm-client/src/jvm/org/apache/storm/security/auth/ThriftServer.java b/storm-client/src/jvm/org/apache/storm/security/auth/ThriftServer.java
index f6102bc..9938b7c 100644
--- a/storm-client/src/jvm/org/apache/storm/security/auth/ThriftServer.java
+++ b/storm-client/src/jvm/org/apache/storm/security/auth/ThriftServer.java
@@ -12,10 +12,8 @@
 
 package org.apache.storm.security.auth;
 
-import java.io.Closeable;
 import java.io.IOException;
 import java.util.Map;
-import javax.security.auth.login.Configuration;
 import org.apache.storm.security.auth.sasl.SaslTransportPlugin;
 import org.apache.storm.thrift.TProcessor;
 import org.apache.storm.thrift.server.TServer;
@@ -29,7 +27,6 @@
     private final Map<String, Object> conf; //storm configuration
     private final ThriftConnectionType type;
     private TServer server;
-    private Configuration loginConf;
     private int port;
     private boolean areWorkerTokensSupported;
     private ITransportPlugin transportPlugin;
@@ -40,14 +37,8 @@
         this.type = type;
 
         try {
-            //retrieve authentication configuration 
-            loginConf = ClientAuthUtils.getConfiguration(this.conf);
-        } catch (Exception x) {
-            LOG.error(x.getMessage(), x);
-        }
-        try {
             //locate our thrift transport plugin
-            transportPlugin = ClientAuthUtils.getTransportPlugin(this.type, this.conf, loginConf);
+            transportPlugin = ClientAuthUtils.getTransportPlugin(this.type, this.conf);
             //server
             server = transportPlugin.getServer(this.processor);
             port = transportPlugin.getPort();
diff --git a/storm-client/src/jvm/org/apache/storm/security/auth/digest/DigestSaslTransportPlugin.java b/storm-client/src/jvm/org/apache/storm/security/auth/digest/DigestSaslTransportPlugin.java
index e3e4497..5a77d51 100644
--- a/storm-client/src/jvm/org/apache/storm/security/auth/digest/DigestSaslTransportPlugin.java
+++ b/storm-client/src/jvm/org/apache/storm/security/auth/digest/DigestSaslTransportPlugin.java
@@ -16,6 +16,8 @@
 import java.util.Map;
 import javax.security.auth.callback.CallbackHandler;
 import javax.security.auth.login.AppConfigurationEntry;
+import javax.security.auth.login.Configuration;
+
 import org.apache.storm.generated.WorkerToken;
 import org.apache.storm.security.auth.ClientAuthUtils;
 import org.apache.storm.security.auth.sasl.SaslTransportPlugin;
@@ -44,7 +46,7 @@
         //create an authentication callback handler
         CallbackHandler serverCallbackHandler = new SimpleSaslServerCallbackHandler(impersonationAllowed,
                                                                                     workerTokenAuthorizer,
-                                                                                    new JassPasswordProvider(loginConf));
+                                                                                    new JassPasswordProvider(conf));
 
         //create a transport factory that will invoke our auth callback for digest
         TSaslServerTransport.Factory factory = new TSaslServerTransport.Factory();
@@ -60,7 +62,11 @@
         WorkerToken token = WorkerTokenClientCallbackHandler.findWorkerTokenInSubject(type);
         if (token != null) {
             clientCallbackHandler = new WorkerTokenClientCallbackHandler(token);
-        } else if (loginConf != null) {
+        } else {
+            Configuration loginConf = ClientAuthUtils.getConfiguration(conf);
+            if (loginConf == null) {
+                throw new IOException("Could not find any way to authenticate with the server.");
+            }
             AppConfigurationEntry[] configurationEntries = loginConf.getAppConfigurationEntry(ClientAuthUtils.LOGIN_CONTEXT_CLIENT);
             if (configurationEntries == null) {
                 String errorMessage = "Could not find a '" + ClientAuthUtils.LOGIN_CONTEXT_CLIENT
@@ -76,8 +82,6 @@
                 password = (String) options.getOrDefault("password", password);
             }
             clientCallbackHandler = new SimpleSaslClientCallbackHandler(username, password);
-        } else {
-            throw new IOException("Could not find any way to authenticate with the server.");
         }
 
         TSaslClientTransport wrapperTransport = new TSaslClientTransport(DIGEST,
diff --git a/storm-client/src/jvm/org/apache/storm/security/auth/digest/JassPasswordProvider.java b/storm-client/src/jvm/org/apache/storm/security/auth/digest/JassPasswordProvider.java
index 5c0d7a4..1d5c7e3 100644
--- a/storm-client/src/jvm/org/apache/storm/security/auth/digest/JassPasswordProvider.java
+++ b/storm-client/src/jvm/org/apache/storm/security/auth/digest/JassPasswordProvider.java
@@ -36,10 +36,12 @@
     /**
      * Constructor.
      *
-     * @param configuration the jaas configuration to get the credentials out of.
+     * @param topoConf the configuration containing the jaas conf to use.
      * @throws IOException if we could not read the Server section in the jaas conf.
      */
-    public JassPasswordProvider(Configuration configuration) throws IOException {
+    public JassPasswordProvider(Map<String, Object> topoConf) throws IOException {
+
+        Configuration configuration = ClientAuthUtils.getConfiguration(topoConf);
         if (configuration == null) {
             return;
         }
diff --git a/storm-client/src/jvm/org/apache/storm/security/auth/kerberos/AutoTGT.java b/storm-client/src/jvm/org/apache/storm/security/auth/kerberos/AutoTGT.java
index c4f9f91..6936555 100644
--- a/storm-client/src/jvm/org/apache/storm/security/auth/kerberos/AutoTGT.java
+++ b/storm-client/src/jvm/org/apache/storm/security/auth/kerberos/AutoTGT.java
@@ -121,11 +121,10 @@
         //Log the user in and get the TGT
         try {
             Configuration loginConf = ClientAuthUtils.getConfiguration(conf);
-            ClientCallbackHandler clientCallbackHandler = new ClientCallbackHandler(loginConf);
+            ClientCallbackHandler clientCallbackHandler = new ClientCallbackHandler(conf);
 
             //login our user
-            Configuration.setConfiguration(loginConf);
-            LoginContext lc = new LoginContext(ClientAuthUtils.LOGIN_CONTEXT_CLIENT, clientCallbackHandler);
+            LoginContext lc = new LoginContext(ClientAuthUtils.LOGIN_CONTEXT_CLIENT, null, clientCallbackHandler, loginConf);
             try {
                 lc.login();
                 final Subject subject = lc.getSubject();
diff --git a/storm-client/src/jvm/org/apache/storm/security/auth/kerberos/ClientCallbackHandler.java b/storm-client/src/jvm/org/apache/storm/security/auth/kerberos/ClientCallbackHandler.java
index 64673ce..f9a8e09 100644
--- a/storm-client/src/jvm/org/apache/storm/security/auth/kerberos/ClientCallbackHandler.java
+++ b/storm-client/src/jvm/org/apache/storm/security/auth/kerberos/ClientCallbackHandler.java
@@ -13,6 +13,7 @@
 package org.apache.storm.security.auth.kerberos;
 
 import java.io.IOException;
+import java.util.Map;
 import javax.security.auth.callback.Callback;
 import javax.security.auth.callback.CallbackHandler;
 import javax.security.auth.callback.NameCallback;
@@ -36,7 +37,8 @@
      *
     * <p>For digest, you should have a username and password pair defined in this configuration.
      */
-    public ClientCallbackHandler(Configuration configuration) throws IOException {
+    public ClientCallbackHandler(Map<String, Object> topoConf) throws IOException {
+        Configuration configuration = ClientAuthUtils.getConfiguration(topoConf);
         if (configuration == null) {
             return;
         }
diff --git a/storm-client/src/jvm/org/apache/storm/security/auth/kerberos/KerberosSaslTransportPlugin.java b/storm-client/src/jvm/org/apache/storm/security/auth/kerberos/KerberosSaslTransportPlugin.java
index 915473b..5b42886 100644
--- a/storm-client/src/jvm/org/apache/storm/security/auth/kerberos/KerberosSaslTransportPlugin.java
+++ b/storm-client/src/jvm/org/apache/storm/security/auth/kerberos/KerberosSaslTransportPlugin.java
@@ -58,15 +58,15 @@
             workerTokenAuthorizer = new WorkerTokenAuthorizer(conf, type);
         }
         //create an authentication callback handler
-        CallbackHandler serverCallbackHandler = new ServerCallbackHandler(loginConf, impersonationAllowed);
+        CallbackHandler serverCallbackHandler = new ServerCallbackHandler(conf, impersonationAllowed);
+
+        String jaasConfFile = ClientAuthUtils.getJaasConf(conf);
 
         //login our principal
         Subject subject = null;
         try {
-            //specify a configuration object to be used
-            Configuration.setConfiguration(loginConf);
             //now login
-            Login login = new Login(ClientAuthUtils.LOGIN_CONTEXT_SERVER, serverCallbackHandler);
+            Login login = new Login(ClientAuthUtils.LOGIN_CONTEXT_SERVER, serverCallbackHandler, jaasConfFile);
             subject = login.getSubject();
             login.startThreadIfNeeded();
         } catch (LoginException ex) {
@@ -77,10 +77,10 @@
         //check the credential of our principal
         if (subject.getPrivateCredentials(KerberosTicket.class).isEmpty()) {
             throw new RuntimeException("Fail to verify user principal with section \""
-                                       + ClientAuthUtils.LOGIN_CONTEXT_SERVER + "\" in login configuration file " + loginConf);
+                                       + ClientAuthUtils.LOGIN_CONTEXT_SERVER + "\" in login configuration file " + jaasConfFile);
         }
 
-        String principal = ClientAuthUtils.get(loginConf, ClientAuthUtils.LOGIN_CONTEXT_SERVER, "principal");
+        String principal = ClientAuthUtils.get(conf, ClientAuthUtils.LOGIN_CONTEXT_SERVER, "principal");
         LOG.debug("principal:" + principal);
         KerberosName serviceKerberosName = new KerberosName(principal);
         String serviceName = serviceKerberosName.getServiceName();
@@ -107,11 +107,9 @@
     private Login mkLogin() throws IOException {
         try {
             //create an authentication callback handler
-            ClientCallbackHandler clientCallbackHandler = new ClientCallbackHandler(loginConf);
-            //specify a configuration object to be used
-            Configuration.setConfiguration(loginConf);
+            ClientCallbackHandler clientCallbackHandler = new ClientCallbackHandler(conf);
             //now login
-            Login login = new Login(ClientAuthUtils.LOGIN_CONTEXT_CLIENT, clientCallbackHandler);
+            Login login = new Login(ClientAuthUtils.LOGIN_CONTEXT_CLIENT, clientCallbackHandler, ClientAuthUtils.getJaasConf(conf));
             login.startThreadIfNeeded();
             return login;
         } catch (LoginException ex) {
@@ -142,7 +140,7 @@
 
     private TTransport kerberosConnect(TTransport transport, String serverHost, String asUser) throws IOException {
         //login our user
-        SortedMap<String, ?> authConf = ClientAuthUtils.pullConfig(loginConf, ClientAuthUtils.LOGIN_CONTEXT_CLIENT);
+        SortedMap<String, ?> authConf = ClientAuthUtils.pullConfig(conf, ClientAuthUtils.LOGIN_CONTEXT_CLIENT);
         if (authConf == null) {
             throw new RuntimeException("Error in parsing the kerberos login Configuration, returned null");
         }
@@ -180,11 +178,11 @@
         final Subject subject = login.getSubject();
         if (subject.getPrivateCredentials(KerberosTicket.class).isEmpty()) { //error
             throw new RuntimeException("Fail to verify user principal with section \""
-                                       + ClientAuthUtils.LOGIN_CONTEXT_CLIENT + "\" in login configuration file " + loginConf);
+                    + ClientAuthUtils.LOGIN_CONTEXT_CLIENT + "\" in login configuration file " + ClientAuthUtils.getJaasConf(conf));
         }
 
         final String principal = StringUtils.isBlank(asUser) ? getPrincipal(subject) : asUser;
-        String serviceName = ClientAuthUtils.get(loginConf, ClientAuthUtils.LOGIN_CONTEXT_CLIENT, "serviceName");
+        String serviceName = ClientAuthUtils.get(conf, ClientAuthUtils.LOGIN_CONTEXT_CLIENT, "serviceName");
         if (serviceName == null) {
             serviceName = ClientAuthUtils.SERVICE;
         }
diff --git a/storm-client/src/jvm/org/apache/storm/security/auth/kerberos/ServerCallbackHandler.java b/storm-client/src/jvm/org/apache/storm/security/auth/kerberos/ServerCallbackHandler.java
index 05630e9..daa5fbf 100644
--- a/storm-client/src/jvm/org/apache/storm/security/auth/kerberos/ServerCallbackHandler.java
+++ b/storm-client/src/jvm/org/apache/storm/security/auth/kerberos/ServerCallbackHandler.java
@@ -13,6 +13,7 @@
 package org.apache.storm.security.auth.kerberos;
 
 import java.io.IOException;
+import java.util.Map;
 import javax.security.auth.callback.Callback;
 import javax.security.auth.callback.CallbackHandler;
 import javax.security.auth.callback.NameCallback;
@@ -35,8 +36,10 @@
     private static final Logger LOG = LoggerFactory.getLogger(ServerCallbackHandler.class);
     private final boolean impersonationAllowed;
 
-    public ServerCallbackHandler(Configuration configuration, boolean impersonationAllowed) throws IOException {
+    public ServerCallbackHandler(Map<String, Object> topoConf, boolean impersonationAllowed) throws IOException {
         this.impersonationAllowed = impersonationAllowed;
+
+        Configuration configuration = ClientAuthUtils.getConfiguration(topoConf);
         if (configuration == null) {
             return;
         }
diff --git a/storm-client/src/jvm/org/apache/storm/security/auth/sasl/SaslTransportPlugin.java b/storm-client/src/jvm/org/apache/storm/security/auth/sasl/SaslTransportPlugin.java
index b204c1a..745d360 100644
--- a/storm-client/src/jvm/org/apache/storm/security/auth/sasl/SaslTransportPlugin.java
+++ b/storm-client/src/jvm/org/apache/storm/security/auth/sasl/SaslTransportPlugin.java
@@ -49,14 +49,12 @@
 public abstract class SaslTransportPlugin implements ITransportPlugin, Closeable {
     protected ThriftConnectionType type;
     protected Map<String, Object> conf;
-    protected Configuration loginConf;
     private int port;
 
     @Override
-    public void prepare(ThriftConnectionType type, Map<String, Object> conf, Configuration loginConf) {
+    public void prepare(ThriftConnectionType type, Map<String, Object> conf) {
         this.type = type;
         this.conf = conf;
-        this.loginConf = loginConf;
     }
 
     @Override
@@ -126,7 +124,7 @@
         }
 
         @Override
-        public boolean process(final TProtocol inProt, final TProtocol outProt) throws TException {
+        public void process(final TProtocol inProt, final TProtocol outProt) throws TException {
             //populating request context
             ReqContext reqContext = ReqContext.context();
 
@@ -135,7 +133,7 @@
             TSaslServerTransport saslTrans = (TSaslServerTransport) trans;
 
             if (trans instanceof NoOpTTrasport) {
-                return false;
+                return;
             }
 
             //remote address
@@ -151,7 +149,7 @@
             reqContext.setSubject(remoteUser);
 
             //invoke service handler
-            return wrapped.process(inProt, outProt);
+            wrapped.process(inProt, outProt);
         }
     }
 
diff --git a/storm-client/src/jvm/org/apache/storm/topology/TopologyBuilder.java b/storm-client/src/jvm/org/apache/storm/topology/TopologyBuilder.java
index 63384df..c14d6fb 100644
--- a/storm-client/src/jvm/org/apache/storm/topology/TopologyBuilder.java
+++ b/storm-client/src/jvm/org/apache/storm/topology/TopologyBuilder.java
@@ -473,15 +473,6 @@
         return setSpout(id, new LambdaSpout(supplier), parallelismHint);
     }
 
-    public void setStateSpout(String id, IRichStateSpout stateSpout) throws IllegalArgumentException {
-        setStateSpout(id, stateSpout, null);
-    }
-
-    public void setStateSpout(String id, IRichStateSpout stateSpout, Number parallelismHint) throws IllegalArgumentException {
-        validateUnusedId(id);
-        // TODO: finish
-    }
-
     /**
      * Add a new worker lifecycle hook.
      *
diff --git a/storm-client/src/jvm/org/apache/storm/topology/WindowedBoltExecutor.java b/storm-client/src/jvm/org/apache/storm/topology/WindowedBoltExecutor.java
index e6a12d9..4713e95 100644
--- a/storm-client/src/jvm/org/apache/storm/topology/WindowedBoltExecutor.java
+++ b/storm-client/src/jvm/org/apache/storm/topology/WindowedBoltExecutor.java
@@ -16,6 +16,7 @@
 import static org.apache.storm.topology.base.BaseWindowedBolt.Duration;
 
 import java.util.Collection;
+import java.util.Collections;
 import java.util.HashSet;
 import java.util.Iterator;
 import java.util.List;
@@ -345,7 +346,7 @@
 
     @Override
     public Map<String, Object> getComponentConfiguration() {
-        return bolt.getComponentConfiguration();
+        return bolt.getComponentConfiguration() != null ? bolt.getComponentConfiguration() : Collections.emptyMap();
     }
 
     protected WindowLifecycleListener<Tuple> newWindowLifecycleListener() {
diff --git a/storm-client/src/py/storm/DistributedRPC-remote b/storm-client/src/py/storm/DistributedRPC-remote
index c0ebdb6..8a04566 100644
--- a/storm-client/src/py/storm/DistributedRPC-remote
+++ b/storm-client/src/py/storm/DistributedRPC-remote
@@ -18,7 +18,7 @@
 
 #!/usr/bin/env python
 #
-# Autogenerated by Thrift Compiler (0.12.0)
+# Autogenerated by Thrift Compiler (0.13.0)
 #
 # DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
 #
diff --git a/storm-client/src/py/storm/DistributedRPC.py b/storm-client/src/py/storm/DistributedRPC.py
index 4d91b1d..24be6a1 100644
--- a/storm-client/src/py/storm/DistributedRPC.py
+++ b/storm-client/src/py/storm/DistributedRPC.py
@@ -17,7 +17,7 @@
 # limitations under the License.
 
 #
-# Autogenerated by Thrift Compiler (0.12.0)
+# Autogenerated by Thrift Compiler (0.13.0)
 #
 # DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
 #
@@ -98,9 +98,15 @@
         self._handler = handler
         self._processMap = {}
         self._processMap["execute"] = Processor.process_execute
+        self._on_message_begin = None
+
+    def on_message_begin(self, func):
+        self._on_message_begin = func
 
     def process(self, iprot, oprot):
         (name, type, seqid) = iprot.readMessageBegin()
+        if self._on_message_begin:
+            self._on_message_begin(name, type, seqid)
         if name not in self._processMap:
             iprot.skip(TType.STRUCT)
             iprot.readMessageEnd()
diff --git a/storm-client/src/py/storm/DistributedRPCInvocations-remote b/storm-client/src/py/storm/DistributedRPCInvocations-remote
index 6535546..2213e35 100644
--- a/storm-client/src/py/storm/DistributedRPCInvocations-remote
+++ b/storm-client/src/py/storm/DistributedRPCInvocations-remote
@@ -18,7 +18,7 @@
 
 #!/usr/bin/env python
 #
-# Autogenerated by Thrift Compiler (0.12.0)
+# Autogenerated by Thrift Compiler (0.13.0)
 #
 # DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
 #
diff --git a/storm-client/src/py/storm/DistributedRPCInvocations.py b/storm-client/src/py/storm/DistributedRPCInvocations.py
index d58e70c..bf71173 100644
--- a/storm-client/src/py/storm/DistributedRPCInvocations.py
+++ b/storm-client/src/py/storm/DistributedRPCInvocations.py
@@ -17,7 +17,7 @@
 # limitations under the License.
 
 #
-# Autogenerated by Thrift Compiler (0.12.0)
+# Autogenerated by Thrift Compiler (0.13.0)
 #
 # DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
 #
@@ -222,9 +222,15 @@
         self._processMap["fetchRequest"] = Processor.process_fetchRequest
         self._processMap["failRequest"] = Processor.process_failRequest
         self._processMap["failRequestV2"] = Processor.process_failRequestV2
+        self._on_message_begin = None
+
+    def on_message_begin(self, func):
+        self._on_message_begin = func
 
     def process(self, iprot, oprot):
         (name, type, seqid) = iprot.readMessageBegin()
+        if self._on_message_begin:
+            self._on_message_begin(name, type, seqid)
         if name not in self._processMap:
             iprot.skip(TType.STRUCT)
             iprot.readMessageEnd()
diff --git a/storm-client/src/py/storm/Nimbus-remote b/storm-client/src/py/storm/Nimbus-remote
index f1e56d5..6b104dc 100644
--- a/storm-client/src/py/storm/Nimbus-remote
+++ b/storm-client/src/py/storm/Nimbus-remote
@@ -18,7 +18,7 @@
 
 #!/usr/bin/env python
 #
-# Autogenerated by Thrift Compiler (0.12.0)
+# Autogenerated by Thrift Compiler (0.13.0)
 #
 # DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
 #
diff --git a/storm-client/src/py/storm/Nimbus.py b/storm-client/src/py/storm/Nimbus.py
index ea55747..e2dac26 100644
--- a/storm-client/src/py/storm/Nimbus.py
+++ b/storm-client/src/py/storm/Nimbus.py
@@ -17,7 +17,7 @@
 # limitations under the License.
 
 #
-# Autogenerated by Thrift Compiler (0.12.0)
+# Autogenerated by Thrift Compiler (0.13.0)
 #
 # DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
 #
@@ -2286,9 +2286,15 @@
         self._processMap["sendSupervisorWorkerHeartbeat"] = Processor.process_sendSupervisorWorkerHeartbeat
         self._processMap["processWorkerMetrics"] = Processor.process_processWorkerMetrics
         self._processMap["isRemoteBlobExists"] = Processor.process_isRemoteBlobExists
+        self._on_message_begin = None
+
+    def on_message_begin(self, func):
+        self._on_message_begin = func
 
     def process(self, iprot, oprot):
         (name, type, seqid) = iprot.readMessageBegin()
+        if self._on_message_begin:
+            self._on_message_begin(name, type, seqid)
         if name not in self._processMap:
             iprot.skip(TType.STRUCT)
             iprot.readMessageEnd()
@@ -5438,11 +5444,11 @@
             if fid == 0:
                 if ftype == TType.LIST:
                     self.success = []
-                    (_etype824, _size821) = iprot.readListBegin()
-                    for _i825 in range(_size821):
-                        _elem826 = ProfileRequest()
-                        _elem826.read(iprot)
-                        self.success.append(_elem826)
+                    (_etype869, _size866) = iprot.readListBegin()
+                    for _i870 in range(_size866):
+                        _elem871 = ProfileRequest()
+                        _elem871.read(iprot)
+                        self.success.append(_elem871)
                     iprot.readListEnd()
                 else:
                     iprot.skip(ftype)
@@ -5459,8 +5465,8 @@
         if self.success is not None:
             oprot.writeFieldBegin('success', TType.LIST, 0)
             oprot.writeListBegin(TType.STRUCT, len(self.success))
-            for iter827 in self.success:
-                iter827.write(oprot)
+            for iter872 in self.success:
+                iter872.write(oprot)
             oprot.writeListEnd()
             oprot.writeFieldEnd()
         oprot.writeFieldStop()
@@ -10148,11 +10154,11 @@
             if fid == 0:
                 if ftype == TType.LIST:
                     self.success = []
-                    (_etype831, _size828) = iprot.readListBegin()
-                    for _i832 in range(_size828):
-                        _elem833 = OwnerResourceSummary()
-                        _elem833.read(iprot)
-                        self.success.append(_elem833)
+                    (_etype876, _size873) = iprot.readListBegin()
+                    for _i877 in range(_size873):
+                        _elem878 = OwnerResourceSummary()
+                        _elem878.read(iprot)
+                        self.success.append(_elem878)
                     iprot.readListEnd()
                 else:
                     iprot.skip(ftype)
@@ -10175,8 +10181,8 @@
         if self.success is not None:
             oprot.writeFieldBegin('success', TType.LIST, 0)
             oprot.writeListBegin(TType.STRUCT, len(self.success))
-            for iter834 in self.success:
-                iter834.write(oprot)
+            for iter879 in self.success:
+                iter879.write(oprot)
             oprot.writeListEnd()
             oprot.writeFieldEnd()
         if self.aze is not None:
diff --git a/storm-client/src/py/storm/Supervisor-remote b/storm-client/src/py/storm/Supervisor-remote
index 4f0bc2f..b08c4fd 100644
--- a/storm-client/src/py/storm/Supervisor-remote
+++ b/storm-client/src/py/storm/Supervisor-remote
@@ -18,7 +18,7 @@
 
 #!/usr/bin/env python
 #
-# Autogenerated by Thrift Compiler (0.12.0)
+# Autogenerated by Thrift Compiler (0.13.0)
 #
 # DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
 #
diff --git a/storm-client/src/py/storm/Supervisor.py b/storm-client/src/py/storm/Supervisor.py
index 5ade6cc..72d4529 100644
--- a/storm-client/src/py/storm/Supervisor.py
+++ b/storm-client/src/py/storm/Supervisor.py
@@ -17,7 +17,7 @@
 # limitations under the License.
 
 #
-# Autogenerated by Thrift Compiler (0.12.0)
+# Autogenerated by Thrift Compiler (0.13.0)
 #
 # DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
 #
@@ -189,9 +189,15 @@
         self._processMap["sendSupervisorAssignments"] = Processor.process_sendSupervisorAssignments
         self._processMap["getLocalAssignmentForStorm"] = Processor.process_getLocalAssignmentForStorm
         self._processMap["sendSupervisorWorkerHeartbeat"] = Processor.process_sendSupervisorWorkerHeartbeat
+        self._on_message_begin = None
+
+    def on_message_begin(self, func):
+        self._on_message_begin = func
 
     def process(self, iprot, oprot):
         (name, type, seqid) = iprot.readMessageBegin()
+        if self._on_message_begin:
+            self._on_message_begin(name, type, seqid)
         if name not in self._processMap:
             iprot.skip(TType.STRUCT)
             iprot.readMessageEnd()
diff --git a/storm-client/src/py/storm/constants.py b/storm-client/src/py/storm/constants.py
index 8d8ee2c..2bfb61e 100644
--- a/storm-client/src/py/storm/constants.py
+++ b/storm-client/src/py/storm/constants.py
@@ -17,7 +17,7 @@
 # limitations under the License.
 
 #
-# Autogenerated by Thrift Compiler (0.12.0)
+# Autogenerated by Thrift Compiler (0.13.0)
 #
 # DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
 #
diff --git a/storm-client/src/py/storm/ttypes.py b/storm-client/src/py/storm/ttypes.py
index 166e604..2bd65b8 100644
--- a/storm-client/src/py/storm/ttypes.py
+++ b/storm-client/src/py/storm/ttypes.py
@@ -17,7 +17,7 @@
 # limitations under the License.
 
 #
-# Autogenerated by Thrift Compiler (0.12.0)
+# Autogenerated by Thrift Compiler (0.13.0)
 #
 # DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
 #
@@ -2084,11 +2084,13 @@
      - assigned_memonheap
      - assigned_memoffheap
      - assigned_cpu
+     - requested_generic_resources
+     - assigned_generic_resources
 
     """
 
 
-    def __init__(self, id=None, name=None, num_tasks=None, num_executors=None, num_workers=None, uptime_secs=None, status=None, storm_version=None, topology_version=None, sched_status=None, owner=None, replication_count=None, requested_memonheap=None, requested_memoffheap=None, requested_cpu=None, assigned_memonheap=None, assigned_memoffheap=None, assigned_cpu=None,):
+    def __init__(self, id=None, name=None, num_tasks=None, num_executors=None, num_workers=None, uptime_secs=None, status=None, storm_version=None, topology_version=None, sched_status=None, owner=None, replication_count=None, requested_memonheap=None, requested_memoffheap=None, requested_cpu=None, assigned_memonheap=None, assigned_memoffheap=None, assigned_cpu=None, requested_generic_resources=None, assigned_generic_resources=None,):
         self.id = id
         self.name = name
         self.num_tasks = num_tasks
@@ -2107,6 +2109,8 @@
         self.assigned_memonheap = assigned_memonheap
         self.assigned_memoffheap = assigned_memoffheap
         self.assigned_cpu = assigned_cpu
+        self.requested_generic_resources = requested_generic_resources
+        self.assigned_generic_resources = assigned_generic_resources
 
     def read(self, iprot):
         if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None:
@@ -2207,6 +2211,28 @@
                     self.assigned_cpu = iprot.readDouble()
                 else:
                     iprot.skip(ftype)
+            elif fid == 527:
+                if ftype == TType.MAP:
+                    self.requested_generic_resources = {}
+                    (_ktype113, _vtype114, _size112) = iprot.readMapBegin()
+                    for _i116 in range(_size112):
+                        _key117 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val118 = iprot.readDouble()
+                        self.requested_generic_resources[_key117] = _val118
+                    iprot.readMapEnd()
+                else:
+                    iprot.skip(ftype)
+            elif fid == 528:
+                if ftype == TType.MAP:
+                    self.assigned_generic_resources = {}
+                    (_ktype120, _vtype121, _size119) = iprot.readMapBegin()
+                    for _i123 in range(_size119):
+                        _key124 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val125 = iprot.readDouble()
+                        self.assigned_generic_resources[_key124] = _val125
+                    iprot.readMapEnd()
+                else:
+                    iprot.skip(ftype)
             else:
                 iprot.skip(ftype)
             iprot.readFieldEnd()
@@ -2289,6 +2315,22 @@
             oprot.writeFieldBegin('assigned_cpu', TType.DOUBLE, 526)
             oprot.writeDouble(self.assigned_cpu)
             oprot.writeFieldEnd()
+        if self.requested_generic_resources is not None:
+            oprot.writeFieldBegin('requested_generic_resources', TType.MAP, 527)
+            oprot.writeMapBegin(TType.STRING, TType.DOUBLE, len(self.requested_generic_resources))
+            for kiter126, viter127 in self.requested_generic_resources.items():
+                oprot.writeString(kiter126.encode('utf-8') if sys.version_info[0] == 2 else kiter126)
+                oprot.writeDouble(viter127)
+            oprot.writeMapEnd()
+            oprot.writeFieldEnd()
+        if self.assigned_generic_resources is not None:
+            oprot.writeFieldBegin('assigned_generic_resources', TType.MAP, 528)
+            oprot.writeMapBegin(TType.STRING, TType.DOUBLE, len(self.assigned_generic_resources))
+            for kiter128, viter129 in self.assigned_generic_resources.items():
+                oprot.writeString(kiter128.encode('utf-8') if sys.version_info[0] == 2 else kiter128)
+                oprot.writeDouble(viter129)
+            oprot.writeMapEnd()
+            oprot.writeFieldEnd()
         oprot.writeFieldStop()
         oprot.writeStructEnd()
 
@@ -2336,11 +2378,12 @@
      - fragmented_mem
      - fragmented_cpu
      - blacklisted
+     - used_generic_resources
 
     """
 
 
-    def __init__(self, host=None, uptime_secs=None, num_workers=None, num_used_workers=None, supervisor_id=None, version="VERSION_NOT_PROVIDED", total_resources=None, used_mem=None, used_cpu=None, fragmented_mem=None, fragmented_cpu=None, blacklisted=None,):
+    def __init__(self, host=None, uptime_secs=None, num_workers=None, num_used_workers=None, supervisor_id=None, version="VERSION_NOT_PROVIDED", total_resources=None, used_mem=None, used_cpu=None, fragmented_mem=None, fragmented_cpu=None, blacklisted=None, used_generic_resources=None,):
         self.host = host
         self.uptime_secs = uptime_secs
         self.num_workers = num_workers
@@ -2353,6 +2396,7 @@
         self.fragmented_mem = fragmented_mem
         self.fragmented_cpu = fragmented_cpu
         self.blacklisted = blacklisted
+        self.used_generic_resources = used_generic_resources
 
     def read(self, iprot):
         if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None:
@@ -2396,11 +2440,11 @@
             elif fid == 7:
                 if ftype == TType.MAP:
                     self.total_resources = {}
-                    (_ktype113, _vtype114, _size112) = iprot.readMapBegin()
-                    for _i116 in range(_size112):
-                        _key117 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val118 = iprot.readDouble()
-                        self.total_resources[_key117] = _val118
+                    (_ktype131, _vtype132, _size130) = iprot.readMapBegin()
+                    for _i134 in range(_size130):
+                        _key135 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val136 = iprot.readDouble()
+                        self.total_resources[_key135] = _val136
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
@@ -2429,6 +2473,17 @@
                     self.blacklisted = iprot.readBool()
                 else:
                     iprot.skip(ftype)
+            elif fid == 13:
+                if ftype == TType.MAP:
+                    self.used_generic_resources = {}
+                    (_ktype138, _vtype139, _size137) = iprot.readMapBegin()
+                    for _i141 in range(_size137):
+                        _key142 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val143 = iprot.readDouble()
+                        self.used_generic_resources[_key142] = _val143
+                    iprot.readMapEnd()
+                else:
+                    iprot.skip(ftype)
             else:
                 iprot.skip(ftype)
             iprot.readFieldEnd()
@@ -2466,9 +2521,9 @@
         if self.total_resources is not None:
             oprot.writeFieldBegin('total_resources', TType.MAP, 7)
             oprot.writeMapBegin(TType.STRING, TType.DOUBLE, len(self.total_resources))
-            for kiter119, viter120 in self.total_resources.items():
-                oprot.writeString(kiter119.encode('utf-8') if sys.version_info[0] == 2 else kiter119)
-                oprot.writeDouble(viter120)
+            for kiter144, viter145 in self.total_resources.items():
+                oprot.writeString(kiter144.encode('utf-8') if sys.version_info[0] == 2 else kiter144)
+                oprot.writeDouble(viter145)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.used_mem is not None:
@@ -2491,6 +2546,14 @@
             oprot.writeFieldBegin('blacklisted', TType.BOOL, 12)
             oprot.writeBool(self.blacklisted)
             oprot.writeFieldEnd()
+        if self.used_generic_resources is not None:
+            oprot.writeFieldBegin('used_generic_resources', TType.MAP, 13)
+            oprot.writeMapBegin(TType.STRING, TType.DOUBLE, len(self.used_generic_resources))
+            for kiter146, viter147 in self.used_generic_resources.items():
+                oprot.writeString(kiter146.encode('utf-8') if sys.version_info[0] == 2 else kiter146)
+                oprot.writeDouble(viter147)
+            oprot.writeMapEnd()
+            oprot.writeFieldEnd()
         oprot.writeFieldStop()
         oprot.writeStructEnd()
 
@@ -2657,33 +2720,33 @@
             if fid == 1:
                 if ftype == TType.LIST:
                     self.supervisors = []
-                    (_etype124, _size121) = iprot.readListBegin()
-                    for _i125 in range(_size121):
-                        _elem126 = SupervisorSummary()
-                        _elem126.read(iprot)
-                        self.supervisors.append(_elem126)
+                    (_etype151, _size148) = iprot.readListBegin()
+                    for _i152 in range(_size148):
+                        _elem153 = SupervisorSummary()
+                        _elem153.read(iprot)
+                        self.supervisors.append(_elem153)
                     iprot.readListEnd()
                 else:
                     iprot.skip(ftype)
             elif fid == 3:
                 if ftype == TType.LIST:
                     self.topologies = []
-                    (_etype130, _size127) = iprot.readListBegin()
-                    for _i131 in range(_size127):
-                        _elem132 = TopologySummary()
-                        _elem132.read(iprot)
-                        self.topologies.append(_elem132)
+                    (_etype157, _size154) = iprot.readListBegin()
+                    for _i158 in range(_size154):
+                        _elem159 = TopologySummary()
+                        _elem159.read(iprot)
+                        self.topologies.append(_elem159)
                     iprot.readListEnd()
                 else:
                     iprot.skip(ftype)
             elif fid == 4:
                 if ftype == TType.LIST:
                     self.nimbuses = []
-                    (_etype136, _size133) = iprot.readListBegin()
-                    for _i137 in range(_size133):
-                        _elem138 = NimbusSummary()
-                        _elem138.read(iprot)
-                        self.nimbuses.append(_elem138)
+                    (_etype163, _size160) = iprot.readListBegin()
+                    for _i164 in range(_size160):
+                        _elem165 = NimbusSummary()
+                        _elem165.read(iprot)
+                        self.nimbuses.append(_elem165)
                     iprot.readListEnd()
                 else:
                     iprot.skip(ftype)
@@ -2700,22 +2763,22 @@
         if self.supervisors is not None:
             oprot.writeFieldBegin('supervisors', TType.LIST, 1)
             oprot.writeListBegin(TType.STRUCT, len(self.supervisors))
-            for iter139 in self.supervisors:
-                iter139.write(oprot)
+            for iter166 in self.supervisors:
+                iter166.write(oprot)
             oprot.writeListEnd()
             oprot.writeFieldEnd()
         if self.topologies is not None:
             oprot.writeFieldBegin('topologies', TType.LIST, 3)
             oprot.writeListBegin(TType.STRUCT, len(self.topologies))
-            for iter140 in self.topologies:
-                iter140.write(oprot)
+            for iter167 in self.topologies:
+                iter167.write(oprot)
             oprot.writeListEnd()
             oprot.writeFieldEnd()
         if self.nimbuses is not None:
             oprot.writeFieldBegin('nimbuses', TType.LIST, 4)
             oprot.writeListBegin(TType.STRUCT, len(self.nimbuses))
-            for iter141 in self.nimbuses:
-                iter141.write(oprot)
+            for iter168 in self.nimbuses:
+                iter168.write(oprot)
             oprot.writeListEnd()
             oprot.writeFieldEnd()
         oprot.writeFieldStop()
@@ -2867,90 +2930,90 @@
             if fid == 1:
                 if ftype == TType.MAP:
                     self.acked = {}
-                    (_ktype143, _vtype144, _size142) = iprot.readMapBegin()
-                    for _i146 in range(_size142):
-                        _key147 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val148 = {}
-                        (_ktype150, _vtype151, _size149) = iprot.readMapBegin()
-                        for _i153 in range(_size149):
-                            _key154 = GlobalStreamId()
-                            _key154.read(iprot)
-                            _val155 = iprot.readI64()
-                            _val148[_key154] = _val155
+                    (_ktype170, _vtype171, _size169) = iprot.readMapBegin()
+                    for _i173 in range(_size169):
+                        _key174 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val175 = {}
+                        (_ktype177, _vtype178, _size176) = iprot.readMapBegin()
+                        for _i180 in range(_size176):
+                            _key181 = GlobalStreamId()
+                            _key181.read(iprot)
+                            _val182 = iprot.readI64()
+                            _val175[_key181] = _val182
                         iprot.readMapEnd()
-                        self.acked[_key147] = _val148
+                        self.acked[_key174] = _val175
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
             elif fid == 2:
                 if ftype == TType.MAP:
                     self.failed = {}
-                    (_ktype157, _vtype158, _size156) = iprot.readMapBegin()
-                    for _i160 in range(_size156):
-                        _key161 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val162 = {}
-                        (_ktype164, _vtype165, _size163) = iprot.readMapBegin()
-                        for _i167 in range(_size163):
-                            _key168 = GlobalStreamId()
-                            _key168.read(iprot)
-                            _val169 = iprot.readI64()
-                            _val162[_key168] = _val169
+                    (_ktype184, _vtype185, _size183) = iprot.readMapBegin()
+                    for _i187 in range(_size183):
+                        _key188 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val189 = {}
+                        (_ktype191, _vtype192, _size190) = iprot.readMapBegin()
+                        for _i194 in range(_size190):
+                            _key195 = GlobalStreamId()
+                            _key195.read(iprot)
+                            _val196 = iprot.readI64()
+                            _val189[_key195] = _val196
                         iprot.readMapEnd()
-                        self.failed[_key161] = _val162
+                        self.failed[_key188] = _val189
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
             elif fid == 3:
                 if ftype == TType.MAP:
                     self.process_ms_avg = {}
-                    (_ktype171, _vtype172, _size170) = iprot.readMapBegin()
-                    for _i174 in range(_size170):
-                        _key175 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val176 = {}
-                        (_ktype178, _vtype179, _size177) = iprot.readMapBegin()
-                        for _i181 in range(_size177):
-                            _key182 = GlobalStreamId()
-                            _key182.read(iprot)
-                            _val183 = iprot.readDouble()
-                            _val176[_key182] = _val183
+                    (_ktype198, _vtype199, _size197) = iprot.readMapBegin()
+                    for _i201 in range(_size197):
+                        _key202 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val203 = {}
+                        (_ktype205, _vtype206, _size204) = iprot.readMapBegin()
+                        for _i208 in range(_size204):
+                            _key209 = GlobalStreamId()
+                            _key209.read(iprot)
+                            _val210 = iprot.readDouble()
+                            _val203[_key209] = _val210
                         iprot.readMapEnd()
-                        self.process_ms_avg[_key175] = _val176
+                        self.process_ms_avg[_key202] = _val203
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
             elif fid == 4:
                 if ftype == TType.MAP:
                     self.executed = {}
-                    (_ktype185, _vtype186, _size184) = iprot.readMapBegin()
-                    for _i188 in range(_size184):
-                        _key189 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val190 = {}
-                        (_ktype192, _vtype193, _size191) = iprot.readMapBegin()
-                        for _i195 in range(_size191):
-                            _key196 = GlobalStreamId()
-                            _key196.read(iprot)
-                            _val197 = iprot.readI64()
-                            _val190[_key196] = _val197
+                    (_ktype212, _vtype213, _size211) = iprot.readMapBegin()
+                    for _i215 in range(_size211):
+                        _key216 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val217 = {}
+                        (_ktype219, _vtype220, _size218) = iprot.readMapBegin()
+                        for _i222 in range(_size218):
+                            _key223 = GlobalStreamId()
+                            _key223.read(iprot)
+                            _val224 = iprot.readI64()
+                            _val217[_key223] = _val224
                         iprot.readMapEnd()
-                        self.executed[_key189] = _val190
+                        self.executed[_key216] = _val217
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
             elif fid == 5:
                 if ftype == TType.MAP:
                     self.execute_ms_avg = {}
-                    (_ktype199, _vtype200, _size198) = iprot.readMapBegin()
-                    for _i202 in range(_size198):
-                        _key203 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val204 = {}
-                        (_ktype206, _vtype207, _size205) = iprot.readMapBegin()
-                        for _i209 in range(_size205):
-                            _key210 = GlobalStreamId()
-                            _key210.read(iprot)
-                            _val211 = iprot.readDouble()
-                            _val204[_key210] = _val211
+                    (_ktype226, _vtype227, _size225) = iprot.readMapBegin()
+                    for _i229 in range(_size225):
+                        _key230 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val231 = {}
+                        (_ktype233, _vtype234, _size232) = iprot.readMapBegin()
+                        for _i236 in range(_size232):
+                            _key237 = GlobalStreamId()
+                            _key237.read(iprot)
+                            _val238 = iprot.readDouble()
+                            _val231[_key237] = _val238
                         iprot.readMapEnd()
-                        self.execute_ms_avg[_key203] = _val204
+                        self.execute_ms_avg[_key230] = _val231
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
@@ -2967,60 +3030,60 @@
         if self.acked is not None:
             oprot.writeFieldBegin('acked', TType.MAP, 1)
             oprot.writeMapBegin(TType.STRING, TType.MAP, len(self.acked))
-            for kiter212, viter213 in self.acked.items():
-                oprot.writeString(kiter212.encode('utf-8') if sys.version_info[0] == 2 else kiter212)
-                oprot.writeMapBegin(TType.STRUCT, TType.I64, len(viter213))
-                for kiter214, viter215 in viter213.items():
-                    kiter214.write(oprot)
-                    oprot.writeI64(viter215)
+            for kiter239, viter240 in self.acked.items():
+                oprot.writeString(kiter239.encode('utf-8') if sys.version_info[0] == 2 else kiter239)
+                oprot.writeMapBegin(TType.STRUCT, TType.I64, len(viter240))
+                for kiter241, viter242 in viter240.items():
+                    kiter241.write(oprot)
+                    oprot.writeI64(viter242)
                 oprot.writeMapEnd()
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.failed is not None:
             oprot.writeFieldBegin('failed', TType.MAP, 2)
             oprot.writeMapBegin(TType.STRING, TType.MAP, len(self.failed))
-            for kiter216, viter217 in self.failed.items():
-                oprot.writeString(kiter216.encode('utf-8') if sys.version_info[0] == 2 else kiter216)
-                oprot.writeMapBegin(TType.STRUCT, TType.I64, len(viter217))
-                for kiter218, viter219 in viter217.items():
-                    kiter218.write(oprot)
-                    oprot.writeI64(viter219)
+            for kiter243, viter244 in self.failed.items():
+                oprot.writeString(kiter243.encode('utf-8') if sys.version_info[0] == 2 else kiter243)
+                oprot.writeMapBegin(TType.STRUCT, TType.I64, len(viter244))
+                for kiter245, viter246 in viter244.items():
+                    kiter245.write(oprot)
+                    oprot.writeI64(viter246)
                 oprot.writeMapEnd()
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.process_ms_avg is not None:
             oprot.writeFieldBegin('process_ms_avg', TType.MAP, 3)
             oprot.writeMapBegin(TType.STRING, TType.MAP, len(self.process_ms_avg))
-            for kiter220, viter221 in self.process_ms_avg.items():
-                oprot.writeString(kiter220.encode('utf-8') if sys.version_info[0] == 2 else kiter220)
-                oprot.writeMapBegin(TType.STRUCT, TType.DOUBLE, len(viter221))
-                for kiter222, viter223 in viter221.items():
-                    kiter222.write(oprot)
-                    oprot.writeDouble(viter223)
+            for kiter247, viter248 in self.process_ms_avg.items():
+                oprot.writeString(kiter247.encode('utf-8') if sys.version_info[0] == 2 else kiter247)
+                oprot.writeMapBegin(TType.STRUCT, TType.DOUBLE, len(viter248))
+                for kiter249, viter250 in viter248.items():
+                    kiter249.write(oprot)
+                    oprot.writeDouble(viter250)
                 oprot.writeMapEnd()
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.executed is not None:
             oprot.writeFieldBegin('executed', TType.MAP, 4)
             oprot.writeMapBegin(TType.STRING, TType.MAP, len(self.executed))
-            for kiter224, viter225 in self.executed.items():
-                oprot.writeString(kiter224.encode('utf-8') if sys.version_info[0] == 2 else kiter224)
-                oprot.writeMapBegin(TType.STRUCT, TType.I64, len(viter225))
-                for kiter226, viter227 in viter225.items():
-                    kiter226.write(oprot)
-                    oprot.writeI64(viter227)
+            for kiter251, viter252 in self.executed.items():
+                oprot.writeString(kiter251.encode('utf-8') if sys.version_info[0] == 2 else kiter251)
+                oprot.writeMapBegin(TType.STRUCT, TType.I64, len(viter252))
+                for kiter253, viter254 in viter252.items():
+                    kiter253.write(oprot)
+                    oprot.writeI64(viter254)
                 oprot.writeMapEnd()
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.execute_ms_avg is not None:
             oprot.writeFieldBegin('execute_ms_avg', TType.MAP, 5)
             oprot.writeMapBegin(TType.STRING, TType.MAP, len(self.execute_ms_avg))
-            for kiter228, viter229 in self.execute_ms_avg.items():
-                oprot.writeString(kiter228.encode('utf-8') if sys.version_info[0] == 2 else kiter228)
-                oprot.writeMapBegin(TType.STRUCT, TType.DOUBLE, len(viter229))
-                for kiter230, viter231 in viter229.items():
-                    kiter230.write(oprot)
-                    oprot.writeDouble(viter231)
+            for kiter255, viter256 in self.execute_ms_avg.items():
+                oprot.writeString(kiter255.encode('utf-8') if sys.version_info[0] == 2 else kiter255)
+                oprot.writeMapBegin(TType.STRUCT, TType.DOUBLE, len(viter256))
+                for kiter257, viter258 in viter256.items():
+                    kiter257.write(oprot)
+                    oprot.writeDouble(viter258)
                 oprot.writeMapEnd()
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
@@ -3079,51 +3142,51 @@
             if fid == 1:
                 if ftype == TType.MAP:
                     self.acked = {}
-                    (_ktype233, _vtype234, _size232) = iprot.readMapBegin()
-                    for _i236 in range(_size232):
-                        _key237 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val238 = {}
-                        (_ktype240, _vtype241, _size239) = iprot.readMapBegin()
-                        for _i243 in range(_size239):
-                            _key244 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                            _val245 = iprot.readI64()
-                            _val238[_key244] = _val245
+                    (_ktype260, _vtype261, _size259) = iprot.readMapBegin()
+                    for _i263 in range(_size259):
+                        _key264 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val265 = {}
+                        (_ktype267, _vtype268, _size266) = iprot.readMapBegin()
+                        for _i270 in range(_size266):
+                            _key271 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                            _val272 = iprot.readI64()
+                            _val265[_key271] = _val272
                         iprot.readMapEnd()
-                        self.acked[_key237] = _val238
+                        self.acked[_key264] = _val265
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
             elif fid == 2:
                 if ftype == TType.MAP:
                     self.failed = {}
-                    (_ktype247, _vtype248, _size246) = iprot.readMapBegin()
-                    for _i250 in range(_size246):
-                        _key251 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val252 = {}
-                        (_ktype254, _vtype255, _size253) = iprot.readMapBegin()
-                        for _i257 in range(_size253):
-                            _key258 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                            _val259 = iprot.readI64()
-                            _val252[_key258] = _val259
+                    (_ktype274, _vtype275, _size273) = iprot.readMapBegin()
+                    for _i277 in range(_size273):
+                        _key278 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val279 = {}
+                        (_ktype281, _vtype282, _size280) = iprot.readMapBegin()
+                        for _i284 in range(_size280):
+                            _key285 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                            _val286 = iprot.readI64()
+                            _val279[_key285] = _val286
                         iprot.readMapEnd()
-                        self.failed[_key251] = _val252
+                        self.failed[_key278] = _val279
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
             elif fid == 3:
                 if ftype == TType.MAP:
                     self.complete_ms_avg = {}
-                    (_ktype261, _vtype262, _size260) = iprot.readMapBegin()
-                    for _i264 in range(_size260):
-                        _key265 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val266 = {}
-                        (_ktype268, _vtype269, _size267) = iprot.readMapBegin()
-                        for _i271 in range(_size267):
-                            _key272 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                            _val273 = iprot.readDouble()
-                            _val266[_key272] = _val273
+                    (_ktype288, _vtype289, _size287) = iprot.readMapBegin()
+                    for _i291 in range(_size287):
+                        _key292 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val293 = {}
+                        (_ktype295, _vtype296, _size294) = iprot.readMapBegin()
+                        for _i298 in range(_size294):
+                            _key299 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                            _val300 = iprot.readDouble()
+                            _val293[_key299] = _val300
                         iprot.readMapEnd()
-                        self.complete_ms_avg[_key265] = _val266
+                        self.complete_ms_avg[_key292] = _val293
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
@@ -3140,36 +3203,36 @@
         if self.acked is not None:
             oprot.writeFieldBegin('acked', TType.MAP, 1)
             oprot.writeMapBegin(TType.STRING, TType.MAP, len(self.acked))
-            for kiter274, viter275 in self.acked.items():
-                oprot.writeString(kiter274.encode('utf-8') if sys.version_info[0] == 2 else kiter274)
-                oprot.writeMapBegin(TType.STRING, TType.I64, len(viter275))
-                for kiter276, viter277 in viter275.items():
-                    oprot.writeString(kiter276.encode('utf-8') if sys.version_info[0] == 2 else kiter276)
-                    oprot.writeI64(viter277)
+            for kiter301, viter302 in self.acked.items():
+                oprot.writeString(kiter301.encode('utf-8') if sys.version_info[0] == 2 else kiter301)
+                oprot.writeMapBegin(TType.STRING, TType.I64, len(viter302))
+                for kiter303, viter304 in viter302.items():
+                    oprot.writeString(kiter303.encode('utf-8') if sys.version_info[0] == 2 else kiter303)
+                    oprot.writeI64(viter304)
                 oprot.writeMapEnd()
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.failed is not None:
             oprot.writeFieldBegin('failed', TType.MAP, 2)
             oprot.writeMapBegin(TType.STRING, TType.MAP, len(self.failed))
-            for kiter278, viter279 in self.failed.items():
-                oprot.writeString(kiter278.encode('utf-8') if sys.version_info[0] == 2 else kiter278)
-                oprot.writeMapBegin(TType.STRING, TType.I64, len(viter279))
-                for kiter280, viter281 in viter279.items():
-                    oprot.writeString(kiter280.encode('utf-8') if sys.version_info[0] == 2 else kiter280)
-                    oprot.writeI64(viter281)
+            for kiter305, viter306 in self.failed.items():
+                oprot.writeString(kiter305.encode('utf-8') if sys.version_info[0] == 2 else kiter305)
+                oprot.writeMapBegin(TType.STRING, TType.I64, len(viter306))
+                for kiter307, viter308 in viter306.items():
+                    oprot.writeString(kiter307.encode('utf-8') if sys.version_info[0] == 2 else kiter307)
+                    oprot.writeI64(viter308)
                 oprot.writeMapEnd()
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.complete_ms_avg is not None:
             oprot.writeFieldBegin('complete_ms_avg', TType.MAP, 3)
             oprot.writeMapBegin(TType.STRING, TType.MAP, len(self.complete_ms_avg))
-            for kiter282, viter283 in self.complete_ms_avg.items():
-                oprot.writeString(kiter282.encode('utf-8') if sys.version_info[0] == 2 else kiter282)
-                oprot.writeMapBegin(TType.STRING, TType.DOUBLE, len(viter283))
-                for kiter284, viter285 in viter283.items():
-                    oprot.writeString(kiter284.encode('utf-8') if sys.version_info[0] == 2 else kiter284)
-                    oprot.writeDouble(viter285)
+            for kiter309, viter310 in self.complete_ms_avg.items():
+                oprot.writeString(kiter309.encode('utf-8') if sys.version_info[0] == 2 else kiter309)
+                oprot.writeMapBegin(TType.STRING, TType.DOUBLE, len(viter310))
+                for kiter311, viter312 in viter310.items():
+                    oprot.writeString(kiter311.encode('utf-8') if sys.version_info[0] == 2 else kiter311)
+                    oprot.writeDouble(viter312)
                 oprot.writeMapEnd()
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
@@ -3296,34 +3359,34 @@
             if fid == 1:
                 if ftype == TType.MAP:
                     self.emitted = {}
-                    (_ktype287, _vtype288, _size286) = iprot.readMapBegin()
-                    for _i290 in range(_size286):
-                        _key291 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val292 = {}
-                        (_ktype294, _vtype295, _size293) = iprot.readMapBegin()
-                        for _i297 in range(_size293):
-                            _key298 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                            _val299 = iprot.readI64()
-                            _val292[_key298] = _val299
+                    (_ktype314, _vtype315, _size313) = iprot.readMapBegin()
+                    for _i317 in range(_size313):
+                        _key318 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val319 = {}
+                        (_ktype321, _vtype322, _size320) = iprot.readMapBegin()
+                        for _i324 in range(_size320):
+                            _key325 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                            _val326 = iprot.readI64()
+                            _val319[_key325] = _val326
                         iprot.readMapEnd()
-                        self.emitted[_key291] = _val292
+                        self.emitted[_key318] = _val319
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
             elif fid == 2:
                 if ftype == TType.MAP:
                     self.transferred = {}
-                    (_ktype301, _vtype302, _size300) = iprot.readMapBegin()
-                    for _i304 in range(_size300):
-                        _key305 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val306 = {}
-                        (_ktype308, _vtype309, _size307) = iprot.readMapBegin()
-                        for _i311 in range(_size307):
-                            _key312 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                            _val313 = iprot.readI64()
-                            _val306[_key312] = _val313
+                    (_ktype328, _vtype329, _size327) = iprot.readMapBegin()
+                    for _i331 in range(_size327):
+                        _key332 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val333 = {}
+                        (_ktype335, _vtype336, _size334) = iprot.readMapBegin()
+                        for _i338 in range(_size334):
+                            _key339 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                            _val340 = iprot.readI64()
+                            _val333[_key339] = _val340
                         iprot.readMapEnd()
-                        self.transferred[_key305] = _val306
+                        self.transferred[_key332] = _val333
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
@@ -3351,24 +3414,24 @@
         if self.emitted is not None:
             oprot.writeFieldBegin('emitted', TType.MAP, 1)
             oprot.writeMapBegin(TType.STRING, TType.MAP, len(self.emitted))
-            for kiter314, viter315 in self.emitted.items():
-                oprot.writeString(kiter314.encode('utf-8') if sys.version_info[0] == 2 else kiter314)
-                oprot.writeMapBegin(TType.STRING, TType.I64, len(viter315))
-                for kiter316, viter317 in viter315.items():
-                    oprot.writeString(kiter316.encode('utf-8') if sys.version_info[0] == 2 else kiter316)
-                    oprot.writeI64(viter317)
+            for kiter341, viter342 in self.emitted.items():
+                oprot.writeString(kiter341.encode('utf-8') if sys.version_info[0] == 2 else kiter341)
+                oprot.writeMapBegin(TType.STRING, TType.I64, len(viter342))
+                for kiter343, viter344 in viter342.items():
+                    oprot.writeString(kiter343.encode('utf-8') if sys.version_info[0] == 2 else kiter343)
+                    oprot.writeI64(viter344)
                 oprot.writeMapEnd()
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.transferred is not None:
             oprot.writeFieldBegin('transferred', TType.MAP, 2)
             oprot.writeMapBegin(TType.STRING, TType.MAP, len(self.transferred))
-            for kiter318, viter319 in self.transferred.items():
-                oprot.writeString(kiter318.encode('utf-8') if sys.version_info[0] == 2 else kiter318)
-                oprot.writeMapBegin(TType.STRING, TType.I64, len(viter319))
-                for kiter320, viter321 in viter319.items():
-                    oprot.writeString(kiter320.encode('utf-8') if sys.version_info[0] == 2 else kiter320)
-                    oprot.writeI64(viter321)
+            for kiter345, viter346 in self.transferred.items():
+                oprot.writeString(kiter345.encode('utf-8') if sys.version_info[0] == 2 else kiter345)
+                oprot.writeMapBegin(TType.STRING, TType.I64, len(viter346))
+                for kiter347, viter348 in viter346.items():
+                    oprot.writeString(kiter347.encode('utf-8') if sys.version_info[0] == 2 else kiter347)
+                    oprot.writeI64(viter348)
                 oprot.writeMapEnd()
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
@@ -3740,11 +3803,11 @@
             elif fid == 4:
                 if ftype == TType.LIST:
                     self.executors = []
-                    (_etype325, _size322) = iprot.readListBegin()
-                    for _i326 in range(_size322):
-                        _elem327 = ExecutorSummary()
-                        _elem327.read(iprot)
-                        self.executors.append(_elem327)
+                    (_etype352, _size349) = iprot.readListBegin()
+                    for _i353 in range(_size349):
+                        _elem354 = ExecutorSummary()
+                        _elem354.read(iprot)
+                        self.executors.append(_elem354)
                     iprot.readListEnd()
                 else:
                     iprot.skip(ftype)
@@ -3756,29 +3819,29 @@
             elif fid == 6:
                 if ftype == TType.MAP:
                     self.errors = {}
-                    (_ktype329, _vtype330, _size328) = iprot.readMapBegin()
-                    for _i332 in range(_size328):
-                        _key333 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val334 = []
-                        (_etype338, _size335) = iprot.readListBegin()
-                        for _i339 in range(_size335):
-                            _elem340 = ErrorInfo()
-                            _elem340.read(iprot)
-                            _val334.append(_elem340)
+                    (_ktype356, _vtype357, _size355) = iprot.readMapBegin()
+                    for _i359 in range(_size355):
+                        _key360 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val361 = []
+                        (_etype365, _size362) = iprot.readListBegin()
+                        for _i366 in range(_size362):
+                            _elem367 = ErrorInfo()
+                            _elem367.read(iprot)
+                            _val361.append(_elem367)
                         iprot.readListEnd()
-                        self.errors[_key333] = _val334
+                        self.errors[_key360] = _val361
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
             elif fid == 7:
                 if ftype == TType.MAP:
                     self.component_debug = {}
-                    (_ktype342, _vtype343, _size341) = iprot.readMapBegin()
-                    for _i345 in range(_size341):
-                        _key346 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val347 = DebugOptions()
-                        _val347.read(iprot)
-                        self.component_debug[_key346] = _val347
+                    (_ktype369, _vtype370, _size368) = iprot.readMapBegin()
+                    for _i372 in range(_size368):
+                        _key373 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val374 = DebugOptions()
+                        _val374.read(iprot)
+                        self.component_debug[_key373] = _val374
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
@@ -3857,8 +3920,8 @@
         if self.executors is not None:
             oprot.writeFieldBegin('executors', TType.LIST, 4)
             oprot.writeListBegin(TType.STRUCT, len(self.executors))
-            for iter348 in self.executors:
-                iter348.write(oprot)
+            for iter375 in self.executors:
+                iter375.write(oprot)
             oprot.writeListEnd()
             oprot.writeFieldEnd()
         if self.status is not None:
@@ -3868,20 +3931,20 @@
         if self.errors is not None:
             oprot.writeFieldBegin('errors', TType.MAP, 6)
             oprot.writeMapBegin(TType.STRING, TType.LIST, len(self.errors))
-            for kiter349, viter350 in self.errors.items():
-                oprot.writeString(kiter349.encode('utf-8') if sys.version_info[0] == 2 else kiter349)
-                oprot.writeListBegin(TType.STRUCT, len(viter350))
-                for iter351 in viter350:
-                    iter351.write(oprot)
+            for kiter376, viter377 in self.errors.items():
+                oprot.writeString(kiter376.encode('utf-8') if sys.version_info[0] == 2 else kiter376)
+                oprot.writeListBegin(TType.STRUCT, len(viter377))
+                for iter378 in viter377:
+                    iter378.write(oprot)
                 oprot.writeListEnd()
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.component_debug is not None:
             oprot.writeFieldBegin('component_debug', TType.MAP, 7)
             oprot.writeMapBegin(TType.STRING, TType.STRUCT, len(self.component_debug))
-            for kiter352, viter353 in self.component_debug.items():
-                oprot.writeString(kiter352.encode('utf-8') if sys.version_info[0] == 2 else kiter352)
-                viter353.write(oprot)
+            for kiter379, viter380 in self.component_debug.items():
+                oprot.writeString(kiter379.encode('utf-8') if sys.version_info[0] == 2 else kiter379)
+                viter380.write(oprot)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.storm_version is not None:
@@ -4019,11 +4082,11 @@
             elif fid == 7:
                 if ftype == TType.MAP:
                     self.resources_map = {}
-                    (_ktype355, _vtype356, _size354) = iprot.readMapBegin()
-                    for _i358 in range(_size354):
-                        _key359 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val360 = iprot.readDouble()
-                        self.resources_map[_key359] = _val360
+                    (_ktype382, _vtype383, _size381) = iprot.readMapBegin()
+                    for _i385 in range(_size381):
+                        _key386 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val387 = iprot.readDouble()
+                        self.resources_map[_key386] = _val387
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
@@ -4064,9 +4127,9 @@
         if self.resources_map is not None:
             oprot.writeFieldBegin('resources_map', TType.MAP, 7)
             oprot.writeMapBegin(TType.STRING, TType.DOUBLE, len(self.resources_map))
-            for kiter361, viter362 in self.resources_map.items():
-                oprot.writeString(kiter361.encode('utf-8') if sys.version_info[0] == 2 else kiter361)
-                oprot.writeDouble(viter362)
+            for kiter388, viter389 in self.resources_map.items():
+                oprot.writeString(kiter388.encode('utf-8') if sys.version_info[0] == 2 else kiter388)
+                oprot.writeDouble(viter389)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         oprot.writeFieldStop()
@@ -4428,55 +4491,55 @@
             if fid == 1:
                 if ftype == TType.MAP:
                     self.window_to_emitted = {}
-                    (_ktype364, _vtype365, _size363) = iprot.readMapBegin()
-                    for _i367 in range(_size363):
-                        _key368 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val369 = iprot.readI64()
-                        self.window_to_emitted[_key368] = _val369
+                    (_ktype391, _vtype392, _size390) = iprot.readMapBegin()
+                    for _i394 in range(_size390):
+                        _key395 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val396 = iprot.readI64()
+                        self.window_to_emitted[_key395] = _val396
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
             elif fid == 2:
                 if ftype == TType.MAP:
                     self.window_to_transferred = {}
-                    (_ktype371, _vtype372, _size370) = iprot.readMapBegin()
-                    for _i374 in range(_size370):
-                        _key375 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val376 = iprot.readI64()
-                        self.window_to_transferred[_key375] = _val376
+                    (_ktype398, _vtype399, _size397) = iprot.readMapBegin()
+                    for _i401 in range(_size397):
+                        _key402 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val403 = iprot.readI64()
+                        self.window_to_transferred[_key402] = _val403
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
             elif fid == 3:
                 if ftype == TType.MAP:
                     self.window_to_complete_latencies_ms = {}
-                    (_ktype378, _vtype379, _size377) = iprot.readMapBegin()
-                    for _i381 in range(_size377):
-                        _key382 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val383 = iprot.readDouble()
-                        self.window_to_complete_latencies_ms[_key382] = _val383
+                    (_ktype405, _vtype406, _size404) = iprot.readMapBegin()
+                    for _i408 in range(_size404):
+                        _key409 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val410 = iprot.readDouble()
+                        self.window_to_complete_latencies_ms[_key409] = _val410
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
             elif fid == 4:
                 if ftype == TType.MAP:
                     self.window_to_acked = {}
-                    (_ktype385, _vtype386, _size384) = iprot.readMapBegin()
-                    for _i388 in range(_size384):
-                        _key389 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val390 = iprot.readI64()
-                        self.window_to_acked[_key389] = _val390
+                    (_ktype412, _vtype413, _size411) = iprot.readMapBegin()
+                    for _i415 in range(_size411):
+                        _key416 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val417 = iprot.readI64()
+                        self.window_to_acked[_key416] = _val417
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
             elif fid == 5:
                 if ftype == TType.MAP:
                     self.window_to_failed = {}
-                    (_ktype392, _vtype393, _size391) = iprot.readMapBegin()
-                    for _i395 in range(_size391):
-                        _key396 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val397 = iprot.readI64()
-                        self.window_to_failed[_key396] = _val397
+                    (_ktype419, _vtype420, _size418) = iprot.readMapBegin()
+                    for _i422 in range(_size418):
+                        _key423 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val424 = iprot.readI64()
+                        self.window_to_failed[_key423] = _val424
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
@@ -4493,41 +4556,41 @@
         if self.window_to_emitted is not None:
             oprot.writeFieldBegin('window_to_emitted', TType.MAP, 1)
             oprot.writeMapBegin(TType.STRING, TType.I64, len(self.window_to_emitted))
-            for kiter398, viter399 in self.window_to_emitted.items():
-                oprot.writeString(kiter398.encode('utf-8') if sys.version_info[0] == 2 else kiter398)
-                oprot.writeI64(viter399)
+            for kiter425, viter426 in self.window_to_emitted.items():
+                oprot.writeString(kiter425.encode('utf-8') if sys.version_info[0] == 2 else kiter425)
+                oprot.writeI64(viter426)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.window_to_transferred is not None:
             oprot.writeFieldBegin('window_to_transferred', TType.MAP, 2)
             oprot.writeMapBegin(TType.STRING, TType.I64, len(self.window_to_transferred))
-            for kiter400, viter401 in self.window_to_transferred.items():
-                oprot.writeString(kiter400.encode('utf-8') if sys.version_info[0] == 2 else kiter400)
-                oprot.writeI64(viter401)
+            for kiter427, viter428 in self.window_to_transferred.items():
+                oprot.writeString(kiter427.encode('utf-8') if sys.version_info[0] == 2 else kiter427)
+                oprot.writeI64(viter428)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.window_to_complete_latencies_ms is not None:
             oprot.writeFieldBegin('window_to_complete_latencies_ms', TType.MAP, 3)
             oprot.writeMapBegin(TType.STRING, TType.DOUBLE, len(self.window_to_complete_latencies_ms))
-            for kiter402, viter403 in self.window_to_complete_latencies_ms.items():
-                oprot.writeString(kiter402.encode('utf-8') if sys.version_info[0] == 2 else kiter402)
-                oprot.writeDouble(viter403)
+            for kiter429, viter430 in self.window_to_complete_latencies_ms.items():
+                oprot.writeString(kiter429.encode('utf-8') if sys.version_info[0] == 2 else kiter429)
+                oprot.writeDouble(viter430)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.window_to_acked is not None:
             oprot.writeFieldBegin('window_to_acked', TType.MAP, 4)
             oprot.writeMapBegin(TType.STRING, TType.I64, len(self.window_to_acked))
-            for kiter404, viter405 in self.window_to_acked.items():
-                oprot.writeString(kiter404.encode('utf-8') if sys.version_info[0] == 2 else kiter404)
-                oprot.writeI64(viter405)
+            for kiter431, viter432 in self.window_to_acked.items():
+                oprot.writeString(kiter431.encode('utf-8') if sys.version_info[0] == 2 else kiter431)
+                oprot.writeI64(viter432)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.window_to_failed is not None:
             oprot.writeFieldBegin('window_to_failed', TType.MAP, 5)
             oprot.writeMapBegin(TType.STRING, TType.I64, len(self.window_to_failed))
-            for kiter406, viter407 in self.window_to_failed.items():
-                oprot.writeString(kiter406.encode('utf-8') if sys.version_info[0] == 2 else kiter406)
-                oprot.writeI64(viter407)
+            for kiter433, viter434 in self.window_to_failed.items():
+                oprot.writeString(kiter433.encode('utf-8') if sys.version_info[0] == 2 else kiter433)
+                oprot.writeI64(viter434)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         oprot.writeFieldStop()
@@ -4631,11 +4694,11 @@
             elif fid == 7:
                 if ftype == TType.MAP:
                     self.component_to_num_tasks = {}
-                    (_ktype409, _vtype410, _size408) = iprot.readMapBegin()
-                    for _i412 in range(_size408):
-                        _key413 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val414 = iprot.readI64()
-                        self.component_to_num_tasks[_key413] = _val414
+                    (_ktype436, _vtype437, _size435) = iprot.readMapBegin()
+                    for _i439 in range(_size435):
+                        _key440 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val441 = iprot.readI64()
+                        self.component_to_num_tasks[_key440] = _val441
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
@@ -4721,9 +4784,9 @@
         if self.component_to_num_tasks is not None:
             oprot.writeFieldBegin('component_to_num_tasks', TType.MAP, 7)
             oprot.writeMapBegin(TType.STRING, TType.I64, len(self.component_to_num_tasks))
-            for kiter415, viter416 in self.component_to_num_tasks.items():
-                oprot.writeString(kiter415.encode('utf-8') if sys.version_info[0] == 2 else kiter415)
-                oprot.writeI64(viter416)
+            for kiter442, viter443 in self.component_to_num_tasks.items():
+                oprot.writeString(kiter442.encode('utf-8') if sys.version_info[0] == 2 else kiter442)
+                oprot.writeI64(viter443)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.time_secs is not None:
@@ -4805,22 +4868,22 @@
             if fid == 1:
                 if ftype == TType.LIST:
                     self.supervisor_summaries = []
-                    (_etype420, _size417) = iprot.readListBegin()
-                    for _i421 in range(_size417):
-                        _elem422 = SupervisorSummary()
-                        _elem422.read(iprot)
-                        self.supervisor_summaries.append(_elem422)
+                    (_etype447, _size444) = iprot.readListBegin()
+                    for _i448 in range(_size444):
+                        _elem449 = SupervisorSummary()
+                        _elem449.read(iprot)
+                        self.supervisor_summaries.append(_elem449)
                     iprot.readListEnd()
                 else:
                     iprot.skip(ftype)
             elif fid == 2:
                 if ftype == TType.LIST:
                     self.worker_summaries = []
-                    (_etype426, _size423) = iprot.readListBegin()
-                    for _i427 in range(_size423):
-                        _elem428 = WorkerSummary()
-                        _elem428.read(iprot)
-                        self.worker_summaries.append(_elem428)
+                    (_etype453, _size450) = iprot.readListBegin()
+                    for _i454 in range(_size450):
+                        _elem455 = WorkerSummary()
+                        _elem455.read(iprot)
+                        self.worker_summaries.append(_elem455)
                     iprot.readListEnd()
                 else:
                     iprot.skip(ftype)
@@ -4837,15 +4900,15 @@
         if self.supervisor_summaries is not None:
             oprot.writeFieldBegin('supervisor_summaries', TType.LIST, 1)
             oprot.writeListBegin(TType.STRUCT, len(self.supervisor_summaries))
-            for iter429 in self.supervisor_summaries:
-                iter429.write(oprot)
+            for iter456 in self.supervisor_summaries:
+                iter456.write(oprot)
             oprot.writeListEnd()
             oprot.writeFieldEnd()
         if self.worker_summaries is not None:
             oprot.writeFieldBegin('worker_summaries', TType.LIST, 2)
             oprot.writeListBegin(TType.STRUCT, len(self.worker_summaries))
-            for iter430 in self.worker_summaries:
-                iter430.write(oprot)
+            for iter457 in self.worker_summaries:
+                iter457.write(oprot)
             oprot.writeListEnd()
             oprot.writeFieldEnd()
         oprot.writeFieldStop()
@@ -4901,11 +4964,13 @@
      - assigned_shared_on_heap_memory
      - assigned_regular_off_heap_memory
      - assigned_shared_off_heap_memory
+     - requested_generic_resources
+     - assigned_generic_resources
 
     """
 
 
-    def __init__(self, id=None, name=None, uptime_secs=None, status=None, num_tasks=None, num_workers=None, num_executors=None, topology_conf=None, id_to_spout_agg_stats=None, id_to_bolt_agg_stats=None, sched_status=None, topology_stats=None, owner=None, debug_options=None, replication_count=None, workers=None, storm_version=None, topology_version=None, requested_memonheap=None, requested_memoffheap=None, requested_cpu=None, assigned_memonheap=None, assigned_memoffheap=None, assigned_cpu=None, requested_regular_on_heap_memory=None, requested_shared_on_heap_memory=None, requested_regular_off_heap_memory=None, requested_shared_off_heap_memory=None, assigned_regular_on_heap_memory=None, assigned_shared_on_heap_memory=None, assigned_regular_off_heap_memory=None, assigned_shared_off_heap_memory=None,):
+    def __init__(self, id=None, name=None, uptime_secs=None, status=None, num_tasks=None, num_workers=None, num_executors=None, topology_conf=None, id_to_spout_agg_stats=None, id_to_bolt_agg_stats=None, sched_status=None, topology_stats=None, owner=None, debug_options=None, replication_count=None, workers=None, storm_version=None, topology_version=None, requested_memonheap=None, requested_memoffheap=None, requested_cpu=None, assigned_memonheap=None, assigned_memoffheap=None, assigned_cpu=None, requested_regular_on_heap_memory=None, requested_shared_on_heap_memory=None, requested_regular_off_heap_memory=None, requested_shared_off_heap_memory=None, assigned_regular_on_heap_memory=None, assigned_shared_on_heap_memory=None, assigned_regular_off_heap_memory=None, assigned_shared_off_heap_memory=None, requested_generic_resources=None, assigned_generic_resources=None,):
         self.id = id
         self.name = name
         self.uptime_secs = uptime_secs
@@ -4938,6 +5003,8 @@
         self.assigned_shared_on_heap_memory = assigned_shared_on_heap_memory
         self.assigned_regular_off_heap_memory = assigned_regular_off_heap_memory
         self.assigned_shared_off_heap_memory = assigned_shared_off_heap_memory
+        self.requested_generic_resources = requested_generic_resources
+        self.assigned_generic_resources = assigned_generic_resources
 
     def read(self, iprot):
         if iprot._fast_decode is not None and isinstance(iprot.trans, TTransport.CReadableTransport) and self.thrift_spec is not None:
@@ -4991,24 +5058,24 @@
             elif fid == 9:
                 if ftype == TType.MAP:
                     self.id_to_spout_agg_stats = {}
-                    (_ktype432, _vtype433, _size431) = iprot.readMapBegin()
-                    for _i435 in range(_size431):
-                        _key436 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val437 = ComponentAggregateStats()
-                        _val437.read(iprot)
-                        self.id_to_spout_agg_stats[_key436] = _val437
+                    (_ktype459, _vtype460, _size458) = iprot.readMapBegin()
+                    for _i462 in range(_size458):
+                        _key463 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val464 = ComponentAggregateStats()
+                        _val464.read(iprot)
+                        self.id_to_spout_agg_stats[_key463] = _val464
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
             elif fid == 10:
                 if ftype == TType.MAP:
                     self.id_to_bolt_agg_stats = {}
-                    (_ktype439, _vtype440, _size438) = iprot.readMapBegin()
-                    for _i442 in range(_size438):
-                        _key443 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val444 = ComponentAggregateStats()
-                        _val444.read(iprot)
-                        self.id_to_bolt_agg_stats[_key443] = _val444
+                    (_ktype466, _vtype467, _size465) = iprot.readMapBegin()
+                    for _i469 in range(_size465):
+                        _key470 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val471 = ComponentAggregateStats()
+                        _val471.read(iprot)
+                        self.id_to_bolt_agg_stats[_key470] = _val471
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
@@ -5042,11 +5109,11 @@
             elif fid == 16:
                 if ftype == TType.LIST:
                     self.workers = []
-                    (_etype448, _size445) = iprot.readListBegin()
-                    for _i449 in range(_size445):
-                        _elem450 = WorkerSummary()
-                        _elem450.read(iprot)
-                        self.workers.append(_elem450)
+                    (_etype475, _size472) = iprot.readListBegin()
+                    for _i476 in range(_size472):
+                        _elem477 = WorkerSummary()
+                        _elem477.read(iprot)
+                        self.workers.append(_elem477)
                     iprot.readListEnd()
                 else:
                     iprot.skip(ftype)
@@ -5130,6 +5197,28 @@
                     self.assigned_shared_off_heap_memory = iprot.readDouble()
                 else:
                     iprot.skip(ftype)
+            elif fid == 535:
+                if ftype == TType.MAP:
+                    self.requested_generic_resources = {}
+                    (_ktype479, _vtype480, _size478) = iprot.readMapBegin()
+                    for _i482 in range(_size478):
+                        _key483 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val484 = iprot.readDouble()
+                        self.requested_generic_resources[_key483] = _val484
+                    iprot.readMapEnd()
+                else:
+                    iprot.skip(ftype)
+            elif fid == 536:
+                if ftype == TType.MAP:
+                    self.assigned_generic_resources = {}
+                    (_ktype486, _vtype487, _size485) = iprot.readMapBegin()
+                    for _i489 in range(_size485):
+                        _key490 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val491 = iprot.readDouble()
+                        self.assigned_generic_resources[_key490] = _val491
+                    iprot.readMapEnd()
+                else:
+                    iprot.skip(ftype)
             else:
                 iprot.skip(ftype)
             iprot.readFieldEnd()
@@ -5175,17 +5264,17 @@
         if self.id_to_spout_agg_stats is not None:
             oprot.writeFieldBegin('id_to_spout_agg_stats', TType.MAP, 9)
             oprot.writeMapBegin(TType.STRING, TType.STRUCT, len(self.id_to_spout_agg_stats))
-            for kiter451, viter452 in self.id_to_spout_agg_stats.items():
-                oprot.writeString(kiter451.encode('utf-8') if sys.version_info[0] == 2 else kiter451)
-                viter452.write(oprot)
+            for kiter492, viter493 in self.id_to_spout_agg_stats.items():
+                oprot.writeString(kiter492.encode('utf-8') if sys.version_info[0] == 2 else kiter492)
+                viter493.write(oprot)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.id_to_bolt_agg_stats is not None:
             oprot.writeFieldBegin('id_to_bolt_agg_stats', TType.MAP, 10)
             oprot.writeMapBegin(TType.STRING, TType.STRUCT, len(self.id_to_bolt_agg_stats))
-            for kiter453, viter454 in self.id_to_bolt_agg_stats.items():
-                oprot.writeString(kiter453.encode('utf-8') if sys.version_info[0] == 2 else kiter453)
-                viter454.write(oprot)
+            for kiter494, viter495 in self.id_to_bolt_agg_stats.items():
+                oprot.writeString(kiter494.encode('utf-8') if sys.version_info[0] == 2 else kiter494)
+                viter495.write(oprot)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.sched_status is not None:
@@ -5211,8 +5300,8 @@
         if self.workers is not None:
             oprot.writeFieldBegin('workers', TType.LIST, 16)
             oprot.writeListBegin(TType.STRUCT, len(self.workers))
-            for iter455 in self.workers:
-                iter455.write(oprot)
+            for iter496 in self.workers:
+                iter496.write(oprot)
             oprot.writeListEnd()
             oprot.writeFieldEnd()
         if self.storm_version is not None:
@@ -5279,6 +5368,22 @@
             oprot.writeFieldBegin('assigned_shared_off_heap_memory', TType.DOUBLE, 534)
             oprot.writeDouble(self.assigned_shared_off_heap_memory)
             oprot.writeFieldEnd()
+        if self.requested_generic_resources is not None:
+            oprot.writeFieldBegin('requested_generic_resources', TType.MAP, 535)
+            oprot.writeMapBegin(TType.STRING, TType.DOUBLE, len(self.requested_generic_resources))
+            for kiter497, viter498 in self.requested_generic_resources.items():
+                oprot.writeString(kiter497.encode('utf-8') if sys.version_info[0] == 2 else kiter497)
+                oprot.writeDouble(viter498)
+            oprot.writeMapEnd()
+            oprot.writeFieldEnd()
+        if self.assigned_generic_resources is not None:
+            oprot.writeFieldBegin('assigned_generic_resources', TType.MAP, 536)
+            oprot.writeMapBegin(TType.STRING, TType.DOUBLE, len(self.assigned_generic_resources))
+            for kiter499, viter500 in self.assigned_generic_resources.items():
+                oprot.writeString(kiter499.encode('utf-8') if sys.version_info[0] == 2 else kiter499)
+                oprot.writeDouble(viter500)
+            oprot.writeMapEnd()
+            oprot.writeFieldEnd()
         oprot.writeFieldStop()
         oprot.writeStructEnd()
 
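The new `requested_generic_resources` and `assigned_generic_resources` fields above follow the same generated pattern as the other resource maps: a `map<string, double>` written as a size prefix followed by length-prefixed UTF-8 keys and 8-byte doubles. A minimal stand-alone sketch of that framing (simplified stand-in, not the exact `TBinaryProtocol` wire format — real Thrift also writes key/value type bytes and field headers):

```python
import struct

def write_string_double_map(m):
    # Size prefix (big-endian i32), then key/value pairs -- mirrors the
    # writeMapBegin / writeString / writeDouble sequence in the generated code.
    out = struct.pack('>i', len(m))
    for key, val in m.items():
        kbytes = key.encode('utf-8')
        out += struct.pack('>i', len(kbytes)) + kbytes  # length-prefixed string
        out += struct.pack('>d', val)                   # 8-byte double
    return out

def read_string_double_map(buf):
    # Inverse of the sketch above: recover the {str: float} map,
    # mirroring the readMapBegin / readString / readDouble loop.
    pos = 0
    (size,) = struct.unpack_from('>i', buf, pos); pos += 4
    result = {}
    for _ in range(size):
        (klen,) = struct.unpack_from('>i', buf, pos); pos += 4
        key = buf[pos:pos + klen].decode('utf-8'); pos += klen
        (val,) = struct.unpack_from('>d', buf, pos); pos += 8
        result[key] = val
    return result

# Hypothetical resource names for illustration only:
resources = {'gpu.count': 2.0, 'network.bandwidth': 25.0}
assert read_string_double_map(write_string_double_map(resources)) == resources
```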
@@ -5452,59 +5557,59 @@
             elif fid == 7:
                 if ftype == TType.MAP:
                     self.window_to_stats = {}
-                    (_ktype457, _vtype458, _size456) = iprot.readMapBegin()
-                    for _i460 in range(_size456):
-                        _key461 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val462 = ComponentAggregateStats()
-                        _val462.read(iprot)
-                        self.window_to_stats[_key461] = _val462
+                    (_ktype502, _vtype503, _size501) = iprot.readMapBegin()
+                    for _i505 in range(_size501):
+                        _key506 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val507 = ComponentAggregateStats()
+                        _val507.read(iprot)
+                        self.window_to_stats[_key506] = _val507
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
             elif fid == 8:
                 if ftype == TType.MAP:
                     self.gsid_to_input_stats = {}
-                    (_ktype464, _vtype465, _size463) = iprot.readMapBegin()
-                    for _i467 in range(_size463):
-                        _key468 = GlobalStreamId()
-                        _key468.read(iprot)
-                        _val469 = ComponentAggregateStats()
-                        _val469.read(iprot)
-                        self.gsid_to_input_stats[_key468] = _val469
+                    (_ktype509, _vtype510, _size508) = iprot.readMapBegin()
+                    for _i512 in range(_size508):
+                        _key513 = GlobalStreamId()
+                        _key513.read(iprot)
+                        _val514 = ComponentAggregateStats()
+                        _val514.read(iprot)
+                        self.gsid_to_input_stats[_key513] = _val514
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
             elif fid == 9:
                 if ftype == TType.MAP:
                     self.sid_to_output_stats = {}
-                    (_ktype471, _vtype472, _size470) = iprot.readMapBegin()
-                    for _i474 in range(_size470):
-                        _key475 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val476 = ComponentAggregateStats()
-                        _val476.read(iprot)
-                        self.sid_to_output_stats[_key475] = _val476
+                    (_ktype516, _vtype517, _size515) = iprot.readMapBegin()
+                    for _i519 in range(_size515):
+                        _key520 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val521 = ComponentAggregateStats()
+                        _val521.read(iprot)
+                        self.sid_to_output_stats[_key520] = _val521
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
             elif fid == 10:
                 if ftype == TType.LIST:
                     self.exec_stats = []
-                    (_etype480, _size477) = iprot.readListBegin()
-                    for _i481 in range(_size477):
-                        _elem482 = ExecutorAggregateStats()
-                        _elem482.read(iprot)
-                        self.exec_stats.append(_elem482)
+                    (_etype525, _size522) = iprot.readListBegin()
+                    for _i526 in range(_size522):
+                        _elem527 = ExecutorAggregateStats()
+                        _elem527.read(iprot)
+                        self.exec_stats.append(_elem527)
                     iprot.readListEnd()
                 else:
                     iprot.skip(ftype)
             elif fid == 11:
                 if ftype == TType.LIST:
                     self.errors = []
-                    (_etype486, _size483) = iprot.readListBegin()
-                    for _i487 in range(_size483):
-                        _elem488 = ErrorInfo()
-                        _elem488.read(iprot)
-                        self.errors.append(_elem488)
+                    (_etype531, _size528) = iprot.readListBegin()
+                    for _i532 in range(_size528):
+                        _elem533 = ErrorInfo()
+                        _elem533.read(iprot)
+                        self.errors.append(_elem533)
                     iprot.readListEnd()
                 else:
                     iprot.skip(ftype)
@@ -5532,11 +5637,11 @@
             elif fid == 16:
                 if ftype == TType.MAP:
                     self.resources_map = {}
-                    (_ktype490, _vtype491, _size489) = iprot.readMapBegin()
-                    for _i493 in range(_size489):
-                        _key494 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val495 = iprot.readDouble()
-                        self.resources_map[_key494] = _val495
+                    (_ktype535, _vtype536, _size534) = iprot.readMapBegin()
+                    for _i538 in range(_size534):
+                        _key539 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val540 = iprot.readDouble()
+                        self.resources_map[_key539] = _val540
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
@@ -5577,39 +5682,39 @@
         if self.window_to_stats is not None:
             oprot.writeFieldBegin('window_to_stats', TType.MAP, 7)
             oprot.writeMapBegin(TType.STRING, TType.STRUCT, len(self.window_to_stats))
-            for kiter496, viter497 in self.window_to_stats.items():
-                oprot.writeString(kiter496.encode('utf-8') if sys.version_info[0] == 2 else kiter496)
-                viter497.write(oprot)
+            for kiter541, viter542 in self.window_to_stats.items():
+                oprot.writeString(kiter541.encode('utf-8') if sys.version_info[0] == 2 else kiter541)
+                viter542.write(oprot)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.gsid_to_input_stats is not None:
             oprot.writeFieldBegin('gsid_to_input_stats', TType.MAP, 8)
             oprot.writeMapBegin(TType.STRUCT, TType.STRUCT, len(self.gsid_to_input_stats))
-            for kiter498, viter499 in self.gsid_to_input_stats.items():
-                kiter498.write(oprot)
-                viter499.write(oprot)
+            for kiter543, viter544 in self.gsid_to_input_stats.items():
+                kiter543.write(oprot)
+                viter544.write(oprot)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.sid_to_output_stats is not None:
             oprot.writeFieldBegin('sid_to_output_stats', TType.MAP, 9)
             oprot.writeMapBegin(TType.STRING, TType.STRUCT, len(self.sid_to_output_stats))
-            for kiter500, viter501 in self.sid_to_output_stats.items():
-                oprot.writeString(kiter500.encode('utf-8') if sys.version_info[0] == 2 else kiter500)
-                viter501.write(oprot)
+            for kiter545, viter546 in self.sid_to_output_stats.items():
+                oprot.writeString(kiter545.encode('utf-8') if sys.version_info[0] == 2 else kiter545)
+                viter546.write(oprot)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.exec_stats is not None:
             oprot.writeFieldBegin('exec_stats', TType.LIST, 10)
             oprot.writeListBegin(TType.STRUCT, len(self.exec_stats))
-            for iter502 in self.exec_stats:
-                iter502.write(oprot)
+            for iter547 in self.exec_stats:
+                iter547.write(oprot)
             oprot.writeListEnd()
             oprot.writeFieldEnd()
         if self.errors is not None:
             oprot.writeFieldBegin('errors', TType.LIST, 11)
             oprot.writeListBegin(TType.STRUCT, len(self.errors))
-            for iter503 in self.errors:
-                iter503.write(oprot)
+            for iter548 in self.errors:
+                iter548.write(oprot)
             oprot.writeListEnd()
             oprot.writeFieldEnd()
         if self.eventlog_host is not None:
@@ -5631,9 +5736,9 @@
         if self.resources_map is not None:
             oprot.writeFieldBegin('resources_map', TType.MAP, 16)
             oprot.writeMapBegin(TType.STRING, TType.DOUBLE, len(self.resources_map))
-            for kiter504, viter505 in self.resources_map.items():
-                oprot.writeString(kiter504.encode('utf-8') if sys.version_info[0] == 2 else kiter504)
-                oprot.writeDouble(viter505)
+            for kiter549, viter550 in self.resources_map.items():
+                oprot.writeString(kiter549.encode('utf-8') if sys.version_info[0] == 2 else kiter549)
+                oprot.writeDouble(viter550)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         oprot.writeFieldStop()
@@ -5758,28 +5863,28 @@
             elif fid == 3:
                 if ftype == TType.MAP:
                     self.num_executors = {}
-                    (_ktype507, _vtype508, _size506) = iprot.readMapBegin()
-                    for _i510 in range(_size506):
-                        _key511 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val512 = iprot.readI32()
-                        self.num_executors[_key511] = _val512
+                    (_ktype552, _vtype553, _size551) = iprot.readMapBegin()
+                    for _i555 in range(_size551):
+                        _key556 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val557 = iprot.readI32()
+                        self.num_executors[_key556] = _val557
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
             elif fid == 4:
                 if ftype == TType.MAP:
                     self.topology_resources_overrides = {}
-                    (_ktype514, _vtype515, _size513) = iprot.readMapBegin()
-                    for _i517 in range(_size513):
-                        _key518 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val519 = {}
-                        (_ktype521, _vtype522, _size520) = iprot.readMapBegin()
-                        for _i524 in range(_size520):
-                            _key525 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                            _val526 = iprot.readDouble()
-                            _val519[_key525] = _val526
+                    (_ktype559, _vtype560, _size558) = iprot.readMapBegin()
+                    for _i562 in range(_size558):
+                        _key563 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val564 = {}
+                        (_ktype566, _vtype567, _size565) = iprot.readMapBegin()
+                        for _i569 in range(_size565):
+                            _key570 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                            _val571 = iprot.readDouble()
+                            _val564[_key570] = _val571
                         iprot.readMapEnd()
-                        self.topology_resources_overrides[_key518] = _val519
+                        self.topology_resources_overrides[_key563] = _val564
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
@@ -5814,20 +5919,20 @@
         if self.num_executors is not None:
             oprot.writeFieldBegin('num_executors', TType.MAP, 3)
             oprot.writeMapBegin(TType.STRING, TType.I32, len(self.num_executors))
-            for kiter527, viter528 in self.num_executors.items():
-                oprot.writeString(kiter527.encode('utf-8') if sys.version_info[0] == 2 else kiter527)
-                oprot.writeI32(viter528)
+            for kiter572, viter573 in self.num_executors.items():
+                oprot.writeString(kiter572.encode('utf-8') if sys.version_info[0] == 2 else kiter572)
+                oprot.writeI32(viter573)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.topology_resources_overrides is not None:
             oprot.writeFieldBegin('topology_resources_overrides', TType.MAP, 4)
             oprot.writeMapBegin(TType.STRING, TType.MAP, len(self.topology_resources_overrides))
-            for kiter529, viter530 in self.topology_resources_overrides.items():
-                oprot.writeString(kiter529.encode('utf-8') if sys.version_info[0] == 2 else kiter529)
-                oprot.writeMapBegin(TType.STRING, TType.DOUBLE, len(viter530))
-                for kiter531, viter532 in viter530.items():
-                    oprot.writeString(kiter531.encode('utf-8') if sys.version_info[0] == 2 else kiter531)
-                    oprot.writeDouble(viter532)
+            for kiter574, viter575 in self.topology_resources_overrides.items():
+                oprot.writeString(kiter574.encode('utf-8') if sys.version_info[0] == 2 else kiter574)
+                oprot.writeMapBegin(TType.STRING, TType.DOUBLE, len(viter575))
+                for kiter576, viter577 in viter575.items():
+                    oprot.writeString(kiter576.encode('utf-8') if sys.version_info[0] == 2 else kiter576)
+                    oprot.writeDouble(viter577)
                 oprot.writeMapEnd()
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
@@ -5882,11 +5987,11 @@
             if fid == 1:
                 if ftype == TType.MAP:
                     self.creds = {}
-                    (_ktype534, _vtype535, _size533) = iprot.readMapBegin()
-                    for _i537 in range(_size533):
-                        _key538 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val539 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        self.creds[_key538] = _val539
+                    (_ktype579, _vtype580, _size578) = iprot.readMapBegin()
+                    for _i582 in range(_size578):
+                        _key583 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val584 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        self.creds[_key583] = _val584
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
@@ -5908,9 +6013,9 @@
         if self.creds is not None:
             oprot.writeFieldBegin('creds', TType.MAP, 1)
             oprot.writeMapBegin(TType.STRING, TType.STRING, len(self.creds))
-            for kiter540, viter541 in self.creds.items():
-                oprot.writeString(kiter540.encode('utf-8') if sys.version_info[0] == 2 else kiter540)
-                oprot.writeString(viter541.encode('utf-8') if sys.version_info[0] == 2 else viter541)
+            for kiter585, viter586 in self.creds.items():
+                oprot.writeString(kiter585.encode('utf-8') if sys.version_info[0] == 2 else kiter585)
+                oprot.writeString(viter586.encode('utf-8') if sys.version_info[0] == 2 else viter586)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.topoOwner is not None:
@@ -6116,11 +6221,11 @@
             if fid == 1:
                 if ftype == TType.LIST:
                     self.acl = []
-                    (_etype545, _size542) = iprot.readListBegin()
-                    for _i546 in range(_size542):
-                        _elem547 = AccessControl()
-                        _elem547.read(iprot)
-                        self.acl.append(_elem547)
+                    (_etype590, _size587) = iprot.readListBegin()
+                    for _i591 in range(_size587):
+                        _elem592 = AccessControl()
+                        _elem592.read(iprot)
+                        self.acl.append(_elem592)
                     iprot.readListEnd()
                 else:
                     iprot.skip(ftype)
@@ -6142,8 +6247,8 @@
         if self.acl is not None:
             oprot.writeFieldBegin('acl', TType.LIST, 1)
             oprot.writeListBegin(TType.STRUCT, len(self.acl))
-            for iter548 in self.acl:
-                iter548.write(oprot)
+            for iter593 in self.acl:
+                iter593.write(oprot)
             oprot.writeListEnd()
             oprot.writeFieldEnd()
         if self.replication_factor is not None:
@@ -6268,10 +6373,10 @@
             if fid == 1:
                 if ftype == TType.LIST:
                     self.keys = []
-                    (_etype552, _size549) = iprot.readListBegin()
-                    for _i553 in range(_size549):
-                        _elem554 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        self.keys.append(_elem554)
+                    (_etype597, _size594) = iprot.readListBegin()
+                    for _i598 in range(_size594):
+                        _elem599 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        self.keys.append(_elem599)
                     iprot.readListEnd()
                 else:
                     iprot.skip(ftype)
@@ -6293,8 +6398,8 @@
         if self.keys is not None:
             oprot.writeFieldBegin('keys', TType.LIST, 1)
             oprot.writeListBegin(TType.STRING, len(self.keys))
-            for iter555 in self.keys:
-                oprot.writeString(iter555.encode('utf-8') if sys.version_info[0] == 2 else iter555)
+            for iter600 in self.keys:
+                oprot.writeString(iter600.encode('utf-8') if sys.version_info[0] == 2 else iter600)
             oprot.writeListEnd()
             oprot.writeFieldEnd()
         if self.session is not None:
@@ -6462,31 +6567,31 @@
             elif fid == 4:
                 if ftype == TType.LIST:
                     self.used_ports = []
-                    (_etype559, _size556) = iprot.readListBegin()
-                    for _i560 in range(_size556):
-                        _elem561 = iprot.readI64()
-                        self.used_ports.append(_elem561)
+                    (_etype604, _size601) = iprot.readListBegin()
+                    for _i605 in range(_size601):
+                        _elem606 = iprot.readI64()
+                        self.used_ports.append(_elem606)
                     iprot.readListEnd()
                 else:
                     iprot.skip(ftype)
             elif fid == 5:
                 if ftype == TType.LIST:
                     self.meta = []
-                    (_etype565, _size562) = iprot.readListBegin()
-                    for _i566 in range(_size562):
-                        _elem567 = iprot.readI64()
-                        self.meta.append(_elem567)
+                    (_etype610, _size607) = iprot.readListBegin()
+                    for _i611 in range(_size607):
+                        _elem612 = iprot.readI64()
+                        self.meta.append(_elem612)
                     iprot.readListEnd()
                 else:
                     iprot.skip(ftype)
             elif fid == 6:
                 if ftype == TType.MAP:
                     self.scheduler_meta = {}
-                    (_ktype569, _vtype570, _size568) = iprot.readMapBegin()
-                    for _i572 in range(_size568):
-                        _key573 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val574 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        self.scheduler_meta[_key573] = _val574
+                    (_ktype614, _vtype615, _size613) = iprot.readMapBegin()
+                    for _i617 in range(_size613):
+                        _key618 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val619 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        self.scheduler_meta[_key618] = _val619
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
@@ -6503,11 +6608,11 @@
             elif fid == 9:
                 if ftype == TType.MAP:
                     self.resources_map = {}
-                    (_ktype576, _vtype577, _size575) = iprot.readMapBegin()
-                    for _i579 in range(_size575):
-                        _key580 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val581 = iprot.readDouble()
-                        self.resources_map[_key580] = _val581
+                    (_ktype621, _vtype622, _size620) = iprot.readMapBegin()
+                    for _i624 in range(_size620):
+                        _key625 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val626 = iprot.readDouble()
+                        self.resources_map[_key625] = _val626
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
@@ -6541,23 +6646,23 @@
         if self.used_ports is not None:
             oprot.writeFieldBegin('used_ports', TType.LIST, 4)
             oprot.writeListBegin(TType.I64, len(self.used_ports))
-            for iter582 in self.used_ports:
-                oprot.writeI64(iter582)
+            for iter627 in self.used_ports:
+                oprot.writeI64(iter627)
             oprot.writeListEnd()
             oprot.writeFieldEnd()
         if self.meta is not None:
             oprot.writeFieldBegin('meta', TType.LIST, 5)
             oprot.writeListBegin(TType.I64, len(self.meta))
-            for iter583 in self.meta:
-                oprot.writeI64(iter583)
+            for iter628 in self.meta:
+                oprot.writeI64(iter628)
             oprot.writeListEnd()
             oprot.writeFieldEnd()
         if self.scheduler_meta is not None:
             oprot.writeFieldBegin('scheduler_meta', TType.MAP, 6)
             oprot.writeMapBegin(TType.STRING, TType.STRING, len(self.scheduler_meta))
-            for kiter584, viter585 in self.scheduler_meta.items():
-                oprot.writeString(kiter584.encode('utf-8') if sys.version_info[0] == 2 else kiter584)
-                oprot.writeString(viter585.encode('utf-8') if sys.version_info[0] == 2 else viter585)
+            for kiter629, viter630 in self.scheduler_meta.items():
+                oprot.writeString(kiter629.encode('utf-8') if sys.version_info[0] == 2 else kiter629)
+                oprot.writeString(viter630.encode('utf-8') if sys.version_info[0] == 2 else viter630)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.uptime_secs is not None:
@@ -6571,9 +6676,9 @@
         if self.resources_map is not None:
             oprot.writeFieldBegin('resources_map', TType.MAP, 9)
             oprot.writeMapBegin(TType.STRING, TType.DOUBLE, len(self.resources_map))
-            for kiter586, viter587 in self.resources_map.items():
-                oprot.writeString(kiter586.encode('utf-8') if sys.version_info[0] == 2 else kiter586)
-                oprot.writeDouble(viter587)
+            for kiter631, viter632 in self.resources_map.items():
+                oprot.writeString(kiter631.encode('utf-8') if sys.version_info[0] == 2 else kiter631)
+                oprot.writeDouble(viter632)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.server_port is not None:
@@ -6632,10 +6737,10 @@
             elif fid == 2:
                 if ftype == TType.SET:
                     self.port = set()
-                    (_etype591, _size588) = iprot.readSetBegin()
-                    for _i592 in range(_size588):
-                        _elem593 = iprot.readI64()
-                        self.port.add(_elem593)
+                    (_etype636, _size633) = iprot.readSetBegin()
+                    for _i637 in range(_size633):
+                        _elem638 = iprot.readI64()
+                        self.port.add(_elem638)
                     iprot.readSetEnd()
                 else:
                     iprot.skip(ftype)
@@ -6656,8 +6761,8 @@
         if self.port is not None:
             oprot.writeFieldBegin('port', TType.SET, 2)
             oprot.writeSetBegin(TType.I64, len(self.port))
-            for iter594 in self.port:
-                oprot.writeI64(iter594)
+            for iter639 in self.port:
+                oprot.writeI64(iter639)
             oprot.writeSetEnd()
             oprot.writeFieldEnd()
         oprot.writeFieldStop()
@@ -6742,22 +6847,22 @@
             elif fid == 6:
                 if ftype == TType.MAP:
                     self.resources = {}
-                    (_ktype596, _vtype597, _size595) = iprot.readMapBegin()
-                    for _i599 in range(_size595):
-                        _key600 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val601 = iprot.readDouble()
-                        self.resources[_key600] = _val601
+                    (_ktype641, _vtype642, _size640) = iprot.readMapBegin()
+                    for _i644 in range(_size640):
+                        _key645 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val646 = iprot.readDouble()
+                        self.resources[_key645] = _val646
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
             elif fid == 7:
                 if ftype == TType.MAP:
                     self.shared_resources = {}
-                    (_ktype603, _vtype604, _size602) = iprot.readMapBegin()
-                    for _i606 in range(_size602):
-                        _key607 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val608 = iprot.readDouble()
-                        self.shared_resources[_key607] = _val608
+                    (_ktype648, _vtype649, _size647) = iprot.readMapBegin()
+                    for _i651 in range(_size647):
+                        _key652 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val653 = iprot.readDouble()
+                        self.shared_resources[_key652] = _val653
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
@@ -6794,17 +6899,17 @@
         if self.resources is not None:
             oprot.writeFieldBegin('resources', TType.MAP, 6)
             oprot.writeMapBegin(TType.STRING, TType.DOUBLE, len(self.resources))
-            for kiter609, viter610 in self.resources.items():
-                oprot.writeString(kiter609.encode('utf-8') if sys.version_info[0] == 2 else kiter609)
-                oprot.writeDouble(viter610)
+            for kiter654, viter655 in self.resources.items():
+                oprot.writeString(kiter654.encode('utf-8') if sys.version_info[0] == 2 else kiter654)
+                oprot.writeDouble(viter655)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.shared_resources is not None:
             oprot.writeFieldBegin('shared_resources', TType.MAP, 7)
             oprot.writeMapBegin(TType.STRING, TType.DOUBLE, len(self.shared_resources))
-            for kiter611, viter612 in self.shared_resources.items():
-                oprot.writeString(kiter611.encode('utf-8') if sys.version_info[0] == 2 else kiter611)
-                oprot.writeDouble(viter612)
+            for kiter656, viter657 in self.shared_resources.items():
+                oprot.writeString(kiter656.encode('utf-8') if sys.version_info[0] == 2 else kiter656)
+                oprot.writeDouble(viter657)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         oprot.writeFieldStop()
@@ -6885,68 +6990,68 @@
             elif fid == 2:
                 if ftype == TType.MAP:
                     self.node_host = {}
-                    (_ktype614, _vtype615, _size613) = iprot.readMapBegin()
-                    for _i617 in range(_size613):
-                        _key618 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val619 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        self.node_host[_key618] = _val619
+                    (_ktype659, _vtype660, _size658) = iprot.readMapBegin()
+                    for _i662 in range(_size658):
+                        _key663 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val664 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        self.node_host[_key663] = _val664
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
             elif fid == 3:
                 if ftype == TType.MAP:
                     self.executor_node_port = {}
-                    (_ktype621, _vtype622, _size620) = iprot.readMapBegin()
-                    for _i624 in range(_size620):
-                        _key625 = []
-                        (_etype630, _size627) = iprot.readListBegin()
-                        for _i631 in range(_size627):
-                            _elem632 = iprot.readI64()
-                            _key625.append(_elem632)
+                    (_ktype666, _vtype667, _size665) = iprot.readMapBegin()
+                    for _i669 in range(_size665):
+                        _key670 = []
+                        (_etype675, _size672) = iprot.readListBegin()
+                        for _i676 in range(_size672):
+                            _elem677 = iprot.readI64()
+                            _key670.append(_elem677)
                         iprot.readListEnd()
-                        _val626 = NodeInfo()
-                        _val626.read(iprot)
-                        self.executor_node_port[_key625] = _val626
+                        _val671 = NodeInfo()
+                        _val671.read(iprot)
+                        self.executor_node_port[_key670] = _val671
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
             elif fid == 4:
                 if ftype == TType.MAP:
                     self.executor_start_time_secs = {}
-                    (_ktype634, _vtype635, _size633) = iprot.readMapBegin()
-                    for _i637 in range(_size633):
-                        _key638 = []
-                        (_etype643, _size640) = iprot.readListBegin()
-                        for _i644 in range(_size640):
-                            _elem645 = iprot.readI64()
-                            _key638.append(_elem645)
+                    (_ktype679, _vtype680, _size678) = iprot.readMapBegin()
+                    for _i682 in range(_size678):
+                        _key683 = []
+                        (_etype688, _size685) = iprot.readListBegin()
+                        for _i689 in range(_size685):
+                            _elem690 = iprot.readI64()
+                            _key683.append(_elem690)
                         iprot.readListEnd()
-                        _val639 = iprot.readI64()
-                        self.executor_start_time_secs[_key638] = _val639
+                        _val684 = iprot.readI64()
+                        self.executor_start_time_secs[_key683] = _val684
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
             elif fid == 5:
                 if ftype == TType.MAP:
                     self.worker_resources = {}
-                    (_ktype647, _vtype648, _size646) = iprot.readMapBegin()
-                    for _i650 in range(_size646):
-                        _key651 = NodeInfo()
-                        _key651.read(iprot)
-                        _val652 = WorkerResources()
-                        _val652.read(iprot)
-                        self.worker_resources[_key651] = _val652
+                    (_ktype692, _vtype693, _size691) = iprot.readMapBegin()
+                    for _i695 in range(_size691):
+                        _key696 = NodeInfo()
+                        _key696.read(iprot)
+                        _val697 = WorkerResources()
+                        _val697.read(iprot)
+                        self.worker_resources[_key696] = _val697
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
             elif fid == 6:
                 if ftype == TType.MAP:
                     self.total_shared_off_heap = {}
-                    (_ktype654, _vtype655, _size653) = iprot.readMapBegin()
-                    for _i657 in range(_size653):
-                        _key658 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val659 = iprot.readDouble()
-                        self.total_shared_off_heap[_key658] = _val659
+                    (_ktype699, _vtype700, _size698) = iprot.readMapBegin()
+                    for _i702 in range(_size698):
+                        _key703 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val704 = iprot.readDouble()
+                        self.total_shared_off_heap[_key703] = _val704
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
@@ -6972,47 +7077,47 @@
         if self.node_host is not None:
             oprot.writeFieldBegin('node_host', TType.MAP, 2)
             oprot.writeMapBegin(TType.STRING, TType.STRING, len(self.node_host))
-            for kiter660, viter661 in self.node_host.items():
-                oprot.writeString(kiter660.encode('utf-8') if sys.version_info[0] == 2 else kiter660)
-                oprot.writeString(viter661.encode('utf-8') if sys.version_info[0] == 2 else viter661)
+            for kiter705, viter706 in self.node_host.items():
+                oprot.writeString(kiter705.encode('utf-8') if sys.version_info[0] == 2 else kiter705)
+                oprot.writeString(viter706.encode('utf-8') if sys.version_info[0] == 2 else viter706)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.executor_node_port is not None:
             oprot.writeFieldBegin('executor_node_port', TType.MAP, 3)
             oprot.writeMapBegin(TType.LIST, TType.STRUCT, len(self.executor_node_port))
-            for kiter662, viter663 in self.executor_node_port.items():
-                oprot.writeListBegin(TType.I64, len(kiter662))
-                for iter664 in kiter662:
-                    oprot.writeI64(iter664)
+            for kiter707, viter708 in self.executor_node_port.items():
+                oprot.writeListBegin(TType.I64, len(kiter707))
+                for iter709 in kiter707:
+                    oprot.writeI64(iter709)
                 oprot.writeListEnd()
-                viter663.write(oprot)
+                viter708.write(oprot)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.executor_start_time_secs is not None:
             oprot.writeFieldBegin('executor_start_time_secs', TType.MAP, 4)
             oprot.writeMapBegin(TType.LIST, TType.I64, len(self.executor_start_time_secs))
-            for kiter665, viter666 in self.executor_start_time_secs.items():
-                oprot.writeListBegin(TType.I64, len(kiter665))
-                for iter667 in kiter665:
-                    oprot.writeI64(iter667)
+            for kiter710, viter711 in self.executor_start_time_secs.items():
+                oprot.writeListBegin(TType.I64, len(kiter710))
+                for iter712 in kiter710:
+                    oprot.writeI64(iter712)
                 oprot.writeListEnd()
-                oprot.writeI64(viter666)
+                oprot.writeI64(viter711)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.worker_resources is not None:
             oprot.writeFieldBegin('worker_resources', TType.MAP, 5)
             oprot.writeMapBegin(TType.STRUCT, TType.STRUCT, len(self.worker_resources))
-            for kiter668, viter669 in self.worker_resources.items():
-                kiter668.write(oprot)
-                viter669.write(oprot)
+            for kiter713, viter714 in self.worker_resources.items():
+                kiter713.write(oprot)
+                viter714.write(oprot)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.total_shared_off_heap is not None:
             oprot.writeFieldBegin('total_shared_off_heap', TType.MAP, 6)
             oprot.writeMapBegin(TType.STRING, TType.DOUBLE, len(self.total_shared_off_heap))
-            for kiter670, viter671 in self.total_shared_off_heap.items():
-                oprot.writeString(kiter670.encode('utf-8') if sys.version_info[0] == 2 else kiter670)
-                oprot.writeDouble(viter671)
+            for kiter715, viter716 in self.total_shared_off_heap.items():
+                oprot.writeString(kiter715.encode('utf-8') if sys.version_info[0] == 2 else kiter715)
+                oprot.writeDouble(viter716)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.owner is not None:
@@ -7167,11 +7272,11 @@
             elif fid == 4:
                 if ftype == TType.MAP:
                     self.component_executors = {}
-                    (_ktype673, _vtype674, _size672) = iprot.readMapBegin()
-                    for _i676 in range(_size672):
-                        _key677 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val678 = iprot.readI32()
-                        self.component_executors[_key677] = _val678
+                    (_ktype718, _vtype719, _size717) = iprot.readMapBegin()
+                    for _i721 in range(_size717):
+                        _key722 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val723 = iprot.readI32()
+                        self.component_executors[_key722] = _val723
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
@@ -7199,12 +7304,12 @@
             elif fid == 9:
                 if ftype == TType.MAP:
                     self.component_debug = {}
-                    (_ktype680, _vtype681, _size679) = iprot.readMapBegin()
-                    for _i683 in range(_size679):
-                        _key684 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val685 = DebugOptions()
-                        _val685.read(iprot)
-                        self.component_debug[_key684] = _val685
+                    (_ktype725, _vtype726, _size724) = iprot.readMapBegin()
+                    for _i728 in range(_size724):
+                        _key729 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val730 = DebugOptions()
+                        _val730.read(iprot)
+                        self.component_debug[_key729] = _val730
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
@@ -7243,9 +7348,9 @@
         if self.component_executors is not None:
             oprot.writeFieldBegin('component_executors', TType.MAP, 4)
             oprot.writeMapBegin(TType.STRING, TType.I32, len(self.component_executors))
-            for kiter686, viter687 in self.component_executors.items():
-                oprot.writeString(kiter686.encode('utf-8') if sys.version_info[0] == 2 else kiter686)
-                oprot.writeI32(viter687)
+            for kiter731, viter732 in self.component_executors.items():
+                oprot.writeString(kiter731.encode('utf-8') if sys.version_info[0] == 2 else kiter731)
+                oprot.writeI32(viter732)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.launch_time_secs is not None:
@@ -7267,9 +7372,9 @@
         if self.component_debug is not None:
             oprot.writeFieldBegin('component_debug', TType.MAP, 9)
             oprot.writeMapBegin(TType.STRING, TType.STRUCT, len(self.component_debug))
-            for kiter688, viter689 in self.component_debug.items():
-                oprot.writeString(kiter688.encode('utf-8') if sys.version_info[0] == 2 else kiter688)
-                viter689.write(oprot)
+            for kiter733, viter734 in self.component_debug.items():
+                oprot.writeString(kiter733.encode('utf-8') if sys.version_info[0] == 2 else kiter733)
+                viter734.write(oprot)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.principal is not None:
@@ -7338,13 +7443,13 @@
             elif fid == 2:
                 if ftype == TType.MAP:
                     self.executor_stats = {}
-                    (_ktype691, _vtype692, _size690) = iprot.readMapBegin()
-                    for _i694 in range(_size690):
-                        _key695 = ExecutorInfo()
-                        _key695.read(iprot)
-                        _val696 = ExecutorStats()
-                        _val696.read(iprot)
-                        self.executor_stats[_key695] = _val696
+                    (_ktype736, _vtype737, _size735) = iprot.readMapBegin()
+                    for _i739 in range(_size735):
+                        _key740 = ExecutorInfo()
+                        _key740.read(iprot)
+                        _val741 = ExecutorStats()
+                        _val741.read(iprot)
+                        self.executor_stats[_key740] = _val741
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
@@ -7375,9 +7480,9 @@
         if self.executor_stats is not None:
             oprot.writeFieldBegin('executor_stats', TType.MAP, 2)
             oprot.writeMapBegin(TType.STRUCT, TType.STRUCT, len(self.executor_stats))
-            for kiter697, viter698 in self.executor_stats.items():
-                kiter697.write(oprot)
-                viter698.write(oprot)
+            for kiter742, viter743 in self.executor_stats.items():
+                kiter742.write(oprot)
+                viter743.write(oprot)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         if self.time_secs is not None:
@@ -7509,12 +7614,12 @@
             if fid == 1:
                 if ftype == TType.MAP:
                     self.serialized_parts = {}
-                    (_ktype700, _vtype701, _size699) = iprot.readMapBegin()
-                    for _i703 in range(_size699):
-                        _key704 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val705 = ThriftSerializedObject()
-                        _val705.read(iprot)
-                        self.serialized_parts[_key704] = _val705
+                    (_ktype745, _vtype746, _size744) = iprot.readMapBegin()
+                    for _i748 in range(_size744):
+                        _key749 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val750 = ThriftSerializedObject()
+                        _val750.read(iprot)
+                        self.serialized_parts[_key749] = _val750
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
@@ -7531,9 +7636,9 @@
         if self.serialized_parts is not None:
             oprot.writeFieldBegin('serialized_parts', TType.MAP, 1)
             oprot.writeMapBegin(TType.STRING, TType.STRUCT, len(self.serialized_parts))
-            for kiter706, viter707 in self.serialized_parts.items():
-                oprot.writeString(kiter706.encode('utf-8') if sys.version_info[0] == 2 else kiter706)
-                viter707.write(oprot)
+            for kiter751, viter752 in self.serialized_parts.items():
+                oprot.writeString(kiter751.encode('utf-8') if sys.version_info[0] == 2 else kiter751)
+                viter752.write(oprot)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         oprot.writeFieldStop()
@@ -7592,11 +7697,11 @@
             elif fid == 2:
                 if ftype == TType.LIST:
                     self.executors = []
-                    (_etype711, _size708) = iprot.readListBegin()
-                    for _i712 in range(_size708):
-                        _elem713 = ExecutorInfo()
-                        _elem713.read(iprot)
-                        self.executors.append(_elem713)
+                    (_etype756, _size753) = iprot.readListBegin()
+                    for _i757 in range(_size753):
+                        _elem758 = ExecutorInfo()
+                        _elem758.read(iprot)
+                        self.executors.append(_elem758)
                     iprot.readListEnd()
                 else:
                     iprot.skip(ftype)
@@ -7633,8 +7738,8 @@
         if self.executors is not None:
             oprot.writeFieldBegin('executors', TType.LIST, 2)
             oprot.writeListBegin(TType.STRUCT, len(self.executors))
-            for iter714 in self.executors:
-                iter714.write(oprot)
+            for iter759 in self.executors:
+                iter759.write(oprot)
             oprot.writeListEnd()
             oprot.writeFieldEnd()
         if self.resources is not None:
@@ -7753,11 +7858,11 @@
             if fid == 1:
                 if ftype == TType.MAP:
                     self.approved_workers = {}
-                    (_ktype716, _vtype717, _size715) = iprot.readMapBegin()
-                    for _i719 in range(_size715):
-                        _key720 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val721 = iprot.readI32()
-                        self.approved_workers[_key720] = _val721
+                    (_ktype761, _vtype762, _size760) = iprot.readMapBegin()
+                    for _i764 in range(_size760):
+                        _key765 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val766 = iprot.readI32()
+                        self.approved_workers[_key765] = _val766
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
@@ -7774,9 +7879,9 @@
         if self.approved_workers is not None:
             oprot.writeFieldBegin('approved_workers', TType.MAP, 1)
             oprot.writeMapBegin(TType.STRING, TType.I32, len(self.approved_workers))
-            for kiter722, viter723 in self.approved_workers.items():
-                oprot.writeString(kiter722.encode('utf-8') if sys.version_info[0] == 2 else kiter722)
-                oprot.writeI32(viter723)
+            for kiter767, viter768 in self.approved_workers.items():
+                oprot.writeString(kiter767.encode('utf-8') if sys.version_info[0] == 2 else kiter767)
+                oprot.writeI32(viter768)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         oprot.writeFieldStop()
@@ -7822,12 +7927,12 @@
             if fid == 1:
                 if ftype == TType.MAP:
                     self.assignments = {}
-                    (_ktype725, _vtype726, _size724) = iprot.readMapBegin()
-                    for _i728 in range(_size724):
-                        _key729 = iprot.readI32()
-                        _val730 = LocalAssignment()
-                        _val730.read(iprot)
-                        self.assignments[_key729] = _val730
+                    (_ktype770, _vtype771, _size769) = iprot.readMapBegin()
+                    for _i773 in range(_size769):
+                        _key774 = iprot.readI32()
+                        _val775 = LocalAssignment()
+                        _val775.read(iprot)
+                        self.assignments[_key774] = _val775
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
@@ -7844,9 +7949,9 @@
         if self.assignments is not None:
             oprot.writeFieldBegin('assignments', TType.MAP, 1)
             oprot.writeMapBegin(TType.I32, TType.STRUCT, len(self.assignments))
-            for kiter731, viter732 in self.assignments.items():
-                oprot.writeI32(kiter731)
-                viter732.write(oprot)
+            for kiter776, viter777 in self.assignments.items():
+                oprot.writeI32(kiter776)
+                viter777.write(oprot)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         oprot.writeFieldStop()
@@ -7908,11 +8013,11 @@
             elif fid == 3:
                 if ftype == TType.LIST:
                     self.executors = []
-                    (_etype736, _size733) = iprot.readListBegin()
-                    for _i737 in range(_size733):
-                        _elem738 = ExecutorInfo()
-                        _elem738.read(iprot)
-                        self.executors.append(_elem738)
+                    (_etype781, _size778) = iprot.readListBegin()
+                    for _i782 in range(_size778):
+                        _elem783 = ExecutorInfo()
+                        _elem783.read(iprot)
+                        self.executors.append(_elem783)
                     iprot.readListEnd()
                 else:
                     iprot.skip(ftype)
@@ -7942,8 +8047,8 @@
         if self.executors is not None:
             oprot.writeFieldBegin('executors', TType.LIST, 3)
             oprot.writeListBegin(TType.STRUCT, len(self.executors))
-            for iter739 in self.executors:
-                iter739.write(oprot)
+            for iter784 in self.executors:
+                iter784.write(oprot)
             oprot.writeListEnd()
             oprot.writeFieldEnd()
         if self.port is not None:
@@ -8015,20 +8120,20 @@
             elif fid == 3:
                 if ftype == TType.LIST:
                     self.users = []
-                    (_etype743, _size740) = iprot.readListBegin()
-                    for _i744 in range(_size740):
-                        _elem745 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        self.users.append(_elem745)
+                    (_etype788, _size785) = iprot.readListBegin()
+                    for _i789 in range(_size785):
+                        _elem790 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        self.users.append(_elem790)
                     iprot.readListEnd()
                 else:
                     iprot.skip(ftype)
             elif fid == 4:
                 if ftype == TType.LIST:
                     self.groups = []
-                    (_etype749, _size746) = iprot.readListBegin()
-                    for _i750 in range(_size746):
-                        _elem751 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        self.groups.append(_elem751)
+                    (_etype794, _size791) = iprot.readListBegin()
+                    for _i795 in range(_size791):
+                        _elem796 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        self.groups.append(_elem796)
                     iprot.readListEnd()
                 else:
                     iprot.skip(ftype)
@@ -8053,15 +8158,15 @@
         if self.users is not None:
             oprot.writeFieldBegin('users', TType.LIST, 3)
             oprot.writeListBegin(TType.STRING, len(self.users))
-            for iter752 in self.users:
-                oprot.writeString(iter752.encode('utf-8') if sys.version_info[0] == 2 else iter752)
+            for iter797 in self.users:
+                oprot.writeString(iter797.encode('utf-8') if sys.version_info[0] == 2 else iter797)
             oprot.writeListEnd()
             oprot.writeFieldEnd()
         if self.groups is not None:
             oprot.writeFieldBegin('groups', TType.LIST, 4)
             oprot.writeListBegin(TType.STRING, len(self.groups))
-            for iter753 in self.groups:
-                oprot.writeString(iter753.encode('utf-8') if sys.version_info[0] == 2 else iter753)
+            for iter798 in self.groups:
+                oprot.writeString(iter798.encode('utf-8') if sys.version_info[0] == 2 else iter798)
             oprot.writeListEnd()
             oprot.writeFieldEnd()
         oprot.writeFieldStop()
@@ -8113,11 +8218,11 @@
             if fid == 1:
                 if ftype == TType.LIST:
                     self.topo_history = []
-                    (_etype757, _size754) = iprot.readListBegin()
-                    for _i758 in range(_size754):
-                        _elem759 = LSTopoHistory()
-                        _elem759.read(iprot)
-                        self.topo_history.append(_elem759)
+                    (_etype802, _size799) = iprot.readListBegin()
+                    for _i803 in range(_size799):
+                        _elem804 = LSTopoHistory()
+                        _elem804.read(iprot)
+                        self.topo_history.append(_elem804)
                     iprot.readListEnd()
                 else:
                     iprot.skip(ftype)
@@ -8134,8 +8239,8 @@
         if self.topo_history is not None:
             oprot.writeFieldBegin('topo_history', TType.LIST, 1)
             oprot.writeListBegin(TType.STRUCT, len(self.topo_history))
-            for iter760 in self.topo_history:
-                iter760.write(oprot)
+            for iter805 in self.topo_history:
+                iter805.write(oprot)
             oprot.writeListEnd()
             oprot.writeFieldEnd()
         oprot.writeFieldStop()
@@ -8425,12 +8530,12 @@
             if fid == 2:
                 if ftype == TType.MAP:
                     self.named_logger_level = {}
-                    (_ktype762, _vtype763, _size761) = iprot.readMapBegin()
-                    for _i765 in range(_size761):
-                        _key766 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val767 = LogLevel()
-                        _val767.read(iprot)
-                        self.named_logger_level[_key766] = _val767
+                    (_ktype807, _vtype808, _size806) = iprot.readMapBegin()
+                    for _i810 in range(_size806):
+                        _key811 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val812 = LogLevel()
+                        _val812.read(iprot)
+                        self.named_logger_level[_key811] = _val812
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
@@ -8447,9 +8552,9 @@
         if self.named_logger_level is not None:
             oprot.writeFieldBegin('named_logger_level', TType.MAP, 2)
             oprot.writeMapBegin(TType.STRING, TType.STRUCT, len(self.named_logger_level))
-            for kiter768, viter769 in self.named_logger_level.items():
-                oprot.writeString(kiter768.encode('utf-8') if sys.version_info[0] == 2 else kiter768)
-                viter769.write(oprot)
+            for kiter813, viter814 in self.named_logger_level.items():
+                oprot.writeString(kiter813.encode('utf-8') if sys.version_info[0] == 2 else kiter813)
+                viter814.write(oprot)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         oprot.writeFieldStop()
@@ -8493,10 +8598,10 @@
             if fid == 1:
                 if ftype == TType.LIST:
                     self.topo_ids = []
-                    (_etype773, _size770) = iprot.readListBegin()
-                    for _i774 in range(_size770):
-                        _elem775 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        self.topo_ids.append(_elem775)
+                    (_etype818, _size815) = iprot.readListBegin()
+                    for _i819 in range(_size815):
+                        _elem820 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        self.topo_ids.append(_elem820)
                     iprot.readListEnd()
                 else:
                     iprot.skip(ftype)
@@ -8513,8 +8618,8 @@
         if self.topo_ids is not None:
             oprot.writeFieldBegin('topo_ids', TType.LIST, 1)
             oprot.writeListBegin(TType.STRING, len(self.topo_ids))
-            for iter776 in self.topo_ids:
-                oprot.writeString(iter776.encode('utf-8') if sys.version_info[0] == 2 else iter776)
+            for iter821 in self.topo_ids:
+                oprot.writeString(iter821.encode('utf-8') if sys.version_info[0] == 2 else iter821)
             oprot.writeListEnd()
             oprot.writeFieldEnd()
         oprot.writeFieldStop()
@@ -8813,11 +8918,11 @@
             elif fid == 2:
                 if ftype == TType.LIST:
                     self.executors = []
-                    (_etype780, _size777) = iprot.readListBegin()
-                    for _i781 in range(_size777):
-                        _elem782 = ExecutorInfo()
-                        _elem782.read(iprot)
-                        self.executors.append(_elem782)
+                    (_etype825, _size822) = iprot.readListBegin()
+                    for _i826 in range(_size822):
+                        _elem827 = ExecutorInfo()
+                        _elem827.read(iprot)
+                        self.executors.append(_elem827)
                     iprot.readListEnd()
                 else:
                     iprot.skip(ftype)
@@ -8843,8 +8948,8 @@
         if self.executors is not None:
             oprot.writeFieldBegin('executors', TType.LIST, 2)
             oprot.writeListBegin(TType.STRUCT, len(self.executors))
-            for iter783 in self.executors:
-                iter783.write(oprot)
+            for iter828 in self.executors:
+                iter828.write(oprot)
             oprot.writeListEnd()
             oprot.writeFieldEnd()
         if self.time_secs is not None:
@@ -8905,11 +9010,11 @@
             elif fid == 2:
                 if ftype == TType.LIST:
                     self.worker_heartbeats = []
-                    (_etype787, _size784) = iprot.readListBegin()
-                    for _i788 in range(_size784):
-                        _elem789 = SupervisorWorkerHeartbeat()
-                        _elem789.read(iprot)
-                        self.worker_heartbeats.append(_elem789)
+                    (_etype832, _size829) = iprot.readListBegin()
+                    for _i833 in range(_size829):
+                        _elem834 = SupervisorWorkerHeartbeat()
+                        _elem834.read(iprot)
+                        self.worker_heartbeats.append(_elem834)
                     iprot.readListEnd()
                 else:
                     iprot.skip(ftype)
@@ -8930,8 +9035,8 @@
         if self.worker_heartbeats is not None:
             oprot.writeFieldBegin('worker_heartbeats', TType.LIST, 2)
             oprot.writeListBegin(TType.STRUCT, len(self.worker_heartbeats))
-            for iter790 in self.worker_heartbeats:
-                iter790.write(oprot)
+            for iter835 in self.worker_heartbeats:
+                iter835.write(oprot)
             oprot.writeListEnd()
             oprot.writeFieldEnd()
         oprot.writeFieldStop()
@@ -8983,12 +9088,12 @@
             if fid == 1:
                 if ftype == TType.MAP:
                     self.storm_assignment = {}
-                    (_ktype792, _vtype793, _size791) = iprot.readMapBegin()
-                    for _i795 in range(_size791):
-                        _key796 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        _val797 = Assignment()
-                        _val797.read(iprot)
-                        self.storm_assignment[_key796] = _val797
+                    (_ktype837, _vtype838, _size836) = iprot.readMapBegin()
+                    for _i840 in range(_size836):
+                        _key841 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        _val842 = Assignment()
+                        _val842.read(iprot)
+                        self.storm_assignment[_key841] = _val842
                     iprot.readMapEnd()
                 else:
                     iprot.skip(ftype)
@@ -9005,9 +9110,9 @@
         if self.storm_assignment is not None:
             oprot.writeFieldBegin('storm_assignment', TType.MAP, 1)
             oprot.writeMapBegin(TType.STRING, TType.STRUCT, len(self.storm_assignment))
-            for kiter798, viter799 in self.storm_assignment.items():
-                oprot.writeString(kiter798.encode('utf-8') if sys.version_info[0] == 2 else kiter798)
-                viter799.write(oprot)
+            for kiter843, viter844 in self.storm_assignment.items():
+                oprot.writeString(kiter843.encode('utf-8') if sys.version_info[0] == 2 else kiter843)
+                viter844.write(oprot)
             oprot.writeMapEnd()
             oprot.writeFieldEnd()
         oprot.writeFieldStop()
@@ -9175,11 +9280,11 @@
             if fid == 1:
                 if ftype == TType.LIST:
                     self.metrics = []
-                    (_etype803, _size800) = iprot.readListBegin()
-                    for _i804 in range(_size800):
-                        _elem805 = WorkerMetricPoint()
-                        _elem805.read(iprot)
-                        self.metrics.append(_elem805)
+                    (_etype848, _size845) = iprot.readListBegin()
+                    for _i849 in range(_size845):
+                        _elem850 = WorkerMetricPoint()
+                        _elem850.read(iprot)
+                        self.metrics.append(_elem850)
                     iprot.readListEnd()
                 else:
                     iprot.skip(ftype)
@@ -9196,8 +9301,8 @@
         if self.metrics is not None:
             oprot.writeFieldBegin('metrics', TType.LIST, 1)
             oprot.writeListBegin(TType.STRUCT, len(self.metrics))
-            for iter806 in self.metrics:
-                iter806.write(oprot)
+            for iter851 in self.metrics:
+                iter851.write(oprot)
             oprot.writeListEnd()
             oprot.writeFieldEnd()
         oprot.writeFieldStop()
@@ -9555,11 +9660,11 @@
             if fid == 1:
                 if ftype == TType.LIST:
                     self.pulses = []
-                    (_etype810, _size807) = iprot.readListBegin()
-                    for _i811 in range(_size807):
-                        _elem812 = HBPulse()
-                        _elem812.read(iprot)
-                        self.pulses.append(_elem812)
+                    (_etype855, _size852) = iprot.readListBegin()
+                    for _i856 in range(_size852):
+                        _elem857 = HBPulse()
+                        _elem857.read(iprot)
+                        self.pulses.append(_elem857)
                     iprot.readListEnd()
                 else:
                     iprot.skip(ftype)
@@ -9576,8 +9681,8 @@
         if self.pulses is not None:
             oprot.writeFieldBegin('pulses', TType.LIST, 1)
             oprot.writeListBegin(TType.STRUCT, len(self.pulses))
-            for iter813 in self.pulses:
-                iter813.write(oprot)
+            for iter858 in self.pulses:
+                iter858.write(oprot)
             oprot.writeListEnd()
             oprot.writeFieldEnd()
         oprot.writeFieldStop()
@@ -9621,10 +9726,10 @@
             if fid == 1:
                 if ftype == TType.LIST:
                     self.pulseIds = []
-                    (_etype817, _size814) = iprot.readListBegin()
-                    for _i818 in range(_size814):
-                        _elem819 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
-                        self.pulseIds.append(_elem819)
+                    (_etype862, _size859) = iprot.readListBegin()
+                    for _i863 in range(_size859):
+                        _elem864 = iprot.readString().decode('utf-8') if sys.version_info[0] == 2 else iprot.readString()
+                        self.pulseIds.append(_elem864)
                     iprot.readListEnd()
                 else:
                     iprot.skip(ftype)
@@ -9641,8 +9746,8 @@
         if self.pulseIds is not None:
             oprot.writeFieldBegin('pulseIds', TType.LIST, 1)
             oprot.writeListBegin(TType.STRING, len(self.pulseIds))
-            for iter820 in self.pulseIds:
-                oprot.writeString(iter820.encode('utf-8') if sys.version_info[0] == 2 else iter820)
+            for iter865 in self.pulseIds:
+                oprot.writeString(iter865.encode('utf-8') if sys.version_info[0] == 2 else iter865)
             oprot.writeListEnd()
             oprot.writeFieldEnd()
         oprot.writeFieldStop()
@@ -10916,6 +11021,8 @@
     (524, TType.DOUBLE, 'assigned_memonheap', None, None, ),  # 524
     (525, TType.DOUBLE, 'assigned_memoffheap', None, None, ),  # 525
     (526, TType.DOUBLE, 'assigned_cpu', None, None, ),  # 526
+    (527, TType.MAP, 'requested_generic_resources', (TType.STRING, 'UTF8', TType.DOUBLE, None, False), None, ),  # 527
+    (528, TType.MAP, 'assigned_generic_resources', (TType.STRING, 'UTF8', TType.DOUBLE, None, False), None, ),  # 528
 )
 all_structs.append(SupervisorSummary)
 SupervisorSummary.thrift_spec = (
@@ -10932,6 +11039,7 @@
     (10, TType.DOUBLE, 'fragmented_mem', None, None, ),  # 10
     (11, TType.DOUBLE, 'fragmented_cpu', None, None, ),  # 11
     (12, TType.BOOL, 'blacklisted', None, None, ),  # 12
+    (13, TType.MAP, 'used_generic_resources', (TType.STRING, 'UTF8', TType.DOUBLE, None, False), None, ),  # 13
 )
 all_structs.append(NimbusSummary)
 NimbusSummary.thrift_spec = (
@@ -12662,6 +12770,8 @@
     (532, TType.DOUBLE, 'assigned_shared_on_heap_memory', None, None, ),  # 532
     (533, TType.DOUBLE, 'assigned_regular_off_heap_memory', None, None, ),  # 533
     (534, TType.DOUBLE, 'assigned_shared_off_heap_memory', None, None, ),  # 534
+    (535, TType.MAP, 'requested_generic_resources', (TType.STRING, 'UTF8', TType.DOUBLE, None, False), None, ),  # 535
+    (536, TType.MAP, 'assigned_generic_resources', (TType.STRING, 'UTF8', TType.DOUBLE, None, False), None, ),  # 536
 )
 all_structs.append(ExecutorAggregateStats)
 ExecutorAggregateStats.thrift_spec = (
diff --git a/storm-client/src/storm.thrift b/storm-client/src/storm.thrift
index 401a69b..d451614 100644
--- a/storm-client/src/storm.thrift
+++ b/storm-client/src/storm.thrift
@@ -180,6 +180,8 @@
 524: optional double assigned_memonheap;
 525: optional double assigned_memoffheap;
 526: optional double assigned_cpu;
+527: optional map<string, double> requested_generic_resources;
+528: optional map<string, double> assigned_generic_resources;
 }
 
 struct SupervisorSummary {
@@ -195,6 +197,7 @@
   10: optional double fragmented_mem;
   11: optional double fragmented_cpu;
   12: optional bool blacklisted;
+  13: optional map<string, double> used_generic_resources;
 }
 
 struct NimbusSummary {
@@ -389,6 +392,8 @@
 532: optional double assigned_shared_on_heap_memory;
 533: optional double assigned_regular_off_heap_memory;
 534: optional double assigned_shared_off_heap_memory;
+535: optional map<string, double> requested_generic_resources;
+536: optional map<string, double> assigned_generic_resources;
 }
 
 struct ExecutorAggregateStats {
diff --git a/storm-client/test/jvm/org/apache/storm/daemon/worker/BackPressureTrackerTest.java b/storm-client/test/jvm/org/apache/storm/daemon/worker/BackPressureTrackerTest.java
index 7e891b5..f642c54 100644
--- a/storm-client/test/jvm/org/apache/storm/daemon/worker/BackPressureTrackerTest.java
+++ b/storm-client/test/jvm/org/apache/storm/daemon/worker/BackPressureTrackerTest.java
@@ -24,6 +24,8 @@
 import static org.mockito.Mockito.when;
 
 import java.util.Collections;
+
+import org.apache.storm.daemon.worker.BackPressureTracker.BackpressureState;
 import org.apache.storm.messaging.netty.BackPressureStatus;
 import org.apache.storm.shade.org.apache.curator.shaded.com.google.common.collect.ImmutableMap;
 import org.apache.storm.utils.JCQueue;
@@ -38,7 +40,7 @@
         int taskIdNoBackPressure = 1;
         JCQueue noBackPressureQueue = mock(JCQueue.class);
         BackPressureTracker tracker = new BackPressureTracker(WORKER_ID,
-            Collections.singletonMap(taskIdNoBackPressure, noBackPressureQueue));
+                Collections.singletonMap(taskIdNoBackPressure, noBackPressureQueue));
 
         BackPressureStatus status = tracker.getCurrStatus();
 
@@ -57,7 +59,8 @@
             taskIdNoBackPressure, noBackPressureQueue,
             taskIdBackPressure, backPressureQueue));
 
-        boolean backpressureChanged = tracker.recordBackPressure(taskIdBackPressure);
+        BackpressureState state = tracker.getBackpressureState(taskIdBackPressure);
+        boolean backpressureChanged = tracker.recordBackPressure(state);
         BackPressureStatus status = tracker.getCurrStatus();
 
         assertThat(backpressureChanged, is(true));
@@ -72,9 +75,10 @@
         JCQueue queue = mock(JCQueue.class);
         BackPressureTracker tracker = new BackPressureTracker(WORKER_ID, ImmutableMap.of(
             taskId, queue));
-        tracker.recordBackPressure(taskId);
+        BackpressureState state = tracker.getBackpressureState(taskId);
+        tracker.recordBackPressure(state);
 
-        boolean backpressureChanged = tracker.recordBackPressure(taskId);
+        boolean backpressureChanged = tracker.recordBackPressure(state);
         BackPressureStatus status = tracker.getCurrStatus();
 
         assertThat(backpressureChanged, is(false));
@@ -89,7 +93,8 @@
         when(queue.isEmptyOverflow()).thenReturn(true);
         BackPressureTracker tracker = new BackPressureTracker(WORKER_ID, ImmutableMap.of(
             taskId, queue));
-        tracker.recordBackPressure(taskId);
+        BackpressureState state = tracker.getBackpressureState(taskId);
+        tracker.recordBackPressure(state);
 
         boolean backpressureChanged = tracker.refreshBpTaskList();
         BackPressureStatus status = tracker.getCurrStatus();
@@ -106,7 +111,8 @@
         when(queue.isEmptyOverflow()).thenReturn(false);
         BackPressureTracker tracker = new BackPressureTracker(WORKER_ID, ImmutableMap.of(
             taskId, queue));
-        tracker.recordBackPressure(taskId);
+        BackpressureState state = tracker.getBackpressureState(taskId);
+        tracker.recordBackPressure(state);
 
         boolean backpressureChanged = tracker.refreshBpTaskList();
         BackPressureStatus status = tracker.getCurrStatus();
@@ -116,4 +122,21 @@
         assertThat(status.bpTasks, contains(taskId));
     }
 
+    @Test
+    public void testSetLastOverflowCount() {
+        int taskId = 1;
+        int overflow = 5;
+        JCQueue queue = mock(JCQueue.class);
+        BackPressureTracker tracker = new BackPressureTracker(WORKER_ID, ImmutableMap.of(
+            taskId, queue));
+        BackpressureState state = tracker.getBackpressureState(taskId);
+        tracker.recordBackPressure(state);
+        tracker.setLastOverflowCount(state, overflow);
+
+        BackpressureState retrievedState = tracker.getBackpressureState(taskId);
+        int lastOverflowCount = tracker.getLastOverflowCount(retrievedState);
+
+        assertThat(lastOverflowCount, is(overflow));
+    }
+
 }
diff --git a/storm-client/test/jvm/org/apache/storm/daemon/worker/LogConfigManagerTest.java b/storm-client/test/jvm/org/apache/storm/daemon/worker/LogConfigManagerTest.java
index bf8ded8..673458f 100644
--- a/storm-client/test/jvm/org/apache/storm/daemon/worker/LogConfigManagerTest.java
+++ b/storm-client/test/jvm/org/apache/storm/daemon/worker/LogConfigManagerTest.java
@@ -14,6 +14,7 @@
 
 import java.util.TreeMap;
 import java.util.concurrent.atomic.AtomicReference;
+
 import org.apache.logging.log4j.LogManager;
 import org.apache.logging.log4j.core.LoggerContext;
 import org.apache.storm.generated.LogConfig;
@@ -27,6 +28,7 @@
 
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertNotNull;
+import static org.junit.jupiter.api.Assertions.assertThrows;
 import static org.mockito.Mockito.anyObject;
 import static org.mockito.Mockito.anyString;
 import static org.mockito.Mockito.eq;
@@ -203,6 +205,32 @@
     }
 
     @Test
+    public void testProcessLogConfigChangeThrowsNullPointerExceptionWhenTargetLogLevelIsNotSet() {
+        LogConfigManager logConfigManager = new LogConfigManager();
+
+        LogConfig logConfig = new LogConfig();
+        LogLevel logLevel = new LogLevel();
+        logLevel.set_action(LogLevelAction.UPDATE);
+        logLevel.set_reset_log_level("INFO");
+        logConfig.put_to_named_logger_level("RESET_LOG", logLevel);
+
+        assertThrows(NullPointerException.class, () -> logConfigManager.processLogConfigChange(logConfig));
+    }
+
+    @Test
+    public void testProcessLogConfigChangeExecutesSuccessfullyWhenTargetLogLevelIsSet() {
+        LogConfigManager logConfigManager = new LogConfigManager();
+
+        LogConfig logConfig = new LogConfig();
+        LogLevel logLevel = new LogLevel();
+        logLevel.set_action(LogLevelAction.UPDATE);
+        logLevel.set_target_log_level("DEBUG");
+        logConfig.put_to_named_logger_level("TARGET_LOG", logLevel);
+
+        logConfigManager.processLogConfigChange(logConfig);
+    }
+
+    @Test
     public void testProcessRootLogLevelToDebugSetsLoggerAndTimeout() {
         try (SimulatedTime t = new SimulatedTime()) {
             LogConfig mockConfig = new LogConfig();
diff --git a/storm-client/test/jvm/org/apache/storm/security/auth/ClientAuthUtilsTest.java b/storm-client/test/jvm/org/apache/storm/security/auth/ClientAuthUtilsTest.java
index 9f09ee4..1aa854b 100644
--- a/storm-client/test/jvm/org/apache/storm/security/auth/ClientAuthUtilsTest.java
+++ b/storm-client/test/jvm/org/apache/storm/security/auth/ClientAuthUtilsTest.java
@@ -95,8 +95,9 @@
 
     @Test
     public void objGettersReturnNullWithNullConfigTest() throws IOException {
-        Assert.assertNull(ClientAuthUtils.pullConfig(null, "foo"));
-        Assert.assertNull(ClientAuthUtils.get(null, "foo", "bar"));
+        Map<String, Object> topoConf = new HashMap<>();
+        Assert.assertNull(ClientAuthUtils.pullConfig(topoConf, "foo"));
+        Assert.assertNull(ClientAuthUtils.get(topoConf, "foo", "bar"));
 
         Assert.assertNull(ClientAuthUtils.getConfiguration(Collections.emptyMap()));
     }
@@ -141,39 +142,6 @@
         Mockito.verify(autoCred, Mockito.times(1)).populateSubject(subject, cred);
     }
 
-    @Test
-    public void makeDigestPayloadTest() throws NoSuchAlgorithmException {
-        String section = "user-pass-section";
-        Map<String, String> optionMap = new HashMap<String, String>();
-        String user = "user";
-        String pass = "pass";
-        optionMap.put("username", user);
-        optionMap.put("password", pass);
-        AppConfigurationEntry entry = Mockito.mock(AppConfigurationEntry.class);
-
-        Mockito.<Map<String, ?>>when(entry.getOptions()).thenReturn(optionMap);
-        Configuration mockConfig = Mockito.mock(Configuration.class);
-        Mockito.when(mockConfig.getAppConfigurationEntry(section))
-               .thenReturn(new AppConfigurationEntry[]{ entry });
-
-        MessageDigest digest = MessageDigest.getInstance("SHA-512");
-        byte[] output = digest.digest((user + ":" + pass).getBytes());
-        String sha = Hex.encodeHexString(output);
-
-        // previous code used this method to generate the string, ensure the two match
-        StringBuilder builder = new StringBuilder();
-        for (byte b : output) {
-            builder.append(String.format("%02x", b));
-        }
-        String stringFormatMethod = builder.toString();
-
-        Assert.assertEquals(
-            ClientAuthUtils.makeDigestPayload(mockConfig, "user-pass-section"),
-            sha);
-
-        Assert.assertEquals(sha, stringFormatMethod);
-    }
-
     @Test(expected = RuntimeException.class)
     public void invalidConfigResultsInIOException() throws RuntimeException {
         HashMap<String, Object> conf = new HashMap<>();
diff --git a/storm-client/test/jvm/org/apache/storm/topology/TopologyBuilderTest.java b/storm-client/test/jvm/org/apache/storm/topology/TopologyBuilderTest.java
index 3e8a86d..47fe12f 100644
--- a/storm-client/test/jvm/org/apache/storm/topology/TopologyBuilderTest.java
+++ b/storm-client/test/jvm/org/apache/storm/topology/TopologyBuilderTest.java
@@ -51,12 +51,6 @@
         builder.addWorkerHook(null);
     }
 
-    // TODO enable if setStateSpout gets implemented
-    //    @Test(expected = IllegalArgumentException.class)
-    //    public void testSetStateSpout() {
-    //        builder.setStateSpout("stateSpout", mock(IRichStateSpout.class), 0);
-    //    }
-
     @Test
     public void testStatefulTopology() {
         builder.setSpout("spout1", makeDummySpout());
diff --git a/storm-client/test/jvm/org/apache/storm/topology/WindowedBoltExecutorTest.java b/storm-client/test/jvm/org/apache/storm/topology/WindowedBoltExecutorTest.java
index 5162cc6..0e38838 100644
--- a/storm-client/test/jvm/org/apache/storm/topology/WindowedBoltExecutorTest.java
+++ b/storm-client/test/jvm/org/apache/storm/topology/WindowedBoltExecutorTest.java
@@ -39,6 +39,7 @@
 import static org.junit.Assert.assertArrayEquals;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertThat;
+import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
 /**
@@ -214,6 +215,14 @@
         Mockito.verify(outputCollector).emit("$late", Arrays.asList(tuple), new Values(tuple));
     }
 
+    @Test
+    public void testEmptyConfigOnWrappedBolt() {
+        IWindowedBolt wrappedBolt = Mockito.mock(IWindowedBolt.class);
+        Mockito.when(wrappedBolt.getComponentConfiguration()).thenReturn(null);
+        executor = new WindowedBoltExecutor(wrappedBolt);
+        assertTrue("Configuration is not empty", executor.getComponentConfiguration().isEmpty());
+    }
+
     private static class TestWindowedBolt extends BaseWindowedBolt {
         List<TupleWindow> tupleWindows = new ArrayList<>();
 
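The new testEmptyConfigOnWrappedBolt pins down a defaulting behavior: when the wrapped bolt's component configuration is null, the executor should report an empty map instead of propagating null. A minimal sketch of that pattern (hypothetical class names, not the WindowedBoltExecutor implementation):

```python
class NullConfigBolt:
    """Stand-in for a wrapped bolt whose component configuration is None."""
    def get_component_configuration(self):
        return None

class Executor:
    """Wraps a delegate and defaults a missing config to an empty dict,
    mirroring what the new Mockito-based test asserts."""
    def __init__(self, delegate):
        self.delegate = delegate

    def get_component_configuration(self):
        return self.delegate.get_component_configuration() or {}

assert Executor(NullConfigBolt()).get_component_configuration() == {}
```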
diff --git a/storm-client/test/py/test_storm_cli.py b/storm-client/test/py/test_storm_cli.py
index 1069a6b..5f46475 100644
--- a/storm-client/test/py/test_storm_cli.py
+++ b/storm-client/test/py/test_storm_cli.py
@@ -42,10 +42,11 @@
         )
 
     def base_test(self, command_invocation, mock_shell_interface, expected_output):
+        print(command_invocation)
         with mock.patch.object(sys, "argv", command_invocation):
             self.cli_main()
         if expected_output not in mock_shell_interface.call_args_list:
-            print("Expected:"  + str(expected_output))
+            print("Expected:" + str(expected_output))
             print("Got:" + str(mock_shell_interface.call_args_list[-1]))
         assert expected_output in mock_shell_interface.call_args_list
 
@@ -58,19 +59,41 @@
             './external/storm-redis/storm-redis-1.1.0.jar,./external/storm-kafka-client/storm-kafka-client-1.1.0.jar"', '--artifacts', '"redis.clients:jedis:2.9.0,org.apache.kafka:kafka-clients:1.0.0^org.slf4j:slf4j-api"', '--artifactRepositories', '"jboss-repository^http://repository.jboss.com/maven2,HDPRepo^http://repo.hortonworks.com/content/groups/public/'
         ], self.mock_execvp, mock.call(
             self.java_cmd, [
-                self.java_cmd, '-client','-Ddaemon.name=', '-Dstorm.options=+topology.blobstore.map%3D%27%7B%22key1%22%3A%7B%22localname%22%3A%22blob_file%22%2C+%22uncompress%22%3Afalse%7D%2C%22key2%22%3A%7B%7D%7D%27',
-                '-Dstorm.home=' + self.storm_dir + '', '-Dstorm.log.dir=',
+                self.java_cmd, '-client', '-Ddaemon.name=', '-Dstorm.options=+topology.blobstore.map%3D%27%7B%22key1%22%3A%7B%22localname%22%3A%22blob_file%22%2C+%22uncompress%22%3Afalse%7D%2C%22key2%22%3A%7B%7D%7D%27',
+                '-Dstorm.home=' + self.storm_dir, '-Dstorm.log.dir=' + self.storm_dir + "/logs",
                 '-Djava.library.path=', '-Dstorm.conf.file=', '-cp',
-                '' + self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir + '/extlib:example/storm-starter/storm-starter-topologies-*.jar:' + self.storm_dir + '/conf:' + self.storm_dir + '/bin:./external/storm-redis/storm-redis-1.1.0.jar:./external/storm-kafka-client/storm-kafka-client-1.1.0.jar"', '-Dstorm.jar=example/storm-starter/storm-starter-topologies-*.jar', '-Dstorm.dependency.jars=./external/storm-redis/storm-redis-1.1.0.jar,./external/storm-kafka-client/storm-kafka-client-1.1.0.jar"', '-Dstorm.dependency.artifacts={}',
+                self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir
+                + '/extlib:example/storm-starter/storm-starter-topologies-*.jar:' + self.storm_dir + '/conf:'
+                + self.storm_dir + '/bin:./external/storm-redis/storm-redis-1.1.0.jar:./external/storm-kafka-client/storm-kafka-client-1.1.0.jar"', '-Dstorm.jar=example/storm-starter/storm-starter-topologies-*.jar', '-Dstorm.dependency.jars=./external/storm-redis/storm-redis-1.1.0.jar,./external/storm-kafka-client/storm-kafka-client-1.1.0.jar"', '-Dstorm.dependency.artifacts={}',
                 'org.apache.storm.starter.RollingTopWords', 'blobstore-remote2', 'remote'
             ])
         )
 
+        self.mock_execvp.reset_mock()
+
+        self.base_test([
+            'storm', 'jar', '/path/to/jar.jar', 'some.Topology.Class',
+            '-name', 'run-topology', 'randomArgument', '-randomFlag', 'randomFlagValue', '-rotateSize', '0.0001',
+            '--hdfsConf', 'someOtherHdfsConf', 'dfs.namenode.kerberos.principal.pattern=hdfs/*.EV..COM'
+        ], self.mock_execvp, mock.call(
+            self.java_cmd, [
+                self.java_cmd, '-client', '-Ddaemon.name=', '-Dstorm.options=',
+                '-Dstorm.home=' + self.storm_dir, '-Dstorm.log.dir=' + self.storm_dir + "/logs",
+                '-Djava.library.path=', '-Dstorm.conf.file=', '-cp',
+                self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir
+                + '/extlib:/path/to/jar.jar:' + self.storm_dir + '/conf:' + self.storm_dir + '/bin:',
+                '-Dstorm.jar=/path/to/jar.jar', '-Dstorm.dependency.jars=', '-Dstorm.dependency.artifacts={}',
+                'some.Topology.Class', '-name', 'run-topology', 'randomArgument', '-randomFlag', 'randomFlagValue',
+                '-rotateSize', '0.0001', '--hdfsConf', 'someOtherHdfsConf',
+                'dfs.namenode.kerberos.principal.pattern=hdfs/*.EV..COM'
+            ])
+        )
+
     def test_localconfvalue_command(self):
         self.base_test(
             ["storm", "localconfvalue", "conf_name"], self.mock_popen, mock.call([
              self.java_cmd, '-client', '-Dstorm.options=',
-             '-Dstorm.conf.file=', '-cp', '' + self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir +'/extlib:' + self.storm_dir + '/extlib-daemon:' + self.storm_dir + '/conf',
+             '-Dstorm.conf.file=', '-cp', self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir + '/extlib:' + self.storm_dir + '/extlib-daemon:' + self.storm_dir + '/conf',
              'org.apache.storm.command.ConfigValue', 'conf_name'
              ], stdout=-1
             )
@@ -80,7 +103,7 @@
         self.base_test(
             ["storm", "remoteconfvalue", "conf_name"], self.mock_popen, mock.call([
              self.java_cmd, '-client', '-Dstorm.options=',
-             '-Dstorm.conf.file=', '-cp', '' + self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir + '/extlib:' + self.storm_dir + '/extlib-daemon:' + self.storm_dir + '/conf',
+             '-Dstorm.conf.file=', '-cp', self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir + '/extlib:' + self.storm_dir + '/extlib-daemon:' + self.storm_dir + '/conf',
              'org.apache.storm.command.ConfigValue', 'conf_name'
              ], stdout=-1
             )
@@ -99,9 +122,9 @@
             ], self.mock_execvp, mock.call(
             self.java_cmd, [
                 self.java_cmd, '-client','-Ddaemon.name=', '-Dstorm.options=',
-                '-Dstorm.home=' + self.storm_dir + '', '-Dstorm.log.dir=',
+                '-Dstorm.home=' + self.storm_dir, '-Dstorm.log.dir=' + self.storm_dir + "/logs",
                 '-Djava.library.path=', '-Dstorm.conf.file=', '-cp',
-                '' + self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir +
+                self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir +
                 '/extlib:example/storm-starter/storm-starter-topologies-*.jar:' + self.storm_dir +
                 '/conf:' + self.storm_dir +
                 '/bin:./external/storm-redis/storm-redis-1.1.0.jar:./external/storm-kafka-client/storm-kafka-client-1.1.0.jar"',
@@ -120,9 +143,9 @@
             ], self.mock_execvp, mock.call(
                 self.java_cmd,
                 [self.java_cmd, '-client', '-Ddaemon.name=', '-Dstorm.options=',
-                 '-Dstorm.home=' + self.storm_dir + '', '-Dstorm.log.dir=',
+                 '-Dstorm.home=' + self.storm_dir, '-Dstorm.log.dir=' + self.storm_dir + "/logs",
                  '-Djava.library.path=', '-Dstorm.conf.file=', '-cp',
-                 '' + self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir + '/extlib:' +
+                 self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir + '/extlib:' +
                  self.storm_dir +
                  '/conf:' + self.storm_dir + '/bin:' + self.storm_dir + '/lib-tools/sql/core',\
                  '-Dstorm.dependency.jars=', '-Dstorm.dependency.artifacts={}',
@@ -137,26 +160,27 @@
         ], self.mock_execvp, mock.call(
             self.java_cmd, [
                 self.java_cmd, '-client', '-Ddaemon.name=', '-Dstorm.options=',
-                '-Dstorm.home=' + self.storm_dir + '', '-Dstorm.log.dir=',
+                '-Dstorm.home=' + self.storm_dir, '-Dstorm.log.dir=' + self.storm_dir + "/logs",
                 '-Djava.library.path=', '-Dstorm.conf.file=', '-cp',
-                '' + self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir + '/extlib:' + self.storm_dir +
+                self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir + '/extlib:' + self.storm_dir +
                 '/extlib-daemon:' + self.storm_dir + '/conf:' + self.storm_dir + '/bin', 'org.apache.storm.command.KillTopology', 'doomed_topology'
             ])
         )
 
     def test_upload_credentials_command(self):
         self.base_test([
-            'storm', 'upload-credentials', 'my-topology-name', 'appids role.name1,role.name2"'
+            'storm', 'upload-credentials', '--config', '/some/other/storm.yaml', '-c', 'test=test', 'my-topology-name', 'appids', 'role.name1,role.name2'
         ], self.mock_execvp, mock.call(
             self.java_cmd, [
-                self.java_cmd,  '-client', '-Ddaemon.name=', '-Dstorm.options=',
-                '-Dstorm.home=' + self.storm_dir + '', '-Dstorm.log.dir=', '-Djava.library.path=',
-                '-Dstorm.conf.file=', '-cp', '' + self.storm_dir + '/*:' + self.storm_dir + '/lib:' +
+                self.java_cmd,  '-client', '-Ddaemon.name=', '-Dstorm.options=test%3Dtest',
+                '-Dstorm.home=' + self.storm_dir, '-Dstorm.log.dir=' + self.storm_dir + "/logs",
+                '-Djava.library.path=', '-Dstorm.conf.file=/some/other/storm.yaml',
+                '-cp', self.storm_dir + '/*:' + self.storm_dir + '/lib:' +
                                              self.storm_dir +
                                              '/extlib:' + self.storm_dir + '/extlib-daemon:' +
                                              self.storm_dir + '/conf:' + self.storm_dir +
                                              '/bin', 'org.apache.storm.command.UploadCredentials',
-                'my-topology-name', 'appids role.name1,role.name2"'])
+                'my-topology-name', 'appids', 'role.name1,role.name2'])
         )
 
     def test_blobstore_command(self):
@@ -165,9 +189,9 @@
         ], self.mock_execvp, mock.call(
             self.java_cmd, [
                 self.java_cmd, '-client', '-Ddaemon.name=', '-Dstorm.options=',
-                '-Dstorm.home=' + self.storm_dir + '', '-Dstorm.log.dir=',
+                '-Dstorm.home=' + self.storm_dir, '-Dstorm.log.dir=' + self.storm_dir + "/logs",
                 '-Djava.library.path=', '-Dstorm.conf.file=', '-cp',
-                '' + self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir +
+                self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir +
                 '/extlib:' + self.storm_dir + '/extlib-daemon:' + self.storm_dir + '/conf:' +
                 self.storm_dir + '/bin', 'org.apache.storm.command.Blobstore', 'create',
                 'mytopo:data.tgz', '-f', 'data.tgz', '-a', 'u:alice:rwa,u:bob:rw,o::r'])
@@ -175,14 +199,28 @@
         self.mock_execvp.reset_mock()
 
         self.base_test([
+            'storm', 'blobstore', 'list'
+        ], self.mock_execvp, mock.call(
+            self.java_cmd, [
+                self.java_cmd, '-client', '-Ddaemon.name=', '-Dstorm.options=',
+                '-Dstorm.home=' + self.storm_dir,
+                '-Dstorm.log.dir=' + self.storm_dir + "/logs", '-Djava.library.path=',
+                '-Dstorm.conf.file=', '-cp',
+                self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir +
+                '/extlib:' + self.storm_dir + '/extlib-daemon:' + self.storm_dir + '/conf:' +
+                self.storm_dir + '/bin', 'org.apache.storm.command.Blobstore', 'list'])
+        )
+        self.mock_execvp.reset_mock()
+
+        self.base_test([
             'storm', 'blobstore', 'list', 'wordstotrack'
         ], self.mock_execvp, mock.call(
             self.java_cmd, [
                 self.java_cmd, '-client', '-Ddaemon.name=', '-Dstorm.options=',
-                '-Dstorm.home=' + self.storm_dir + '',
-                '-Dstorm.log.dir=', '-Djava.library.path=',
+                '-Dstorm.home=' + self.storm_dir,
+                '-Dstorm.log.dir=' + self.storm_dir + "/logs", '-Djava.library.path=',
                 '-Dstorm.conf.file=', '-cp',
-                '' + self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir +
+                self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir +
                 '/extlib:' + self.storm_dir + '/extlib-daemon:' + self.storm_dir + '/conf:' +
                 self.storm_dir + '/bin', 'org.apache.storm.command.Blobstore', 'list', 'wordstotrack'])
         )
@@ -193,9 +231,9 @@
         ], self.mock_execvp, mock.call(
             self.java_cmd, [
                 self.java_cmd, '-client', '-Ddaemon.name=', '-Dstorm.options=',
-                '-Dstorm.home=' + self.storm_dir + '', '-Dstorm.log.dir=',
+                '-Dstorm.home=' + self.storm_dir, '-Dstorm.log.dir=' + self.storm_dir + "/logs",
                 '-Djava.library.path=', '-Dstorm.conf.file=', '-cp',
-                '' + self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir +
+                self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir +
                 '/extlib:' + self.storm_dir + '/extlib-daemon:' + self.storm_dir + '/conf:' +
                 self.storm_dir + '/bin', 'org.apache.storm.command.Blobstore', 'update', '-f',
                 '/wordsToTrack.list', 'wordstotrack'])
@@ -207,9 +245,9 @@
         ], self.mock_execvp, mock.call(
             self.java_cmd, [
                 self.java_cmd, '-client', '-Ddaemon.name=', '-Dstorm.options=',
-                '-Dstorm.home=' + self.storm_dir + '', '-Dstorm.log.dir=',
+                '-Dstorm.home=' + self.storm_dir, '-Dstorm.log.dir=' + self.storm_dir + "/logs",
                 '-Djava.library.path=', '-Dstorm.conf.file=', '-cp',
-                '' + self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir +
+                self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir +
                 '/extlib:' + self.storm_dir + '/extlib-daemon:' + self.storm_dir + '/conf:' +
                 self.storm_dir + '/bin', 'org.apache.storm.command.Blobstore', 'cat', 'wordstotrack'])
         )
@@ -220,9 +258,9 @@
         ], self.mock_execvp, mock.call(
             self.java_cmd, [
                 self.java_cmd, '-client', '-Ddaemon.name=', '-Dstorm.options=',
-                '-Dstorm.home=' + self.storm_dir + '', '-Dstorm.log.dir=',
+                '-Dstorm.home=' + self.storm_dir, '-Dstorm.log.dir=' + self.storm_dir + "/logs",
                 '-Djava.library.path=', '-Dstorm.conf.file=', '-cp',
-                '' + self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir + '/extlib:' + self.storm_dir +
+                self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir + '/extlib:' + self.storm_dir +
                 '/extlib-daemon:' + self.storm_dir + '/conf:' + self.storm_dir + '/bin',
                 'org.apache.storm.command.Activate', 'doomed_topology'
             ])
@@ -234,9 +272,9 @@
         ], self.mock_execvp, mock.call(
             self.java_cmd, [
                 self.java_cmd, '-client', '-Ddaemon.name=', '-Dstorm.options=',
-                '-Dstorm.home=' + self.storm_dir + '', '-Dstorm.log.dir=',
+                '-Dstorm.home=' + self.storm_dir, '-Dstorm.log.dir=' + self.storm_dir + "/logs",
                 '-Djava.library.path=', '-Dstorm.conf.file=', '-cp',
-                '' + self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir + '/extlib:' + self.storm_dir +
+                self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir + '/extlib:' + self.storm_dir +
                 '/extlib-daemon:' + self.storm_dir + '/conf:' + self.storm_dir +
                 '/bin', 'org.apache.storm.command.Deactivate', 'doomed_topology'
             ])
@@ -248,9 +286,9 @@
         ], self.mock_execvp, mock.call(
             self.java_cmd, [
                 self.java_cmd, '-client', '-Ddaemon.name=', '-Dstorm.options=',
-                '-Dstorm.home=' + self.storm_dir + '', '-Dstorm.log.dir=',
+                '-Dstorm.home=' + self.storm_dir, '-Dstorm.log.dir=' + self.storm_dir + "/logs",
                 '-Djava.library.path=', '-Dstorm.conf.file=', '-cp',
-                '' + self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir + '/extlib:' + self.storm_dir +
+                self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir + '/extlib:' + self.storm_dir +
                 '/extlib-daemon:' + self.storm_dir + '/conf:' + self.storm_dir +
                 '/bin', 'org.apache.storm.command.Rebalance', 'doomed_topology'
             ])
@@ -262,9 +300,9 @@
         ], self.mock_execvp, mock.call(
             self.java_cmd, [
                 self.java_cmd, '-client', '-Ddaemon.name=', '-Dstorm.options=',
-                '-Dstorm.home=' + self.storm_dir + '', '-Dstorm.log.dir=',
+                '-Dstorm.home=' + self.storm_dir, '-Dstorm.log.dir=' + self.storm_dir + "/logs",
                 '-Djava.library.path=', '-Dstorm.conf.file=', '-cp',
-                '' + self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir + '/extlib:' + self.storm_dir +
+                self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir + '/extlib:' + self.storm_dir +
                 '/extlib-daemon:' + self.storm_dir + '/conf:' + self.storm_dir +
                 '/bin', 'org.apache.storm.command.ListTopologies'
             ])
@@ -276,9 +314,9 @@
         ], self.mock_execvp, mock.call(
             self.java_cmd, [
                 self.java_cmd, '-server', '-Ddaemon.name=nimbus', '-Dstorm.options=',
-                '-Dstorm.home=' + self.storm_dir + '', '-Dstorm.log.dir=',
+                '-Dstorm.home=' + self.storm_dir, '-Dstorm.log.dir=' + self.storm_dir + "/logs",
                 '-Djava.library.path=', '-Dstorm.conf.file=', '-cp',
-                '' + self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir +
+                self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir +
                 '/extlib:' + self.storm_dir + '/extlib-daemon:' + self.storm_dir + '/conf',
                 '-Djava.deserialization.disabled=true', '-Dlogfile.name=nimbus.log',
                 '-Dlog4j.configurationFile=' + self.storm_dir + '/log4j2/cluster.xml',
@@ -292,9 +330,9 @@
         ], self.mock_execvp, mock.call(
             self.java_cmd, [
                 self.java_cmd, '-server', '-Ddaemon.name=supervisor', '-Dstorm.options=',
-                '-Dstorm.home=' + self.storm_dir + '', '-Dstorm.log.dir=',
+                '-Dstorm.home=' + self.storm_dir, '-Dstorm.log.dir=' + self.storm_dir + "/logs",
                 '-Djava.library.path=', '-Dstorm.conf.file=', '-cp',
-                '' + self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir +
+                self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir +
                 '/extlib:' + self.storm_dir + '/extlib-daemon:' + self.storm_dir + '/conf',
                 '-Djava.deserialization.disabled=true', '-Dlogfile.name=supervisor.log',
                 '-Dlog4j.configurationFile=' + self.storm_dir + '/log4j2/cluster.xml',
@@ -308,9 +346,9 @@
         ], self.mock_execvp, mock.call(
             self.java_cmd, [
                 self.java_cmd, '-server', '-Ddaemon.name=pacemaker', '-Dstorm.options=',
-                '-Dstorm.home=' + self.storm_dir + '', '-Dstorm.log.dir=',
+                '-Dstorm.home=' + self.storm_dir, '-Dstorm.log.dir=' + self.storm_dir + "/logs",
                 '-Djava.library.path=', '-Dstorm.conf.file=', '-cp',
-                '' + self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir +
+                self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir +
                 '/extlib:' + self.storm_dir + '/extlib-daemon:' + self.storm_dir + '/conf',
                 '-Djava.deserialization.disabled=true', '-Dlogfile.name=pacemaker.log',
                 '-Dlog4j.configurationFile=' + self.storm_dir + '/log4j2/cluster.xml',
@@ -324,9 +362,9 @@
         ], self.mock_execvp, mock.call(
             self.java_cmd, [
                 self.java_cmd, '-server', '-Ddaemon.name=ui', '-Dstorm.options=',
-                '-Dstorm.home=' + self.storm_dir + '', '-Dstorm.log.dir=',
+                '-Dstorm.home=' + self.storm_dir, '-Dstorm.log.dir=' + self.storm_dir + "/logs",
                 '-Djava.library.path=', '-Dstorm.conf.file=', '-cp',
-                '' + self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir +
+                self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir +
                 '/extlib:' + self.storm_dir + '/extlib-daemon:' + self.storm_dir +
                 '/lib-webapp:' + self.storm_dir + '/conf',
                 '-Djava.deserialization.disabled=true', '-Dlogfile.name=ui.log',
@@ -341,9 +379,9 @@
         ], self.mock_execvp, mock.call(
             self.java_cmd, [
                 self.java_cmd, '-server', '-Ddaemon.name=logviewer', '-Dstorm.options=',
-                '-Dstorm.home=' + self.storm_dir + '', '-Dstorm.log.dir=',
+                '-Dstorm.home=' + self.storm_dir, '-Dstorm.log.dir=' + self.storm_dir + "/logs",
                 '-Djava.library.path=', '-Dstorm.conf.file=', '-cp',
-                '' + self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir +
+                self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir +
                 '/extlib:' + self.storm_dir + '/extlib-daemon:' + self.storm_dir +
                 '/lib-webapp:' + self.storm_dir + '/conf',
                 '-Djava.deserialization.disabled=true', '-Dlogfile.name=logviewer.log',
@@ -358,9 +396,9 @@
         ], self.mock_execvp, mock.call(
             self.java_cmd, [
                 self.java_cmd, '-server', '-Ddaemon.name=drpc', '-Dstorm.options=',
-                '-Dstorm.home=' + self.storm_dir + '', '-Dstorm.log.dir=',
+                '-Dstorm.home=' + self.storm_dir, '-Dstorm.log.dir=' + self.storm_dir + "/logs",
                 '-Djava.library.path=', '-Dstorm.conf.file=', '-cp',
-                '' + self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir +
+                self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir +
                 '/extlib:' + self.storm_dir + '/extlib-daemon:' + self.storm_dir +
                 '/lib-webapp:' + self.storm_dir + '/conf',
                 '-Djava.deserialization.disabled=true', '-Dlogfile.name=drpc.log',
@@ -369,14 +407,40 @@
             ])
         )
 
+    def test_drpc_client_command(self):
+        self.base_test([
+            'storm', 'drpc-client', 'exclaim', 'a', 'exclaim', 'b', 'test', 'bar'
+        ], self.mock_execvp, mock.call(
+            self.java_cmd, [
+                self.java_cmd, '-client', '-Ddaemon.name=', '-Dstorm.options=',
+                '-Dstorm.home=' + self.storm_dir, '-Dstorm.log.dir=' + self.storm_dir + "/logs",
+                '-Djava.library.path=', '-Dstorm.conf.file=', '-cp',
+                self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir + '/extlib:' + self.storm_dir +
+                '/extlib-daemon:' + self.storm_dir + '/conf:' + self.storm_dir +
+                '/bin', 'org.apache.storm.command.BasicDrpcClient', 'exclaim', 'a', 'exclaim', 'b', 'test', 'bar'
+            ])
+        )
+        self.base_test([
+            'storm', 'drpc-client', '-f', 'exclaim', 'a', 'b'
+        ], self.mock_execvp, mock.call(
+            self.java_cmd, [
+                self.java_cmd, '-client', '-Ddaemon.name=', '-Dstorm.options=',
+                '-Dstorm.home=' + self.storm_dir, '-Dstorm.log.dir=' + self.storm_dir + "/logs",
+                '-Djava.library.path=', '-Dstorm.conf.file=', '-cp',
+                self.storm_dir + '/*:' + self.storm_dir + '/lib:' + self.storm_dir + '/extlib:' + self.storm_dir +
+                '/extlib-daemon:' + self.storm_dir + '/conf:' + self.storm_dir +
+                '/bin', 'org.apache.storm.command.BasicDrpcClient', '-f', 'exclaim', 'a', 'b'
+            ])
+        )
+
     def test_healthcheck_command(self):
         self.base_test([
             'storm', 'node-health-check'
         ], self.mock_execvp, mock.call(
             self.java_cmd, [
                 self.java_cmd, '-client', '-Ddaemon.name=', '-Dstorm.options=',
-                '-Dstorm.home=' + self.storm_dir + '', '-Dstorm.log.dir=', '-Djava.library.path=',
-                '-Dstorm.conf.file=', '-cp', '' + self.storm_dir + '/*:' + self.storm_dir + '/lib:' +
+                '-Dstorm.home=' + self.storm_dir, '-Dstorm.log.dir=' + self.storm_dir + "/logs", '-Djava.library.path=',
+                '-Dstorm.conf.file=', '-cp', self.storm_dir + '/*:' + self.storm_dir + '/lib:' +
                 self.storm_dir + '/extlib:' + self.storm_dir + '/extlib-daemon:' + self.storm_dir + '/conf:' +
                 self.storm_dir + '/bin', 'org.apache.storm.command.HealthCheck'
             ])
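Nearly every expected command line in these CLI tests changes the same way: the redundant `+ ''` concatenation after `-Dstorm.home=` goes away, and `-Dstorm.log.dir=` now defaults to `<storm.home>/logs` instead of being empty. A small sketch of that defaulting (hypothetical helper name; the real logic lives in Storm's `bin/storm.py`):

```python
def daemon_jvm_opts(storm_dir, log_dir=None):
    """Build the home/log-dir JVM options the tests now expect."""
    # When no explicit log dir is configured, fall back to <storm.home>/logs.
    if log_dir is None:
        log_dir = storm_dir + "/logs"
    return ["-Dstorm.home=" + storm_dir, "-Dstorm.log.dir=" + log_dir]

assert daemon_jvm_opts("/opt/storm") == [
    "-Dstorm.home=/opt/storm", "-Dstorm.log.dir=/opt/storm/logs"]
```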
diff --git a/storm-core/src/jvm/org/apache/storm/command/SetLogLevel.java b/storm-core/src/jvm/org/apache/storm/command/SetLogLevel.java
index 411c586..c6e2fa7 100644
--- a/storm-core/src/jvm/org/apache/storm/command/SetLogLevel.java
+++ b/storm-core/src/jvm/org/apache/storm/command/SetLogLevel.java
@@ -92,7 +92,7 @@
                 splits = splits[1].split(":");
                 Integer timeout = 0;
                 Level level = Level.valueOf(splits[0]);
-                logLevel.set_reset_log_level(level.toString());
+                logLevel.set_target_log_level(level.toString());
                 if (splits.length > 1) {
                     timeout = Integer.parseInt(splits[1]);
                 }
diff --git a/storm-core/test/jvm/org/apache/storm/command/SetLogLevelTest.java b/storm-core/test/jvm/org/apache/storm/command/SetLogLevelTest.java
index 5951752..af6aefc 100644
--- a/storm-core/test/jvm/org/apache/storm/command/SetLogLevelTest.java
+++ b/storm-core/test/jvm/org/apache/storm/command/SetLogLevelTest.java
@@ -25,11 +25,11 @@
         SetLogLevel.LogLevelsParser logLevelsParser = new SetLogLevel.LogLevelsParser(LogLevelAction.UPDATE);
         LogLevel logLevel = ((Map<String, LogLevel>) logLevelsParser.parse("com.foo.one=warn")).get("com.foo.one");
         Assert.assertEquals(0, logLevel.get_reset_log_level_timeout_secs());
-        Assert.assertEquals("WARN", logLevel.get_reset_log_level());
+        Assert.assertEquals("WARN", logLevel.get_target_log_level());
 
         logLevel = ((Map<String, LogLevel>) logLevelsParser.parse("com.foo.two=DEBUG:10")).get("com.foo.two");
         Assert.assertEquals(10, logLevel.get_reset_log_level_timeout_secs());
-        Assert.assertEquals("DEBUG", logLevel.get_reset_log_level());
+        Assert.assertEquals("DEBUG", logLevel.get_target_log_level());
     }
 
     @Test(expected = NumberFormatException.class)
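The SetLogLevel fix swaps set_reset_log_level for set_target_log_level when parsing `logger=LEVEL[:timeout]` specs, and the test now reads back get_target_log_level accordingly. The parsing the test exercises can be sketched as (a simplified stand-in, not the Java LogLevelsParser):

```python
def parse_log_level(spec):
    """Parse 'logger=LEVEL[:timeout_secs]' into (logger, LEVEL, timeout)."""
    name, _, rest = spec.partition("=")
    parts = rest.split(":")
    level = parts[0].upper()                      # the *target* level to apply
    timeout = int(parts[1]) if len(parts) > 1 else 0  # 0 means no reset timeout
    return name, level, timeout

assert parse_log_level("com.foo.one=warn") == ("com.foo.one", "WARN", 0)
assert parse_log_level("com.foo.two=DEBUG:10") == ("com.foo.two", "DEBUG", 10)
```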
diff --git a/storm-server/src/main/java/org/apache/storm/DaemonConfig.java b/storm-server/src/main/java/org/apache/storm/DaemonConfig.java
index def49ae..28076f4 100644
--- a/storm-server/src/main/java/org/apache/storm/DaemonConfig.java
+++ b/storm-server/src/main/java/org/apache/storm/DaemonConfig.java
@@ -186,12 +186,12 @@
 
     /**
      * How long without heartbeating a task can go before nimbus will consider the task dead and reassign it to another location.
+     * Can be exceeded when {@link Config#TOPOLOGY_WORKER_TIMEOUT_SECS} is set.
      */
     @IsInteger
     @IsPositiveNumber
     public static final String NIMBUS_TASK_TIMEOUT_SECS = "nimbus.task.timeout.secs";
 
-
     /**
      * How often nimbus should wake up to check heartbeats and do reassignments. Note that if a machine ever goes down Nimbus will
      * immediately wake up and take action. This parameter is for checking for failures when there's no explicit event like that occurring.
@@ -234,6 +234,7 @@
      *
      * <p>A separate timeout exists for launch because there can be quite a bit of overhead
      * to launching new JVM's and configuring them.</p>
+     * Can be exceeded when {@link Config#TOPOLOGY_WORKER_TIMEOUT_SECS} is set.
      */
     @IsInteger
     @IsPositiveNumber
@@ -794,6 +795,7 @@
      * How long a worker can go without heartbeating during the initial launch before the supervisor tries to restart the worker process.
      * This value overrides supervisor.worker.timeout.secs during launch because there is additional overhead to starting and configuring the
      * JVM on launch.
+     * Can be exceeded when {@link Config#TOPOLOGY_WORKER_TIMEOUT_SECS} is set.
      */
     @IsInteger
     @IsPositiveNumber
diff --git a/storm-server/src/main/java/org/apache/storm/daemon/nimbus/Nimbus.java b/storm-server/src/main/java/org/apache/storm/daemon/nimbus/Nimbus.java
index 45add73..6ab3af5 100644
--- a/storm-server/src/main/java/org/apache/storm/daemon/nimbus/Nimbus.java
+++ b/storm-server/src/main/java/org/apache/storm/daemon/nimbus/Nimbus.java
@@ -268,6 +268,7 @@
     private final Meter shutdownCalls;
     private final Meter processWorkerMetricsCalls;
     private final Meter mkAssignmentsErrors;
+    private final Meter sendAssignmentExceptions;   // used in AssignmentDistributionService.java
 
     //Timer
     private final Timer fileUploadDuration;
@@ -305,7 +306,7 @@
         IStormClusterState state = nimbus.getStormClusterState();
         Assignment oldAssignment = state.assignmentInfo(topoId, null);
         state.removeStorm(topoId);
-        notifySupervisorsAsKilled(state, oldAssignment, nimbus.getAssignmentsDistributer());
+        notifySupervisorsAsKilled(state, oldAssignment, nimbus.getAssignmentsDistributer(), nimbus.getMetricsRegistry());
         nimbus.heartbeatsCache.removeTopo(topoId);
         nimbus.getIdToExecutors().getAndUpdate(new Dissoc<>(topoId));
         return null;
@@ -360,12 +361,12 @@
 
         return sb;
     };
-    private static final TopologyStateTransition STARTUP_WHEN_KILLED_TRANSITION = (args, nimbus, topoId, base) -> {
+    private static final TopologyStateTransition GAIN_LEADERSHIP_WHEN_KILLED_TRANSITION = (args, nimbus, topoId, base) -> {
         int delay = base.get_topology_action_options().get_kill_options().get_wait_secs();
         nimbus.delayEvent(topoId, delay, TopologyActions.REMOVE, null);
         return null;
     };
-    private static final TopologyStateTransition STARTUP_WHEN_REBALANCING_TRANSITION = (args, nimbus, topoId, base) -> {
+    private static final TopologyStateTransition GAIN_LEADERSHIP_WHEN_REBALANCING_TRANSITION = (args, nimbus, topoId, base) -> {
         int delay = base.get_topology_action_options().get_rebalance_options().get_wait_secs();
         nimbus.delayEvent(topoId, delay, TopologyActions.DO_REBALANCE, null);
         return null;
@@ -385,12 +386,12 @@
                 .put(TopologyActions.KILL, KILL_TRANSITION)
                 .build())
             .put(TopologyStatus.KILLED, new ImmutableMap.Builder<TopologyActions, TopologyStateTransition>()
-                .put(TopologyActions.STARTUP, STARTUP_WHEN_KILLED_TRANSITION)
+                .put(TopologyActions.GAIN_LEADERSHIP, GAIN_LEADERSHIP_WHEN_KILLED_TRANSITION)
                 .put(TopologyActions.KILL, KILL_TRANSITION)
                 .put(TopologyActions.REMOVE, REMOVE_TRANSITION)
                 .build())
             .put(TopologyStatus.REBALANCING, new ImmutableMap.Builder<TopologyActions, TopologyStateTransition>()
-                .put(TopologyActions.STARTUP, STARTUP_WHEN_REBALANCING_TRANSITION)
+                .put(TopologyActions.GAIN_LEADERSHIP, GAIN_LEADERSHIP_WHEN_REBALANCING_TRANSITION)
                 .put(TopologyActions.KILL, KILL_TRANSITION)
                 .put(TopologyActions.DO_REBALANCE, DO_REBALANCE_TRANSITION)
                 .build())
@@ -462,6 +463,7 @@
     private AtomicReference<Map<String, Set<List<Integer>>>> idToExecutors;
     //May be null if worker tokens are not supported by the thrift transport.
     private WorkerTokenManager workerTokenManager;
+    private boolean wasLeader = false;
 
     public Nimbus(Map<String, Object> conf, INimbus inimbus, StormMetricsRegistry metricsRegistry) throws Exception {
         this(conf, inimbus, null, null, null, null, null, metricsRegistry);
@@ -516,6 +518,7 @@
         this.shutdownCalls = metricsRegistry.registerMeter("nimbus:num-shutdown-calls");
         this.processWorkerMetricsCalls = metricsRegistry.registerMeter("nimbus:process-worker-metric-calls");
         this.mkAssignmentsErrors = metricsRegistry.registerMeter("nimbus:mkAssignments-Errors");
+        this.sendAssignmentExceptions = metricsRegistry.registerMeter(Constants.NIMBUS_SEND_ASSIGNMENT_EXCEPTIONS);
         this.fileUploadDuration = metricsRegistry.registerTimer("nimbus:files-upload-duration-ms");
         this.schedulingDuration = metricsRegistry.registerTimer("nimbus:topology-scheduling-duration-ms");
         this.numAddedExecPerScheduling = metricsRegistry.registerHistogram("nimbus:num-added-executors-per-scheduling");
@@ -1086,6 +1089,13 @@
     }
 
+    /**
+     * Create a normalized topology conf.
+     *
+     * @param conf the nimbus conf
+     * @param topoConf initial topology conf
+     * @param topology the Storm topology
+     */
     @SuppressWarnings("unchecked")
     private static Map<String, Object> normalizeConf(Map<String, Object> conf, Map<String, Object> topoConf, StormTopology topology) {
         //ensure that serializations are same for all tasks no matter what's on
         // the supervisors. this also allows you to declare the serializations as a sequence
@@ -1113,6 +1123,31 @@
         ret.put(Config.TOPOLOGY_ACKER_EXECUTORS, mergedConf.get(Config.TOPOLOGY_ACKER_EXECUTORS));
         ret.put(Config.TOPOLOGY_EVENTLOGGER_EXECUTORS, mergedConf.get(Config.TOPOLOGY_EVENTLOGGER_EXECUTORS));
         ret.put(Config.TOPOLOGY_MAX_TASK_PARALLELISM, mergedConf.get(Config.TOPOLOGY_MAX_TASK_PARALLELISM));
+
+        // Don't allow topoConf to override various cluster-specific properties.
+        // Specifically adding the cluster settings to the topoConf here will make sure these settings
+        // also override the subsequently generated conf picked up locally on the classpath.
+        //
+        // We will be dealing with 3 confs:
+        // 1) the submitted topoConf created here
+        // 2) the combined classpath conf with the topoConf added on top
+        // 3) the nimbus conf with conf 2 above added on top.
+        //
+        // By first forcing the topology conf to contain the nimbus settings, we guarantee all three confs
+        // will have the correct settings that cannot be overridden by the submitter.
+        ret.put(Config.STORM_CGROUP_HIERARCHY_DIR, conf.get(Config.STORM_CGROUP_HIERARCHY_DIR));
+        ret.put(Config.WORKER_METRICS, conf.get(Config.WORKER_METRICS));
+
+        if (mergedConf.containsKey(Config.TOPOLOGY_WORKER_TIMEOUT_SECS)) {
+            int workerTimeoutSecs = ObjectReader.getInt(mergedConf.get(Config.TOPOLOGY_WORKER_TIMEOUT_SECS));
+            int workerMaxTimeoutSecs = ObjectReader.getInt(mergedConf.get(Config.WORKER_MAX_TIMEOUT_SECS));
+            if (workerTimeoutSecs > workerMaxTimeoutSecs) {
+                ret.put(Config.TOPOLOGY_WORKER_TIMEOUT_SECS, workerMaxTimeoutSecs);
+                String topoId = (String) mergedConf.get(Config.STORM_ID);
+                LOG.warn("Topology {} topology.worker.timeout.secs is too large. Reducing from {} to {}",
+                    topoId, workerTimeoutSecs, workerMaxTimeoutSecs);
+            }
+        }
         return ret;
     }
 
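The clamp introduced in this hunk can be sketched in isolation (an illustrative model only; `TimeoutCap.capWorkerTimeout` is a hypothetical helper, not Storm API):

```java
// Minimal model of the clamp applied in normalizeConf: a submitter-supplied
// topology.worker.timeout.secs may never exceed the cluster-wide
// worker.max.timeout.secs ceiling.
public class TimeoutCap {
    public static int capWorkerTimeout(int requestedSecs, int maxSecs) {
        // Keep the smaller of the requested value and the cluster ceiling.
        return Math.min(requestedSecs, maxSecs);
    }

    public static void main(String[] args) {
        // A request of 600s against a 300s ceiling is reduced to 300s.
        System.out.println(capWorkerTimeout(600, 300));
        // A request under the ceiling is left untouched.
        System.out.println(capWorkerTimeout(120, 300));
    }
}
```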
@@ -1296,12 +1331,24 @@
                 exec.prepare();
             }
 
-            if (isLeader()) {
-                for (String topoId : state.activeStorms()) {
-                    transition(topoId, TopologyActions.STARTUP, null);
-                }
-                clusterMetricSet.setActive(true);
-            }
+            // Leadership coordination may be incomplete when launchServer is called. The previous behavior did a one-time check,
+            // which could cause Nimbus to miss TopologyActions.GAIN_LEADERSHIP transitions. A similar problem exists for
+            // HA Nimbus when newly elected as leader. Changing to a recurring check addresses both problems.
+            timer.scheduleRecurring(3, 5,
+                () -> {
+                    try {
+                        boolean isLeader = isLeader();
+                        if (isLeader && !wasLeader) {
+                            for (String topoId : state.activeStorms()) {
+                                transition(topoId, TopologyActions.GAIN_LEADERSHIP, null);
+                            }
+                            clusterMetricSet.setActive(true);
+                        }
+                        wasLeader = isLeader;
+                    } catch (Exception e) {
+                        throw new RuntimeException(e);
+                    }
+                });
 
             final boolean doNotReassign = (Boolean) conf.getOrDefault(ServerConfigUtils.NIMBUS_DO_NOT_REASSIGN, false);
             timer.scheduleRecurring(0, ObjectReader.getInt(conf.get(DaemonConfig.NIMBUS_MONITOR_FREQ_SECS)),
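The recurring check above is edge-triggered: transitions fire only when `isLeader()` flips from false to true. A toy model of that pattern (names are illustrative stand-ins, not Storm API):

```java
// GAIN_LEADERSHIP work runs only on the false -> true edge of isLeader(),
// so an already-leading Nimbus does not re-trigger it on every poll.
public class LeaderEdge {
    private boolean wasLeader = false;
    private int gainLeadershipFired = 0;

    public void poll(boolean isLeader) {
        if (isLeader && !wasLeader) {
            gainLeadershipFired++; // stand-in for transitioning each active topology
        }
        wasLeader = isLeader;
    }

    public int fired() {
        return gainLeadershipFired;
    }

    public static void main(String[] args) {
        LeaderEdge nimbus = new LeaderEdge();
        nimbus.poll(false); // not yet elected
        nimbus.poll(true);  // edge: fires once
        nimbus.poll(true);  // still leader: no re-fire
        nimbus.poll(false); // lost leadership
        nimbus.poll(true);  // re-elected: fires again
        System.out.println(nimbus.fired()); // 2
    }
}
```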
@@ -1533,7 +1580,8 @@
      */
     private static void notifySupervisorsAssignments(Map<String, Assignment> assignments,
                                                      AssignmentDistributionService service, Map<String, String> nodeHost,
-                                                     Map<String, SupervisorDetails> supervisorDetails) {
+                                                     Map<String, SupervisorDetails> supervisorDetails,
+                                                     StormMetricsRegistry metricsRegistry) {
         for (Map.Entry<String, String> nodeEntry : nodeHost.entrySet()) {
             try {
                 String nodeId = nodeEntry.getKey();
@@ -1541,7 +1589,7 @@
                 supervisorAssignments.set_storm_assignment(assignmentsForNode(assignments, nodeEntry.getKey()));
                 SupervisorDetails details = supervisorDetails.get(nodeId);
                 Integer serverPort = details != null ? details.getServerPort() : null;
-                service.addAssignmentsForNode(nodeId, nodeEntry.getValue(), serverPort, supervisorAssignments);
+                service.addAssignmentsForNode(nodeId, nodeEntry.getValue(), serverPort, supervisorAssignments, metricsRegistry);
             } catch (Throwable tr1) {
                 //just skip when any error happens wait for next round assignments reassign
                 LOG.error("Exception when add assignments distribution task for node {}", nodeEntry.getKey());
@@ -1550,10 +1598,10 @@
     }
 
     private static void notifySupervisorsAsKilled(IStormClusterState clusterState, Assignment oldAss,
-                                                  AssignmentDistributionService service) {
+                                                  AssignmentDistributionService service, StormMetricsRegistry metricsRegistry) {
         Map<String, String> nodeHost = assignmentChangedNodes(oldAss, null);
         notifySupervisorsAssignments(clusterState.assignmentsInfo(), service, nodeHost,
-                                     basicSupervisorDetailsMap(clusterState));
+                                     basicSupervisorDetailsMap(clusterState), metricsRegistry);
     }
 
     @VisibleForTesting
@@ -1609,6 +1657,10 @@
         return assignmentsDistributer;
     }
 
+    private StormMetricsRegistry getMetricsRegistry() {
+        return metricsRegistry;
+    }
+
     @VisibleForTesting
     public HeartbeatCache getHeartbeatsCache() {
         return heartbeatsCache;
@@ -1729,8 +1781,8 @@
                         throw new RuntimeException(message);
                     }
 
-                    if (TopologyActions.STARTUP != event) {
-                        //STARTUP is a system event so don't log an issue
+                    if (TopologyActions.GAIN_LEADERSHIP != event) {
+                        //GAIN_LEADERSHIP is a system event so don't log an issue
                         LOG.info(message);
                     }
                     transition = NOOP_TRANSITION;
@@ -1862,8 +1914,7 @@
         IStormClusterState state = stormClusterState;
         Map<List<Integer>, Map<String, Object>> executorBeats =
             StatsUtil.convertExecutorBeats(state.executorBeats(topoId, existingAssignment.get_executor_node_port()));
-        heartbeatsCache.updateFromZkHeartbeat(topoId, executorBeats, allExecutors,
-            ObjectReader.getInt(conf.get(DaemonConfig.NIMBUS_TASK_TIMEOUT_SECS)));
+        heartbeatsCache.updateFromZkHeartbeat(topoId, executorBeats, allExecutors, getTopologyHeartbeatTimeoutSecs(topoId));
     }
 
     /**
@@ -1880,17 +1931,21 @@
                 updateHeartbeatsFromZkHeartbeat(topoId, topologyToExecutors.get(topoId), entry.getValue());
             } else {
                 LOG.debug("Timing out old heartbeats for {}", topoId);
-                heartbeatsCache.timeoutOldHeartbeats(topoId, ObjectReader.getInt(conf.get(DaemonConfig.NIMBUS_TASK_TIMEOUT_SECS)));
+                heartbeatsCache.timeoutOldHeartbeats(topoId, getTopologyHeartbeatTimeoutSecs(topoId));
             }
         }
     }
 
-    private void updateCachedHeartbeatsFromWorker(SupervisorWorkerHeartbeat workerHeartbeat) {
-        heartbeatsCache.updateHeartbeat(workerHeartbeat, ObjectReader.getInt(conf.get(DaemonConfig.NIMBUS_TASK_TIMEOUT_SECS)));
+    private void updateCachedHeartbeatsFromWorker(SupervisorWorkerHeartbeat workerHeartbeat, int heartbeatTimeoutSecs) {
+        heartbeatsCache.updateHeartbeat(workerHeartbeat, heartbeatTimeoutSecs);
     }
 
     private void updateCachedHeartbeatsFromSupervisor(SupervisorWorkerHeartbeats workerHeartbeats) {
-        workerHeartbeats.get_worker_heartbeats().forEach(this::updateCachedHeartbeatsFromWorker);
+        for (SupervisorWorkerHeartbeat hb : workerHeartbeats.get_worker_heartbeats()) {
+            String topoId = hb.get_storm_id();
+            int heartbeatTimeoutSecs = getTopologyHeartbeatTimeoutSecs(topoId);
+            updateCachedHeartbeatsFromWorker(hb, heartbeatTimeoutSecs);
+        }
         if (!heartbeatsReadyFlag.get() && !Strings.isNullOrEmpty(workerHeartbeats.get_supervisor_id())) {
             heartbeatsRecoveryStrategy.reportNodeId(workerHeartbeats.get_supervisor_id());
         }
@@ -1927,8 +1982,7 @@
     }
 
     private Set<List<Integer>> aliveExecutors(String topoId, Set<List<Integer>> allExecutors, Assignment assignment) {
-        return heartbeatsCache.getAliveExecutors(topoId, allExecutors, assignment,
-            ObjectReader.getInt(conf.get(DaemonConfig.NIMBUS_TASK_LAUNCH_SECS)));
+        return heartbeatsCache.getAliveExecutors(topoId, allExecutors, assignment, getTopologyLaunchHeartbeatTimeoutSec(topoId));
     }
 
     private List<List<Integer>> computeExecutors(String topoId, StormBase base, Map<String, Object> topoConf,
@@ -2473,7 +2527,7 @@
                 totalAssignmentsChangedNodes.putAll(assignmentChangedNodes(existingAssignment, assignment));
             }
             notifySupervisorsAssignments(newAssignments, assignmentsDistributer, totalAssignmentsChangedNodes,
-                    basicSupervisorDetailsMap);
+                    basicSupervisorDetailsMap, getMetricsRegistry());
 
             Map<String, Collection<WorkerSlot>> addedSlots = new HashMap<>();
             for (Entry<String, Assignment> entry : newAssignments.entrySet()) {
@@ -2508,6 +2562,35 @@
         base.set_principal((String) topoConf.get(Config.TOPOLOGY_SUBMITTER_PRINCIPAL));
     }
 
+    // A topology may set a custom heartbeat timeout via topology.worker.timeout.secs.
+    private int getTopologyHeartbeatTimeoutSecs(Map<String, Object> topoConf) {
+        int defaultNimbusTimeout = ObjectReader.getInt(conf.get(DaemonConfig.NIMBUS_TASK_TIMEOUT_SECS));
+        if (topoConf.containsKey(Config.TOPOLOGY_WORKER_TIMEOUT_SECS)) {
+            int topoTimeout = ObjectReader.getInt(topoConf.get(Config.TOPOLOGY_WORKER_TIMEOUT_SECS));
+            topoTimeout = Math.max(topoTimeout, defaultNimbusTimeout);
+            return topoTimeout;
+        }
+
+        return defaultNimbusTimeout;
+    }
+
+    private int getTopologyHeartbeatTimeoutSecs(String topoId) {
+        try {
+            Map<String, Object> topoConf = tryReadTopoConf(topoId, topoCache);
+            return getTopologyHeartbeatTimeoutSecs(topoConf);
+        } catch (Exception e) {
+            // swallow any exception and fall back to the nimbus default
+            LOG.warn("Exception when getting heartbeat timeout: {}", e.getMessage());
+            return ObjectReader.getInt(conf.get(DaemonConfig.NIMBUS_TASK_TIMEOUT_SECS));
+        }
+    }
+
+    private int getTopologyLaunchHeartbeatTimeoutSec(String topoId) {
+        int nimbusLaunchTimeout = ObjectReader.getInt(conf.get(DaemonConfig.NIMBUS_TASK_LAUNCH_SECS));
+        int topoHeartbeatTimeoutSecs = getTopologyHeartbeatTimeoutSecs(topoId);
+        return Math.max(nimbusLaunchTimeout, topoHeartbeatTimeoutSecs);
+    }
+
     private void startTopology(String topoName, String topoId, TopologyStatus initStatus, String owner,
                                String principal, Map<String, Object> topoConf, StormTopology stormTopology)
         throws KeyNotFoundException, AuthorizationException, IOException, InvalidTopologyException {
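The timeout-resolution rules added in this hunk can be summarized standalone (a sketch assuming the config values are already read; `Integer topoTimeoutSecs` is null when the topology did not set one, and the helper names are hypothetical):

```java
// Resolution rules for the per-topology timeouts introduced above.
public class EffectiveTimeout {
    // Heartbeat timeout: the topology value may raise, but never lower,
    // the nimbus default (nimbus.task.timeout.secs).
    public static int heartbeatTimeout(Integer topoTimeoutSecs, int nimbusDefaultSecs) {
        if (topoTimeoutSecs == null) {
            return nimbusDefaultSecs;
        }
        return Math.max(topoTimeoutSecs, nimbusDefaultSecs);
    }

    // Launch timeout: the larger of nimbus.task.launch.secs and the
    // topology's already-resolved heartbeat timeout.
    public static int launchTimeout(int nimbusLaunchSecs, int topoHeartbeatSecs) {
        return Math.max(nimbusLaunchSecs, topoHeartbeatSecs);
    }

    public static void main(String[] args) {
        System.out.println(heartbeatTimeout(null, 30)); // unset: nimbus default wins
        System.out.println(heartbeatTimeout(120, 30));  // topology raises the timeout
        System.out.println(heartbeatTimeout(10, 30));   // cannot go below the default
        System.out.println(launchTimeout(120, 240));    // launch honors the larger value
    }
}
```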
@@ -2819,6 +2902,7 @@
         if (resources != null) {
             ret.set_used_mem(resources.getUsedMem());
             ret.set_used_cpu(resources.getUsedCpu());
+            ret.set_used_generic_resources(resources.getUsedGenericResources());
             if (isFragmented(resources)) {
                 final double availableCpu = resources.getAvailableCpu();
                 if (availableCpu < 0) {
@@ -2918,9 +3002,11 @@
                 summary.set_requested_memonheap(resources.getRequestedMemOnHeap());
                 summary.set_requested_memoffheap(resources.getRequestedMemOffHeap());
                 summary.set_requested_cpu(resources.getRequestedCpu());
+                summary.set_requested_generic_resources(resources.getRequestedGenericResources());
                 summary.set_assigned_memonheap(resources.getAssignedMemOnHeap());
                 summary.set_assigned_memoffheap(resources.getAssignedMemOffHeap());
                 summary.set_assigned_cpu(resources.getAssignedCpu());
+                summary.set_assigned_generic_resources(resources.getAssignedGenericResources());
             }
             try {
                 summary.set_replication_count(getBlobReplicationCount(ConfigUtils.masterStormCodeKey(topoId)));
@@ -3058,6 +3144,7 @@
             if (!(Boolean) conf.getOrDefault(DaemonConfig.STORM_TOPOLOGY_CLASSPATH_BEGINNING_ENABLED, false)) {
                 topoConf.remove(Config.TOPOLOGY_CLASSPATH_BEGINNING);
             }
+
             String topoVersionString = topology.get_storm_version();
             if (topoVersionString == null) {
                 topoVersionString = (String) conf.getOrDefault(Config.SUPERVISOR_WORKER_DEFAULT_VERSION, VersionInfo.getVersion());
@@ -4129,6 +4216,8 @@
                 topoPageInfo.set_assigned_regular_off_heap_memory(resources.getAssignedNonSharedMemOffHeap());
                 topoPageInfo.set_assigned_shared_on_heap_memory(resources.getAssignedSharedMemOnHeap());
                 topoPageInfo.set_assigned_regular_on_heap_memory(resources.getAssignedNonSharedMemOnHeap());
+                topoPageInfo.set_assigned_generic_resources(resources.getAssignedGenericResources());
+                topoPageInfo.set_requested_generic_resources(resources.getRequestedGenericResources());
             }
             int launchTimeSecs = common.launchTimeSecs;
             topoPageInfo.set_name(topoName);
@@ -4705,11 +4794,11 @@
         String id = hb.get_storm_id();
         try {
             Map<String, Object> topoConf = tryReadTopoConf(id, topoCache);
-            topoConf = Utils.merge(conf, topoConf);
             String topoName = (String) topoConf.get(Config.TOPOLOGY_NAME);
             checkAuthorization(topoName, topoConf, "sendSupervisorWorkerHeartbeat");
             if (isLeader()) {
-                updateCachedHeartbeatsFromWorker(hb);
+                int heartbeatTimeoutSecs = getTopologyHeartbeatTimeoutSecs(topoConf);
+                updateCachedHeartbeatsFromWorker(hb, heartbeatTimeoutSecs);
             }
         } catch (Exception e) {
             LOG.warn("Send HB exception. (topology id='{}')", id, e);
diff --git a/storm-server/src/main/java/org/apache/storm/daemon/nimbus/TopologyActions.java b/storm-server/src/main/java/org/apache/storm/daemon/nimbus/TopologyActions.java
index 05f9996..4c160cd 100644
--- a/storm-server/src/main/java/org/apache/storm/daemon/nimbus/TopologyActions.java
+++ b/storm-server/src/main/java/org/apache/storm/daemon/nimbus/TopologyActions.java
@@ -16,7 +16,7 @@
  * Actions that can be done to a topology in nimbus.
  */
 public enum TopologyActions {
-    STARTUP,
+    GAIN_LEADERSHIP,
     INACTIVATE,
     ACTIVATE,
     REBALANCE,
diff --git a/storm-server/src/main/java/org/apache/storm/daemon/nimbus/TopologyResources.java b/storm-server/src/main/java/org/apache/storm/daemon/nimbus/TopologyResources.java
index f0db842..0daa7c7 100644
--- a/storm-server/src/main/java/org/apache/storm/daemon/nimbus/TopologyResources.java
+++ b/storm-server/src/main/java/org/apache/storm/daemon/nimbus/TopologyResources.java
@@ -13,13 +13,16 @@
 package org.apache.storm.daemon.nimbus;
 
 import java.util.Collection;
+import java.util.HashMap;
 import java.util.Map;
+
 import org.apache.storm.generated.Assignment;
 import org.apache.storm.generated.NodeInfo;
 import org.apache.storm.generated.WorkerResources;
 import org.apache.storm.scheduler.SchedulerAssignment;
 import org.apache.storm.scheduler.TopologyDetails;
 import org.apache.storm.scheduler.WorkerSlot;
+import org.apache.storm.scheduler.resource.normalization.NormalizedResourceRequest;
 
 public final class TopologyResources {
     private final double requestedMemOnHeap;
@@ -29,6 +32,7 @@
     private final double requestedNonSharedMemOnHeap;
     private final double requestedNonSharedMemOffHeap;
     private final double requestedCpu;
+    private Map<String, Double> requestedGenericResources;
     private double assignedMemOnHeap;
     private double assignedMemOffHeap;
     private double assignedSharedMemOnHeap;
@@ -36,6 +40,7 @@
     private double assignedNonSharedMemOnHeap;
     private double assignedNonSharedMemOffHeap;
     private double assignedCpu;
+    private Map<String, Double> assignedGenericResources;
 
     private TopologyResources(TopologyDetails td, Collection<WorkerResources> workers,
                               Map<String, Double> nodeIdToSharedOffHeapNode) {
@@ -46,6 +51,7 @@
         requestedNonSharedMemOnHeap = td.getRequestedNonSharedOnHeap();
         requestedNonSharedMemOffHeap = td.getRequestedNonSharedOffHeap();
         requestedCpu = td.getTotalRequestedCpu();
+        requestedGenericResources = td.getTotalRequestedGenericResources();
         assignedMemOnHeap = 0.0;
         assignedMemOffHeap = 0.0;
         assignedSharedMemOnHeap = 0.0;
@@ -53,6 +59,7 @@
         assignedNonSharedMemOnHeap = 0.0;
         assignedNonSharedMemOffHeap = 0.0;
         assignedCpu = 0.0;
+        assignedGenericResources = new HashMap<>();
 
         if (workers != null) {
             for (WorkerResources resources : workers) {
@@ -72,6 +79,7 @@
                     assignedNonSharedMemOffHeap -= resources.get_shared_mem_off_heap();
                 }
             }
+            assignedGenericResources = computeAssignedGenericResources(workers);
         }
 
         if (nodeIdToSharedOffHeapNode != null) {
@@ -81,6 +89,15 @@
         }
     }
 
+    private Map<String, Double> computeAssignedGenericResources(Collection<WorkerResources> workers) {
+        Map<String, Double> genericResources = new HashMap<>();
+        for (WorkerResources worker : workers) {
+            genericResources = NormalizedResourceRequest.addResourceMap(genericResources, worker.get_resources());
+        }
+        NormalizedResourceRequest.removeNonGenericResources(genericResources);
+        return genericResources;
+    }
+
     public TopologyResources(TopologyDetails td, SchedulerAssignment assignment) {
         this(td, getWorkerResources(assignment), getNodeIdToSharedOffHeapNode(assignment));
     }
@@ -90,7 +107,7 @@
     }
 
     public TopologyResources() {
-        this(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
+        this(0, 0, 0, 0, 0, 0, 0, new HashMap<>(), 0, 0, 0, 0, 0, 0, 0, new HashMap<>());
     }
 
     protected TopologyResources(
@@ -101,13 +118,15 @@
         double requestedNonSharedMemOnHeap,
         double requestedNonSharedMemOffHeap,
         double requestedCpu,
+        Map<String, Double> requestedGenericResources,
         double assignedMemOnHeap,
         double assignedMemOffHeap,
         double assignedSharedMemOnHeap,
         double assignedSharedMemOffHeap,
         double assignedNonSharedMemOnHeap,
         double assignedNonSharedMemOffHeap,
-        double assignedCpu) {
+        double assignedCpu,
+        Map<String, Double> assignedGenericResources) {
         this.requestedMemOnHeap = requestedMemOnHeap;
         this.requestedMemOffHeap = requestedMemOffHeap;
         this.requestedSharedMemOnHeap = requestedSharedMemOnHeap;
@@ -115,6 +134,7 @@
         this.requestedNonSharedMemOnHeap = requestedNonSharedMemOnHeap;
         this.requestedNonSharedMemOffHeap = requestedNonSharedMemOffHeap;
         this.requestedCpu = requestedCpu;
+        this.requestedGenericResources = requestedGenericResources;
         this.assignedMemOnHeap = assignedMemOnHeap;
         this.assignedMemOffHeap = assignedMemOffHeap;
         this.assignedSharedMemOnHeap = assignedSharedMemOnHeap;
@@ -122,6 +142,7 @@
         this.assignedNonSharedMemOnHeap = assignedNonSharedMemOnHeap;
         this.assignedNonSharedMemOffHeap = assignedNonSharedMemOffHeap;
         this.assignedCpu = assignedCpu;
+        this.assignedGenericResources = assignedGenericResources;
     }
 
     private static Collection<WorkerResources> getWorkerResources(SchedulerAssignment assignment) {
@@ -246,6 +267,14 @@
         this.assignedNonSharedMemOffHeap = assignedNonSharedMemOffHeap;
     }
 
+    public Map<String, Double> getAssignedGenericResources() {
+        return new HashMap<>(assignedGenericResources);
+    }
+
+    public Map<String, Double> getRequestedGenericResources() {
+        return new HashMap<>(requestedGenericResources);
+    }
+
     /**
      * Add the values in other to this and return a combined resources object.
      * @param other the other resources to add to this
@@ -260,12 +289,14 @@
             requestedNonSharedMemOnHeap + other.requestedNonSharedMemOnHeap,
             requestedNonSharedMemOffHeap + other.requestedNonSharedMemOffHeap,
             requestedCpu + other.requestedCpu,
+            NormalizedResourceRequest.addResourceMap(requestedGenericResources, other.requestedGenericResources),
             assignedMemOnHeap + other.assignedMemOnHeap,
             assignedMemOffHeap + other.assignedMemOffHeap,
             assignedSharedMemOnHeap + other.assignedSharedMemOnHeap,
             assignedSharedMemOffHeap + other.assignedSharedMemOffHeap,
             assignedNonSharedMemOnHeap + other.assignedNonSharedMemOnHeap,
             assignedNonSharedMemOffHeap + other.assignedNonSharedMemOffHeap,
-            assignedCpu + other.assignedCpu);
+            assignedCpu + other.assignedCpu,
+            NormalizedResourceRequest.addResourceMap(assignedGenericResources, other.assignedGenericResources));
     }
 }
diff --git a/storm-server/src/main/java/org/apache/storm/daemon/supervisor/Container.java b/storm-server/src/main/java/org/apache/storm/daemon/supervisor/Container.java
index a25faad..c09b6f2 100644
--- a/storm-server/src/main/java/org/apache/storm/daemon/supervisor/Container.java
+++ b/storm-server/src/main/java/org/apache/storm/daemon/supervisor/Container.java
@@ -35,6 +35,7 @@
 import java.util.Map;
 import java.util.Optional;
 import java.util.Set;
+import org.apache.commons.lang.StringUtils;
 import org.apache.storm.Config;
 import org.apache.storm.DaemonConfig;
 import org.apache.storm.container.ResourceIsolationInterface;
@@ -49,6 +50,7 @@
 import org.apache.storm.metricstore.WorkerMetricsProcessor;
 import org.apache.storm.utils.ConfigUtils;
 import org.apache.storm.utils.LocalState;
+import org.apache.storm.utils.ObjectReader;
 import org.apache.storm.utils.ServerConfigUtils;
 import org.apache.storm.utils.ServerUtils;
 import org.apache.storm.utils.Utils;
@@ -89,6 +91,7 @@
     protected ContainerMemoryTracker containerMemoryTracker;
     private long lastMetricProcessTime = 0L;
     private Timer.Context shutdownTimer = null;
+    private String cachedUser;
 
     /**
      * Create a new Container.
@@ -418,6 +421,13 @@
         }
         data.put(DaemonConfig.LOGS_USERS, logsUsers.toArray());
 
+        if (topoConf.get(Config.TOPOLOGY_WORKER_TIMEOUT_SECS) != null) {
+            int topoTimeout = ObjectReader.getInt(topoConf.get(Config.TOPOLOGY_WORKER_TIMEOUT_SECS));
+            int defaultWorkerTimeout = ObjectReader.getInt(conf.get(Config.SUPERVISOR_WORKER_TIMEOUT_SECS));
+            topoTimeout = Math.max(topoTimeout, defaultWorkerTimeout);
+            data.put(Config.TOPOLOGY_WORKER_TIMEOUT_SECS, topoTimeout);
+        }
+
         File file = ServerConfigUtils.getLogMetaDataFile(conf, topologyId, port);
 
         Yaml yaml = new Yaml();
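The logs-metadata addition above floors the worker timeout at the supervisor default before writing it out, so the logviewer never sees a value below supervisor.worker.timeout.secs. A sketch of that rule (the config key is real; the helper and surrounding shape are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Write the effective worker timeout into the logs metadata map,
// floored at the supervisor-wide default.
public class LogsMetadataTimeout {
    public static Map<String, Object> withWorkerTimeout(Map<String, Object> data,
                                                        Integer topoTimeoutSecs,
                                                        int supervisorDefaultSecs) {
        if (topoTimeoutSecs != null) {
            data.put("topology.worker.timeout.secs",
                     Math.max(topoTimeoutSecs, supervisorDefaultSecs));
        }
        return data;
    }

    public static void main(String[] args) {
        Map<String, Object> data = withWorkerTimeout(new HashMap<>(), 10, 30);
        System.out.println(data.get("topology.worker.timeout.secs")); // floored to 30
    }
}
```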
@@ -520,20 +530,36 @@
      * @throws IOException on any error
      */
     protected String getWorkerUser() throws IOException {
+        if (cachedUser != null) {
+            return cachedUser;
+        }
+
         LOG.info("GET worker-user for {}", workerId);
         File file = new File(ConfigUtils.workerUserFile(conf, workerId));
-
         if (ops.fileExists(file)) {
-            return ops.slurpString(file).trim();
-        } else if (assignment != null && assignment.is_set_owner()) {
-            return assignment.get_owner();
+            cachedUser = ops.slurpString(file).trim();
+            if (!StringUtils.isBlank(cachedUser)) {
+                return cachedUser;
+            }
         }
+
+        if (assignment != null && assignment.is_set_owner()) {
+            cachedUser = assignment.get_owner();
+            if (!StringUtils.isBlank(cachedUser)) {
+                return cachedUser;
+            }
+        }
+
         if (ConfigUtils.isLocalMode(conf)) {
-            return System.getProperty("user.name");
+            cachedUser = System.getProperty("user.name");
+            return cachedUser;
         } else {
             File f = new File(ConfigUtils.workerArtifactsRoot(conf));
             if (f.exists()) {
-                return Files.getOwner(f.toPath()).getName();
+                cachedUser = Files.getOwner(f.toPath()).getName();
+                if (!StringUtils.isBlank(cachedUser)) {
+                    return cachedUser;
+                }
             }
             throw new IllegalStateException("Could not recover the user for " + workerId);
         }
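The reworked `getWorkerUser` is a memoized fallback chain: the first non-blank answer from an ordered list of sources is cached, so later calls skip file I/O. Its shape, with illustrative stand-in sources:

```java
import java.util.function.Supplier;

// Memoized first-non-blank lookup, mirroring the caching added above.
public class CachedLookup {
    private String cachedUser;
    private final Supplier<String>[] sources;

    @SafeVarargs
    public CachedLookup(Supplier<String>... sources) {
        this.sources = sources;
    }

    public String workerUser() {
        if (cachedUser != null) {
            return cachedUser; // cache hit: no source is consulted
        }
        for (Supplier<String> source : sources) {
            String candidate = source.get();
            if (candidate != null && !candidate.trim().isEmpty()) {
                cachedUser = candidate;
                return cachedUser;
            }
        }
        throw new IllegalStateException("Could not recover the user");
    }

    public static void main(String[] args) {
        // First source is blank (like an empty worker-user file),
        // so the second source wins and is cached.
        CachedLookup lookup = new CachedLookup(() -> "  ", () -> "alice");
        System.out.println(lookup.workerUser()); // alice
    }
}
```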
diff --git a/storm-server/src/main/java/org/apache/storm/daemon/supervisor/Slot.java b/storm-server/src/main/java/org/apache/storm/daemon/supervisor/Slot.java
index c8e6f19..85e5f9a 100644
--- a/storm-server/src/main/java/org/apache/storm/daemon/supervisor/Slot.java
+++ b/storm-server/src/main/java/org/apache/storm/daemon/supervisor/Slot.java
@@ -12,8 +12,10 @@
 
 package org.apache.storm.daemon.supervisor;
 
+import com.google.common.annotations.VisibleForTesting;
 import java.io.IOException;
 import java.util.Collections;
+import java.util.Comparator;
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.Iterator;
@@ -36,13 +38,13 @@
 import org.apache.storm.generated.LocalAssignment;
 import org.apache.storm.generated.ProfileAction;
 import org.apache.storm.generated.ProfileRequest;
+import org.apache.storm.generated.WorkerResources;
 import org.apache.storm.localizer.AsyncLocalizer;
 import org.apache.storm.localizer.BlobChangingCallback;
 import org.apache.storm.localizer.GoodToGo;
 import org.apache.storm.localizer.LocallyCachedBlob;
 import org.apache.storm.metricstore.WorkerMetricsProcessor;
 import org.apache.storm.scheduler.ISupervisor;
-import org.apache.storm.shade.com.google.common.annotations.VisibleForTesting;
 import org.apache.storm.utils.EnumUtil;
 import org.apache.storm.utils.LocalState;
 import org.apache.storm.utils.ObjectReader;
@@ -174,9 +176,80 @@
     }
 
     /**
-     * Decide the equivalence of two local assignments, ignoring the order of executors
-     * This is different from #equal method.
-     * @param first Local assignment A
+     * Compares two WorkerResources objects, treating any null resource values as 0.0.
+     *
+     * @param first  WorkerResources A
+     * @param second WorkerResources B
+     * @return true if A and B are equivalent, treating absent resources as 0.0
+     */
+    @VisibleForTesting
+    static boolean customWorkerResourcesEquality(WorkerResources first, WorkerResources second) {
+        if (first == null || second == null) {
+            return first == second;
+        }
+        if (first == second) {
+            return true;
+        }
+        if (first.equals(second)) {
+            return true;
+        }
+
+        if (first.get_cpu() != second.get_cpu()) {
+            return false;
+        }
+        if (first.get_mem_on_heap() != second.get_mem_on_heap()) {
+            return false;
+        }
+        if (first.get_mem_off_heap() != second.get_mem_off_heap()) {
+            return false;
+        }
+        if (first.get_shared_mem_off_heap() != second.get_shared_mem_off_heap()) {
+            return false;
+        }
+        if (first.get_shared_mem_on_heap() != second.get_shared_mem_on_heap()) {
+            return false;
+        }
+        if (!customResourceMapEquality(first.get_resources(), second.get_resources())) {
+            return false;
+        }
+        if (!customResourceMapEquality(first.get_shared_resources(), second.get_shared_resources())) {
+            return false;
+        }
+        return true;
+    }
+
+    /**
+     * Compares two resource maps, treating a null map or a missing key as 0.0.
+     *
+     * @param firstMap  Resource Map A
+     * @param secondMap Resource Map B
+     * @return True if A and B are equivalent, treating the absent resources as 0.0
+     */
+    private static boolean customResourceMapEquality(Map<String, Double> firstMap, Map<String, Double> secondMap) {
+        if (firstMap == null && secondMap == null) {
+            return true;
+        }
+        if (firstMap == null) {
+            firstMap = new HashMap<>();
+        }
+        if (secondMap == null) {
+            secondMap = new HashMap<>();
+        }
+
+        Set<String> keys = new HashSet<>(firstMap.keySet());
+        keys.addAll(secondMap.keySet());
+        for (String key : keys) {
+            if (firstMap.getOrDefault(key, 0.0).doubleValue() != secondMap.getOrDefault(key, 0.0).doubleValue()) {
+                return false;
+            }
+        }
+        return true;
+    }
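As a standalone illustration of the null-tolerant comparison above, here is a minimal sketch; the class and method names are illustrative, not the Storm API:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class ResourceMapEquality {
    // Treat a null map as empty and a missing key as 0.0, so that
    // {"gpu": 0.0} and an absent entry compare as equivalent.
    static boolean equivalent(Map<String, Double> a, Map<String, Double> b) {
        if (a == null) {
            a = new HashMap<>();
        }
        if (b == null) {
            b = new HashMap<>();
        }
        // Union of keys from both maps, so neither side's extras are missed.
        Set<String> keys = new HashSet<>(a.keySet());
        keys.addAll(b.keySet());
        for (String key : keys) {
            if (a.getOrDefault(key, 0.0).doubleValue() != b.getOrDefault(key, 0.0).doubleValue()) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Map<String, Double> withZero = new HashMap<>();
        withZero.put("gpu", 0.0);
        System.out.println(equivalent(withZero, null));            // true: explicit 0.0 == absent
        withZero.put("gpu", 1.0);
        System.out.println(equivalent(withZero, new HashMap<>())); // false: 1.0 != absent
    }
}
```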
+
+    /**
+     * Decide the equivalence of two local assignments, ignoring the order of executors. This is different from the #equals method.
+     *
+     * @param first  Local assignment A
      * @param second Local assignment B
      * @return True if A and B are equivalent, ignoring the order of the executors
      */
@@ -196,9 +269,9 @@
                         return true;
                     }
                     if (firstHasResources && secondHasResources) {
-                        if (first.get_resources().equals(second.get_resources())) {
-                            return true;
-                        }
+                        WorkerResources firstResources = first.get_resources();
+                        WorkerResources secondResources = second.get_resources();
+                        return customWorkerResourcesEquality(firstResources, secondResources);
                     }
                 }
             }
@@ -667,7 +740,8 @@
         LSWorkerHeartbeat hb = dynamicState.container.readHeartbeat();
         if (hb != null) {
             long hbAgeMs = (Time.currentTimeSecs() - hb.get_time_secs()) * 1000;
-            if (hbAgeMs <= staticState.hbTimeoutMs) {
+            long hbTimeoutMs = getHbTimeoutMs(staticState, dynamicState);
+            if (hbAgeMs <= hbTimeoutMs) {
                 return dynamicState.withState(MachineState.RUNNING);
             }
         }
@@ -681,9 +755,11 @@
         dynamicState = updateAssignmentIfNeeded(dynamicState);
 
         long timeDiffms = (Time.currentTimeMillis() - dynamicState.startTime);
-        if (timeDiffms > staticState.firstHbTimeoutMs) {
+        long hbFirstTimeoutMs = getFirstHbTimeoutMs(staticState, dynamicState);
+        if (timeDiffms > hbFirstTimeoutMs) {
+            staticState.slotMetrics.numWorkerStartTimedOut.mark();
             LOG.warn("SLOT {}: Container {} failed to launch in {} ms.", staticState.port, dynamicState.container,
-                     staticState.firstHbTimeoutMs);
+                    hbFirstTimeoutMs);
             return killContainerFor(KillReason.HB_TIMEOUT, dynamicState, staticState);
         }
 
@@ -745,8 +821,9 @@
         }
 
         long timeDiffMs = (Time.currentTimeSecs() - hb.get_time_secs()) * 1000;
-        if (timeDiffMs > staticState.hbTimeoutMs) {
-            LOG.warn("SLOT {}: HB is too old {} > {}", staticState.port, timeDiffMs, staticState.hbTimeoutMs);
+        long hbTimeoutMs = getHbTimeoutMs(staticState, dynamicState);
+        if (timeDiffMs > hbTimeoutMs) {
+            LOG.warn("SLOT {}: HB is too old {} > {}", staticState.port, timeDiffMs, hbTimeoutMs);
             return killContainerFor(KillReason.HB_TIMEOUT, dynamicState, staticState);
         }
 
@@ -833,6 +910,30 @@
         return dynamicState.state;
     }
 
+    /*
+     * Get the worker heartbeat timeout in ms, using the topology-specific timeout if it exceeds the supervisor default.
+     */
+    private static long getHbTimeoutMs(StaticState staticState, DynamicState dynamicState) {
+        long hbTimeoutMs = staticState.hbTimeoutMs;
+        Map<String, Object> topoConf = dynamicState.container.topoConf;
+
+        if (topoConf != null && topoConf.containsKey(Config.TOPOLOGY_WORKER_TIMEOUT_SECS)) {
+            long topoHbTimeoutMs = ObjectReader.getInt(topoConf.get(Config.TOPOLOGY_WORKER_TIMEOUT_SECS)) * 1000;
+            topoHbTimeoutMs = Math.max(topoHbTimeoutMs, hbTimeoutMs);
+            hbTimeoutMs = topoHbTimeoutMs;
+        }
+
+        return hbTimeoutMs;
+    }
+
+    /*
+     * Get the heartbeat timeout to use while waiting for a worker to start.
+     * If a topology-specific timeout is set, ensure the first heartbeat timeout is >= that timeout.
+     */
+    private static long getFirstHbTimeoutMs(StaticState staticState, DynamicState dynamicState) {
+        return Math.max(getHbTimeoutMs(staticState, dynamicState), staticState.firstHbTimeoutMs);
+    }
+
     /**
      * Set a new assignment asynchronously.
      * @param newAssignment the new assignment for this slot to run, null to run nothing
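The two timeout helpers above reduce to a pair of `Math.max` computations: a topology may raise, but never lower, the supervisor-wide defaults. A simplified sketch of that selection logic (field and config names abbreviated; not the actual Slot code):

```java
public class WorkerTimeouts {
    // Effective heartbeat timeout: the topology-specific value is honored
    // only when it is larger than the supervisor default.
    static long effectiveHbTimeoutMs(long supervisorDefaultMs, Integer topoTimeoutSecs) {
        long timeoutMs = supervisorDefaultMs;
        if (topoTimeoutSecs != null) {
            timeoutMs = Math.max(topoTimeoutSecs * 1000L, timeoutMs);
        }
        return timeoutMs;
    }

    // First-heartbeat timeout (worker launch) must be at least as long as
    // the effective heartbeat timeout.
    static long effectiveFirstHbTimeoutMs(long supervisorDefaultMs, long firstHbDefaultMs,
                                          Integer topoTimeoutSecs) {
        return Math.max(effectiveHbTimeoutMs(supervisorDefaultMs, topoTimeoutSecs), firstHbDefaultMs);
    }

    public static void main(String[] args) {
        // 30s supervisor default, topology asks for 120s -> topology wins
        System.out.println(effectiveHbTimeoutMs(30_000L, 120));  // 120000
        // topology asks for 10s, below the 30s default -> default wins
        System.out.println(effectiveHbTimeoutMs(30_000L, 10));   // 30000
    }
}
```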
diff --git a/storm-server/src/main/java/org/apache/storm/daemon/supervisor/SlotMetrics.java b/storm-server/src/main/java/org/apache/storm/daemon/supervisor/SlotMetrics.java
index f8e13fd..8b2f5f1 100644
--- a/storm-server/src/main/java/org/apache/storm/daemon/supervisor/SlotMetrics.java
+++ b/storm-server/src/main/java/org/apache/storm/daemon/supervisor/SlotMetrics.java
@@ -26,6 +26,7 @@
 class SlotMetrics {
 
     final Meter numWorkersLaunched;
+    final Meter numWorkerStartTimedOut;
     final Map<Slot.KillReason, Meter> numWorkersKilledFor;
     final Timer workerLaunchDuration;
     final Map<Slot.MachineState, Meter> transitionIntoState;
@@ -34,6 +35,7 @@
 
     SlotMetrics(StormMetricsRegistry metricsRegistry) {
         numWorkersLaunched = metricsRegistry.registerMeter("supervisor:num-workers-launched");
+        numWorkerStartTimedOut = metricsRegistry.registerMeter("supervisor:num-worker-start-timed-out");
         numWorkersKilledFor = Collections.unmodifiableMap(EnumUtil.toEnumMap(Slot.KillReason.class,
             killReason -> metricsRegistry.registerMeter("supervisor:num-workers-killed-" + killReason.toString())));
         workerLaunchDuration = metricsRegistry.registerTimer("supervisor:worker-launch-duration");
diff --git a/storm-server/src/main/java/org/apache/storm/daemon/supervisor/SupervisorUtils.java b/storm-server/src/main/java/org/apache/storm/daemon/supervisor/SupervisorUtils.java
index a0d0397..5e4ce3f 100644
--- a/storm-server/src/main/java/org/apache/storm/daemon/supervisor/SupervisorUtils.java
+++ b/storm-server/src/main/java/org/apache/storm/daemon/supervisor/SupervisorUtils.java
@@ -117,10 +117,6 @@
         return _instance.readWorkerHeartbeatImpl(conf, workerId);
     }
 
-    public static boolean isWorkerHbTimedOut(int now, LSWorkerHeartbeat whb, Map<String, Object> conf) {
-        return _instance.isWorkerHbTimedOutImpl(now, whb, conf);
-    }
-
     public Map<String, LSWorkerHeartbeat> readWorkerHeartbeatsImpl(Map<String, Object> conf) {
         Map<String, LSWorkerHeartbeat> workerHeartbeats = new HashMap<>();
 
@@ -143,8 +139,4 @@
             return null;
         }
     }
-
-    private boolean isWorkerHbTimedOutImpl(int now, LSWorkerHeartbeat whb, Map<String, Object> conf) {
-        return (now - whb.get_time_secs()) > ObjectReader.getInt(conf.get(Config.SUPERVISOR_WORKER_TIMEOUT_SECS));
-    }
 }
diff --git a/storm-server/src/main/java/org/apache/storm/localizer/AsyncLocalizer.java b/storm-server/src/main/java/org/apache/storm/localizer/AsyncLocalizer.java
index eaa6384..48ad7e8 100644
--- a/storm-server/src/main/java/org/apache/storm/localizer/AsyncLocalizer.java
+++ b/storm-server/src/main/java/org/apache/storm/localizer/AsyncLocalizer.java
@@ -94,7 +94,8 @@
     private final ConcurrentHashMap<String, CompletableFuture<Void>> topologyBasicDownloaded = new ConcurrentHashMap<>();
     private final Path localBaseDir;
     private final int blobDownloadRetries;
-    private final ScheduledExecutorService execService;
+    private final ScheduledExecutorService downloadExecService;
+    private final ScheduledExecutorService taskExecService;
     private final long cacheCleanupPeriod;
     private final StormMetricsRegistry metricsRegistry;
     // cleanup
@@ -119,13 +120,14 @@
         cacheCleanupPeriod = ObjectReader.getInt(conf.get(
             DaemonConfig.SUPERVISOR_LOCALIZER_CACHE_CLEANUP_INTERVAL_MS), 30 * 1000).longValue();
 
-        // if we needed we could make config for update thread pool size
-        int threadPoolSize = ObjectReader.getInt(conf.get(DaemonConfig.SUPERVISOR_BLOBSTORE_DOWNLOAD_THREAD_COUNT), 5);
         blobDownloadRetries = ObjectReader.getInt(conf.get(
             DaemonConfig.SUPERVISOR_BLOBSTORE_DOWNLOAD_MAX_RETRIES), 3);
 
-        execService = Executors.newScheduledThreadPool(threadPoolSize,
-                                                       new ThreadFactoryBuilder().setNameFormat("AsyncLocalizer Executor - %d").build());
+        int downloadThreadPoolSize = ObjectReader.getInt(conf.get(DaemonConfig.SUPERVISOR_BLOBSTORE_DOWNLOAD_THREAD_COUNT), 5);
+        downloadExecService = Executors.newScheduledThreadPool(downloadThreadPoolSize,
+                new ThreadFactoryBuilder().setNameFormat("AsyncLocalizer Download Executor - %d").build());
+        taskExecService = Executors.newScheduledThreadPool(3,
+                new ThreadFactoryBuilder().setNameFormat("AsyncLocalizer Task Executor - %d").build());
         reconstructLocalizedResources();
 
         symlinksDisabled = (boolean) conf.getOrDefault(Config.DISABLE_SYMLINKS, false);
@@ -212,7 +214,7 @@
             blobPending.compute(topologyId, (tid, old) -> {
                 CompletableFuture<Void> ret = old;
                 if (ret == null) {
-                    ret = CompletableFuture.supplyAsync(new DownloadBlobs(pna, cb), execService);
+                    ret = CompletableFuture.supplyAsync(new DownloadBlobs(pna, cb), taskExecService);
                 } else {
                     try {
                         addReferencesToBlobs(pna, cb);
@@ -260,22 +262,26 @@
                     while (!done) {
                         try {
                             synchronized (blob) {
-                                long localVersion = blob.getLocalVersion();
-                                long remoteVersion = blob.getRemoteVersion(blobStore);
-                                if (localVersion != remoteVersion || !blob.isFullyDownloaded()) {
-                                    if (blob.isFullyDownloaded()) {
-                                        //Avoid case of different blob version
-                                        // when blob is not downloaded (first time download)
-                                        numBlobUpdateVersionChanged.mark();
+                                if (blob.isUsed()) {
+                                    long localVersion = blob.getLocalVersion();
+                                    long remoteVersion = blob.getRemoteVersion(blobStore);
+                                    if (localVersion != remoteVersion || !blob.isFullyDownloaded()) {
+                                        if (blob.isFullyDownloaded()) {
+                                            //Avoid case of different blob version
+                                            // when blob is not downloaded (first time download)
+                                            numBlobUpdateVersionChanged.mark();
+                                        }
+                                        Timer.Context t = singleBlobLocalizationDuration.time();
+                                        try {
+                                            long newVersion = blob.fetchUnzipToTemp(blobStore);
+                                            blob.informReferencesAndCommitNewVersion(newVersion);
+                                            t.stop();
+                                        } finally {
+                                            blob.cleanupOrphanedData();
+                                        }
                                     }
-                                    Timer.Context t = singleBlobLocalizationDuration.time();
-                                    try {
-                                        long newVersion = blob.fetchUnzipToTemp(blobStore);
-                                        blob.informReferencesAndCommitNewVersion(newVersion);
-                                        t.stop();
-                                    } finally {
-                                        blob.cleanupOrphanedData();
-                                    }
+                                } else {
+                                    LOG.debug("Skipping update of unused blob {}", blob);
                                 }
                             }
                             done = true;
@@ -290,7 +296,7 @@
                     }
                 }
                 LOG.debug("FINISHED download of {}", blob);
-            }, execService);
+            }, downloadExecService);
             i++;
         }
         return CompletableFuture.allOf(all);
@@ -336,14 +342,15 @@
      * Start any background threads needed.  This includes updating blobs and cleaning up unused blobs over the configured size limit.
      */
     public void start() {
-        execService.scheduleWithFixedDelay(this::updateBlobs, 30, 30, TimeUnit.SECONDS);
+        taskExecService.scheduleWithFixedDelay(this::updateBlobs, 30, 30, TimeUnit.SECONDS);
         LOG.debug("Scheduling cleanup every {} millis", cacheCleanupPeriod);
-        execService.scheduleAtFixedRate(this::cleanup, cacheCleanupPeriod, cacheCleanupPeriod, TimeUnit.MILLISECONDS);
+        taskExecService.scheduleAtFixedRate(this::cleanup, cacheCleanupPeriod, cacheCleanupPeriod, TimeUnit.MILLISECONDS);
     }
 
     @Override
     public void close() throws InterruptedException {
-        execService.shutdown();
+        downloadExecService.shutdown();
+        taskExecService.shutdown();
     }
 
     private List<LocalResource> getLocalResources(PortAndAssignment pna) throws IOException {
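The AsyncLocalizer change above splits one `execService` into a download pool and a task pool, so long-running blob downloads cannot starve the periodic update/cleanup tasks. A minimal sketch of that pattern using plain `Executors` (the diff itself uses Guava's `ThreadFactoryBuilder` for thread naming; pool sizes here mirror the diff but are otherwise arbitrary):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SplitPools {
    static final AtomicInteger housekeepingRuns = new AtomicInteger();

    public static void main(String[] args) throws Exception {
        // Downloads get their own pool so a burst of slow blob fetches
        // cannot delay the periodic housekeeping scheduled on the task pool.
        ScheduledExecutorService downloadPool = Executors.newScheduledThreadPool(5);
        ScheduledExecutorService taskPool = Executors.newScheduledThreadPool(3);

        // Saturate the download pool with slow work.
        for (int i = 0; i < 5; i++) {
            downloadPool.submit(() -> {
                try {
                    Thread.sleep(500);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        // Housekeeping still runs promptly because it lives on taskPool.
        taskPool.scheduleWithFixedDelay(housekeepingRuns::incrementAndGet, 0, 50, TimeUnit.MILLISECONDS);

        TimeUnit.MILLISECONDS.sleep(200);
        downloadPool.shutdownNow();
        taskPool.shutdownNow();
        System.out.println("housekeeping ran " + housekeepingRuns.get() + " times");
    }
}
```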
diff --git a/storm-server/src/main/java/org/apache/storm/localizer/LocalizedResource.java b/storm-server/src/main/java/org/apache/storm/localizer/LocalizedResource.java
index 87bd970..f984def 100644
--- a/storm-server/src/main/java/org/apache/storm/localizer/LocalizedResource.java
+++ b/storm-server/src/main/java/org/apache/storm/localizer/LocalizedResource.java
@@ -143,6 +143,9 @@
     }
 
     static void completelyRemoveUnusedUser(Path localBaseDir, String user) throws IOException {
+        Path localUserDir = getLocalUserDir(localBaseDir, user);
+        LOG.info("completelyRemoveUnusedUser {} for directory {}", user, localUserDir);
+
         Path userFileCacheDir = getLocalUserFileCacheDir(localBaseDir, user);
         // baseDir/supervisor/usercache/user1/filecache/files
         Files.deleteIfExists(getCacheDirForFiles(userFileCacheDir));
@@ -151,7 +154,7 @@
         // baseDir/supervisor/usercache/user1/filecache
         Files.deleteIfExists(userFileCacheDir);
         // baseDir/supervisor/usercache/user1
-        Files.deleteIfExists(getLocalUserDir(localBaseDir, user));
+        Files.deleteIfExists(localUserDir);
     }
 
     static List<String> getLocalizedArchiveKeys(Path localBaseDir, String user) throws IOException {
@@ -254,9 +257,12 @@
                 if (!Files.exists(parent)) {
                     //There is a race here that we can still lose
                     try {
-                        Files.createDirectory(parent);
+                        Files.createDirectories(parent);
                     } catch (FileAlreadyExistsException e) {
                         //Ignored
+                    } catch (IOException e) {
+                        LOG.error("Failed to create parent directory {}", parent, e);
+                        throw e;
                     }
                 }
                 return path;
@@ -397,7 +403,7 @@
                 }
             }
         } catch (NoSuchFileException e) {
-            LOG.warn("Nothing to cleanup with badeDir {} even though we expected there to be something there", baseDir);
+            LOG.warn("Nothing to cleanup with baseDir {} even though we expected there to be something there", baseDir);
         }
     }
 
diff --git a/storm-server/src/main/java/org/apache/storm/metric/StormMetricsRegistry.java b/storm-server/src/main/java/org/apache/storm/metric/StormMetricsRegistry.java
index cc98804..5db347e 100644
--- a/storm-server/src/main/java/org/apache/storm/metric/StormMetricsRegistry.java
+++ b/storm-server/src/main/java/org/apache/storm/metric/StormMetricsRegistry.java
@@ -68,6 +68,10 @@
         registry.removeMatching((name, metric) -> nameToMetric.containsKey(name));
     }
 
+    public Meter getMeter(String meterName) {
+        return registry.getMeters().get(meterName);
+    }
+
     public void startMetricsReporters(Map<String, Object> daemonConf) {
         reporters = MetricsUtils.getPreparableReporters(daemonConf);
         for (PreparableReporter reporter : reporters) {
diff --git a/storm-server/src/main/java/org/apache/storm/nimbus/AssignmentDistributionService.java b/storm-server/src/main/java/org/apache/storm/nimbus/AssignmentDistributionService.java
index 4f84997..4eb1bb4 100644
--- a/storm-server/src/main/java/org/apache/storm/nimbus/AssignmentDistributionService.java
+++ b/storm-server/src/main/java/org/apache/storm/nimbus/AssignmentDistributionService.java
@@ -21,9 +21,12 @@
 import java.util.concurrent.Executors;
 import java.util.concurrent.LinkedBlockingQueue;
 import java.util.concurrent.TimeUnit;
+
+import org.apache.storm.Constants;
 import org.apache.storm.DaemonConfig;
 import org.apache.storm.daemon.supervisor.Supervisor;
 import org.apache.storm.generated.SupervisorAssignments;
+import org.apache.storm.metric.StormMetricsRegistry;
 import org.apache.storm.utils.ConfigUtils;
 import org.apache.storm.utils.ObjectReader;
 import org.apache.storm.utils.SupervisorClient;
@@ -146,7 +149,8 @@
      * @param serverPort node thrift server port.
      * @param assignments the {@link org.apache.storm.generated.SupervisorAssignments}
      */
-    public void addAssignmentsForNode(String node, String host, Integer serverPort, SupervisorAssignments assignments) {
+    public void addAssignmentsForNode(String node, String host, Integer serverPort, SupervisorAssignments assignments,
+                                      StormMetricsRegistry metricsRegistry) {
         try {
             //For some reasons, we can not get supervisor port info, eg: supervisor shutdown,
             //Just skip for this scheduling round.
@@ -155,7 +159,8 @@
                 return;
             }
 
-            boolean success = nextQueue().offer(NodeAssignments.getInstance(node, host, serverPort, assignments), 5L, TimeUnit.SECONDS);
+            boolean success = nextQueue().offer(NodeAssignments.getInstance(node, host, serverPort,
+                                                assignments, metricsRegistry), 5L, TimeUnit.SECONDS);
             if (!success) {
                 LOG.warn("Discard an assignment distribution for node {} because the target sub queue is full.", node);
             }
@@ -211,17 +216,20 @@
         private String host;
         private Integer serverPort;
         private SupervisorAssignments assignments;
+        private StormMetricsRegistry metricsRegistry;
 
-        private NodeAssignments(String node, String host, Integer serverPort, SupervisorAssignments assignments) {
+        private NodeAssignments(String node, String host, Integer serverPort, SupervisorAssignments assignments,
+                                StormMetricsRegistry metricsRegistry) {
             this.node = node;
             this.host = host;
             this.serverPort = serverPort;
             this.assignments = assignments;
+            this.metricsRegistry = metricsRegistry;
         }
 
         public static NodeAssignments getInstance(String node, String host, Integer serverPort,
-                                                  SupervisorAssignments assignments) {
-            return new NodeAssignments(node, host, serverPort, assignments);
+                                                  SupervisorAssignments assignments, StormMetricsRegistry metricsRegistry) {
+            return new NodeAssignments(node, host, serverPort, assignments, metricsRegistry);
         }
 
         //supervisor assignment id/supervisor id
@@ -241,6 +249,9 @@
             return this.assignments;
         }
 
+        public StormMetricsRegistry getMetricsRegistry() {
+            return metricsRegistry;
+        }
     }
 
     /**
@@ -289,14 +300,13 @@
                     try {
                         client.getIface().sendSupervisorAssignments(assignments.getAssignments());
                     } catch (Exception e) {
-                        //just ignore the exception.
+                        assignments.getMetricsRegistry().getMeter(Constants.NIMBUS_SEND_ASSIGNMENT_EXCEPTIONS).mark();
                         LOG.error("Exception when trying to send assignments to node {}: {}", assignments.getNode(), e.getMessage());
                     }
                 } catch (Throwable e) {
                     //just ignore any error/exception.
                     LOG.error("Exception to create supervisor client for node {}: {}", assignments.getNode(), e.getMessage());
                 }
-
             }
         }
     }
diff --git a/storm-server/src/main/java/org/apache/storm/pacemaker/PacemakerServer.java b/storm-server/src/main/java/org/apache/storm/pacemaker/PacemakerServer.java
index d8364dc..19e6cbe 100644
--- a/storm-server/src/main/java/org/apache/storm/pacemaker/PacemakerServer.java
+++ b/storm-server/src/main/java/org/apache/storm/pacemaker/PacemakerServer.java
@@ -63,9 +63,8 @@
         switch (auth) {
 
             case "DIGEST":
-                Configuration loginConf = ClientAuthUtils.getConfiguration(config);
                 authMethod = ThriftNettyServerCodec.AuthMethod.DIGEST;
-                this.secret = ClientAuthUtils.makeDigestPayload(loginConf, ClientAuthUtils.LOGIN_CONTEXT_PACEMAKER_DIGEST);
+                this.secret = ClientAuthUtils.makeDigestPayload(config, ClientAuthUtils.LOGIN_CONTEXT_PACEMAKER_DIGEST);
                 if (this.secret == null) {
                     LOG.error("Can't start pacemaker server without digest secret.");
                     throw new RuntimeException("Can't start pacemaker server without digest secret.");
diff --git a/storm-server/src/main/java/org/apache/storm/scheduler/Cluster.java b/storm-server/src/main/java/org/apache/storm/scheduler/Cluster.java
index e4a9110..b5fe659 100644
--- a/storm-server/src/main/java/org/apache/storm/scheduler/Cluster.java
+++ b/storm-server/src/main/java/org/apache/storm/scheduler/Cluster.java
@@ -962,7 +962,8 @@
     public Map<String, SupervisorResources> getSupervisorsResourcesMap() {
         Map<String, SupervisorResources> ret = new HashMap<>();
         for (SupervisorDetails sd : supervisors.values()) {
-            ret.put(sd.getId(), new SupervisorResources(sd.getTotalMemory(), sd.getTotalCpu(), 0, 0));
+            ret.put(sd.getId(), new SupervisorResources(sd.getTotalMemory(), sd.getTotalCpu(), sd.getTotalGenericResources(),
+                0, 0, new HashMap<>()));
         }
         for (SchedulerAssignmentImpl assignment : assignments.values()) {
             for (Entry<WorkerSlot, WorkerResources> entry :
@@ -970,7 +971,9 @@
                 String id = entry.getKey().getNodeId();
                 SupervisorResources sr = ret.get(id);
                 if (sr == null) {
-                    sr = new SupervisorResources(0, 0, 0, 0);
+                    sr = new SupervisorResources(0, 0, new HashMap<>(),
+                        0, 0, new HashMap<>());
+
                 }
                 sr = sr.add(entry.getValue());
                 ret.put(id, sr);
@@ -981,7 +984,8 @@
                     String id = entry.getKey();
                     SupervisorResources sr = ret.get(id);
                     if (sr == null) {
-                        sr = new SupervisorResources(0, 0, 0, 0);
+                        sr = new SupervisorResources(0, 0, new HashMap<>(),
+                            0, 0, new HashMap<>());
                     }
                     sr = sr.addMem(entry.getValue());
                     ret.put(id, sr);
diff --git a/storm-server/src/main/java/org/apache/storm/scheduler/SupervisorDetails.java b/storm-server/src/main/java/org/apache/storm/scheduler/SupervisorDetails.java
index 2700871..1886273 100644
--- a/storm-server/src/main/java/org/apache/storm/scheduler/SupervisorDetails.java
+++ b/storm-server/src/main/java/org/apache/storm/scheduler/SupervisorDetails.java
@@ -17,7 +17,7 @@
 import java.util.Map;
 import java.util.Set;
 import org.apache.storm.scheduler.resource.normalization.NormalizedResourceOffer;
-import org.apache.storm.scheduler.resource.normalization.ResourceMetrics;
+import org.apache.storm.scheduler.resource.normalization.NormalizedResourceRequest;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -153,6 +153,15 @@
         return totalResources.getTotalCpu();
     }
 
+    /**
+     * Get the total generic resources on this supervisor.
+     */
+    public Map<String, Double> getTotalGenericResources() {
+        Map<String, Double> genericResources = totalResources.toNormalizedMap();
+        NormalizedResourceRequest.removeNonGenericResources(genericResources);
+        return genericResources;
+    }
+
     /**
      * Get all resources for this Supervisor.
      */
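`getTotalGenericResources` above follows a copy-then-filter pattern: normalize all resources into a map, then strip the CPU/memory keys so only generic resources (e.g. GPUs) remain. A hedged sketch of that pattern, with hypothetical key names standing in for Storm's normalized resource-name constants:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class GenericResourceFilter {
    // Hypothetical stand-ins for Storm's normalized CPU/memory resource names.
    static final Set<String> NON_GENERIC =
        Set.of("cpu.pcore.percent", "onheap.memory.mb", "offheap.memory.mb");

    static Map<String, Double> genericOnly(Map<String, Double> normalized) {
        // Copy first so the caller's map is never mutated.
        Map<String, Double> copy = new HashMap<>(normalized);
        copy.keySet().removeAll(NON_GENERIC);  // keep only generic resources such as GPUs
        return copy;
    }

    public static void main(String[] args) {
        Map<String, Double> total = new HashMap<>();
        total.put("cpu.pcore.percent", 400.0);
        total.put("offheap.memory.mb", 4096.0);
        total.put("gpu.count", 2.0);
        System.out.println(genericOnly(total));  // {gpu.count=2.0}
    }
}
```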
diff --git a/storm-server/src/main/java/org/apache/storm/scheduler/SupervisorResources.java b/storm-server/src/main/java/org/apache/storm/scheduler/SupervisorResources.java
index 8ee686f..2c578d6 100644
--- a/storm-server/src/main/java/org/apache/storm/scheduler/SupervisorResources.java
+++ b/storm-server/src/main/java/org/apache/storm/scheduler/SupervisorResources.java
@@ -18,27 +18,38 @@
 
 package org.apache.storm.scheduler;
 
+import java.util.HashMap;
+import java.util.Map;
+
 import org.apache.storm.generated.WorkerResources;
+import org.apache.storm.scheduler.resource.normalization.NormalizedResourceRequest;
 
 public class SupervisorResources {
     private final double totalMem;
     private final double totalCpu;
     private final double usedMem;
     private final double usedCpu;
+    private Map<String, Double> totalGenericResources;
+    private Map<String, Double> usedGenericResources;
 
     /**
      * Constructor for a Supervisor's resources.
      *
      * @param totalMem the total mem on the supervisor
      * @param totalCpu the total CPU on the supervisor
+     * @param totalGenericResources the total generic resources on the supervisor
      * @param usedMem  the used mem on the supervisor
      * @param usedCpu  the used CPU on the supervisor
+     * @param usedGenericResources the used generic resources on the supervisor
      */
-    public SupervisorResources(double totalMem, double totalCpu, double usedMem, double usedCpu) {
+    public SupervisorResources(double totalMem, double totalCpu, Map<String, Double> totalGenericResources,
+                               double usedMem, double usedCpu, Map<String, Double> usedGenericResources) {
         this.totalMem = totalMem;
         this.totalCpu = totalCpu;
         this.usedMem = usedMem;
         this.usedCpu = usedCpu;
+        this.totalGenericResources = totalGenericResources != null ? totalGenericResources : new HashMap<>();
+        this.usedGenericResources = usedGenericResources != null ? usedGenericResources : new HashMap<>();
     }
 
     public double getUsedMem() {
@@ -65,15 +76,29 @@
         return totalMem - usedMem;
     }
 
-    SupervisorResources add(WorkerResources wr) {
+    public Map<String, Double> getTotalGenericResources() {
+        return new HashMap<>(totalGenericResources);
+    }
+
+    public Map<String, Double> getUsedGenericResources() {
+        return new HashMap<>(usedGenericResources);
+    }
+
+    public SupervisorResources add(WorkerResources wr) {
+        usedGenericResources = NormalizedResourceRequest.addResourceMap(usedGenericResources, wr.get_resources());
+        NormalizedResourceRequest.removeNonGenericResources(usedGenericResources);
+
         return new SupervisorResources(
-            totalMem,
-            totalCpu,
-            usedMem + wr.get_mem_off_heap() + wr.get_mem_on_heap(),
-            usedCpu + wr.get_cpu());
+                totalMem,
+                totalCpu,
+                getTotalGenericResources(),
+                usedMem + wr.get_mem_off_heap() + wr.get_mem_on_heap(),
+                usedCpu + wr.get_cpu(),
+                getUsedGenericResources());
     }
 
     public SupervisorResources addMem(Double value) {
-        return new SupervisorResources(totalMem, totalCpu, usedMem + value, usedCpu);
+        return new SupervisorResources(totalMem, totalCpu, getTotalGenericResources(),
+                usedMem + value, usedCpu, getUsedGenericResources());
     }
 }
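`SupervisorResources.add` above returns a new object with defensive copies of the generic-resource maps rather than handing out its internal state. A stripped-down sketch of that copy-on-add value type (simplified field set and a hypothetical class name, not the Storm class):

```java
import java.util.HashMap;
import java.util.Map;

public final class ResourceTally {
    private final double usedCpu;
    private final Map<String, Double> usedGeneric;

    ResourceTally(double usedCpu, Map<String, Double> usedGeneric) {
        this.usedCpu = usedCpu;
        // Defensive copy so callers cannot mutate our state afterwards.
        this.usedGeneric = new HashMap<>(usedGeneric);
    }

    // Return a new tally instead of mutating this one, mirroring the
    // copy-on-add style of SupervisorResources.add().
    ResourceTally add(double cpu, Map<String, Double> generic) {
        Map<String, Double> merged = new HashMap<>(usedGeneric);
        generic.forEach((k, v) -> merged.merge(k, v, Double::sum));
        return new ResourceTally(usedCpu + cpu, merged);
    }

    double getUsedCpu() {
        return usedCpu;
    }

    Map<String, Double> getUsedGeneric() {
        return new HashMap<>(usedGeneric);  // copy out, never the internal map
    }

    public static void main(String[] args) {
        ResourceTally t = new ResourceTally(0.0, new HashMap<>());
        Map<String, Double> worker = new HashMap<>();
        worker.put("gpu", 1.0);
        ResourceTally t2 = t.add(50.0, worker).add(25.0, worker);
        System.out.println(t2.getUsedCpu());                 // 75.0
        System.out.println(t2.getUsedGeneric().get("gpu"));  // 2.0
    }
}
```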
diff --git a/storm-server/src/main/java/org/apache/storm/scheduler/TopologyDetails.java b/storm-server/src/main/java/org/apache/storm/scheduler/TopologyDetails.java
index 9b8298e..be20834 100644
--- a/storm-server/src/main/java/org/apache/storm/scheduler/TopologyDetails.java
+++ b/storm-server/src/main/java/org/apache/storm/scheduler/TopologyDetails.java
@@ -342,7 +342,7 @@
     }
 
     /**
-     * Get an approximate total resources needed for this topology.
+     * Get the approximate total resources needed for this topology. Ignores shared memory.
      * @return the approximate total resources needed for this topology.
      */
     public NormalizedResourceRequest getApproximateTotalResources() {
@@ -354,14 +354,27 @@
     }
 
     /**
+     * Get the approximate resources for the given topology executors. Ignores shared memory.
+     *
+     * @param execs the executors to compute resources for.
+     * @return the approximate resources for the executors.
+     */
+    public NormalizedResourceRequest getApproximateResources(Set<ExecutorDetails> execs) {
+        NormalizedResourceRequest ret = new NormalizedResourceRequest();
+        execs.stream()
+            .filter(x -> hasExecInTopo(x))
+            .forEach(x -> ret.add(resourceList.get(x)));
+        return ret;
+    }
+
+    /**
      * Get the total CPU requirement for executor.
      *
      * @return generic resource mapping requirement for the executor
      */
     public Double getTotalCpuReqTask(ExecutorDetails exec) {
         if (hasExecInTopo(exec)) {
-            return resourceList
-                .get(exec).getTotalCpu();
+            return resourceList.get(exec).getTotalCpu();
         }
         return null;
     }
@@ -442,6 +455,12 @@
         return totalCpu;
     }
 
+    public Map<String, Double> getTotalRequestedGenericResources() {
+        Map<String, Double> map = getApproximateTotalResources().toNormalizedMap();
+        NormalizedResourceRequest.removeNonGenericResources(map);
+        return map;
+    }
+
     /**
      * get the resources requirements for a executor.
      *
diff --git a/storm-server/src/main/java/org/apache/storm/scheduler/resource/ResourceAwareScheduler.java b/storm-server/src/main/java/org/apache/storm/scheduler/resource/ResourceAwareScheduler.java
index ff5f526..75ae8c7 100644
--- a/storm-server/src/main/java/org/apache/storm/scheduler/resource/ResourceAwareScheduler.java
+++ b/storm-server/src/main/java/org/apache/storm/scheduler/resource/ResourceAwareScheduler.java
@@ -14,6 +14,7 @@
 
 import java.util.ArrayList;
 import java.util.Collection;
+import java.util.Collections;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
@@ -32,6 +33,8 @@
 import org.apache.storm.scheduler.Topologies;
 import org.apache.storm.scheduler.TopologyDetails;
 import org.apache.storm.scheduler.WorkerSlot;
+import org.apache.storm.scheduler.resource.normalization.NormalizedResourceOffer;
+import org.apache.storm.scheduler.resource.normalization.NormalizedResourceRequest;
 import org.apache.storm.scheduler.resource.strategies.priority.ISchedulingPriorityStrategy;
 import org.apache.storm.scheduler.resource.strategies.scheduling.IStrategy;
 import org.apache.storm.scheduler.utils.ConfigLoaderFactoryService;
@@ -68,15 +71,6 @@
         u.markTopoUnsuccess(td);
     }
 
-    private static double getCpuUsed(SchedulerAssignment assignment) {
-        return assignment.getScheduledResources().values().stream().mapToDouble((wr) -> wr.get_cpu()).sum();
-    }
-
-    private static double getMemoryUsed(SchedulerAssignment assignment) {
-        return assignment.getScheduledResources().values().stream()
-                         .mapToDouble((wr) -> wr.get_mem_on_heap() + wr.get_mem_off_heap()).sum();
-    }
-
     @Override
     public void prepare(Map<String, Object> conf) {
         this.conf = conf;
@@ -161,23 +155,28 @@
                 Config.TOPOLOGY_RAS_ONE_COMPONENT_PER_WORKER);
         }
 
+        TopologySchedulingResources topologySchedulingResources = new TopologySchedulingResources(workingState, td);
         final IStrategy finalRasStrategy = rasStrategy;
         for (int i = 0; i < maxSchedulingAttempts; i++) {
             SingleTopologyCluster toSchedule = new SingleTopologyCluster(workingState, td.getId());
             try {
                 SchedulingResult result = null;
-                Future<SchedulingResult> schedulingFuture = backgroundScheduling.submit(
-                    () -> finalRasStrategy.schedule(toSchedule, td)
-                );
-                try {
-                    result = schedulingFuture.get(schedulingTimeoutSeconds, TimeUnit.SECONDS);
-                } catch (TimeoutException te) {
-                    markFailedTopology(topologySubmitter, cluster, td, "Scheduling took too long for "
-                            + td.getId() + " using strategy " + rasStrategy.getClass().getName() + " timeout after "
-                            + schedulingTimeoutSeconds + " seconds using config "
-                            + DaemonConfig.SCHEDULING_TIMEOUT_SECONDS_PER_TOPOLOGY + ".");
-                    schedulingFuture.cancel(true);
-                    return;
+                topologySchedulingResources.resetRemaining();
+                if (topologySchedulingResources.canSchedule()) {
+                    Future<SchedulingResult> schedulingFuture = backgroundScheduling.submit(
+                        () -> finalRasStrategy.schedule(toSchedule, td));
+                    try {
+                        result = schedulingFuture.get(schedulingTimeoutSeconds, TimeUnit.SECONDS);
+                    } catch (TimeoutException te) {
+                        markFailedTopology(topologySubmitter, cluster, td, "Scheduling took too long for "
+                                + td.getId() + " using strategy " + rasStrategy.getClass().getName() + " timeout after "
+                                + schedulingTimeoutSeconds + " seconds using config "
+                                + DaemonConfig.SCHEDULING_TIMEOUT_SECONDS_PER_TOPOLOGY + ".");
+                        schedulingFuture.cancel(true);
+                        return;
+                    }
+                } else {
+                    result = SchedulingResult.failure(SchedulingStatus.FAIL_NOT_ENOUGH_RESOURCES, "");
                 }
                 LOG.debug("scheduling result: {}", result);
                 if (result == null) {
@@ -195,26 +194,20 @@
                         boolean evictedSomething = false;
                         LOG.debug("attempting to make space for topo {} from user {}", td.getName(), td.getTopologySubmitter());
                         int tdIndex = reversedList.indexOf(td);
-                        double cpuNeeded = td.getTotalRequestedCpu();
-                        double memoryNeeded = td.getTotalRequestedMemOffHeap() + td.getTotalRequestedMemOnHeap();
-                        SchedulerAssignment assignment = cluster.getAssignmentById(td.getId());
-                        if (assignment != null) {
-                            cpuNeeded -= getCpuUsed(assignment);
-                            memoryNeeded -= getMemoryUsed(assignment);
-                        }
+                        topologySchedulingResources.setRemainingRequiredResources(toSchedule, td);
+
                         for (int index = 0; index < tdIndex; index++) {
                             TopologyDetails topologyEvict = reversedList.get(index);
                             SchedulerAssignment evictAssignemnt = workingState.getAssignmentById(topologyEvict.getId());
                             if (evictAssignemnt != null && !evictAssignemnt.getSlots().isEmpty()) {
                                 Collection<WorkerSlot> workersToEvict = workingState.getUsedSlotsByTopologyId(topologyEvict.getId());
+                                topologySchedulingResources.adjustResourcesForEvictedTopology(toSchedule, topologyEvict);
 
                                 LOG.debug("Evicting Topology {} with workers: {} from user {}", topologyEvict.getName(), workersToEvict,
-                                          topologyEvict.getTopologySubmitter());
-                                cpuNeeded -= getCpuUsed(evictAssignemnt);
-                                memoryNeeded -= getMemoryUsed(evictAssignemnt);
+                                    topologyEvict.getTopologySubmitter());
                                 evictedSomething = true;
                                 nodes.freeSlots(workersToEvict);
-                                if (cpuNeeded <= 0 && memoryNeeded <= 0) {
+                                if (topologySchedulingResources.canSchedule()) {
                                     //We evicted enough topologies to have a hope of scheduling, so try it now, and don't evict more
                                     // than is needed
                                     break;
@@ -225,15 +218,7 @@
                         if (!evictedSomething) {
                             StringBuilder message = new StringBuilder();
                             message.append("Not enough resources to schedule ");
-                            if (memoryNeeded > 0 || cpuNeeded > 0) {
-                                if (memoryNeeded > 0) {
-                                    message.append(memoryNeeded).append(" MB ");
-                                }
-                                if (cpuNeeded > 0) {
-                                    message.append(cpuNeeded).append("% CPU ");
-                                }
-                                message.append("needed even after evicting lower priority topologies. ");
-                            }
+                            message.append(topologySchedulingResources.getRemainingRequiredResourcesMessage());
                             message.append(result.getErrorMessage());
                             markFailedTopology(topologySubmitter, cluster, td, message.toString());
                             return;
@@ -247,13 +232,174 @@
                 }
             } catch (Exception ex) {
                 markFailedTopology(topologySubmitter, cluster, td,
-                                   "Internal Error - Exception thrown when scheduling. Please check logs for details", ex);
+                        "Internal Error - Exception thrown when scheduling. Please check logs for details", ex);
                 return;
             }
         }
         markFailedTopology(topologySubmitter, cluster, td, "Failed to schedule within " + maxSchedulingAttempts + " attempts");
     }
 
+    /**
+     * Class for tracking resources for scheduling a topology.
+     *
+     * Ideally we would simply track NormalizedResources, but shared topology memory complicates things.
+     * Topologies with shared memory may use more than the SharedMemoryLowerBound, and topologyRequiredResources
+     * ignores shared memory.
+     *
+     * Resources are tracked in two ways:
+     * 1) AvailableResources. Track cluster available resources and required topology resources.
+     * 2) RemainingRequiredResources. Start with required topology resources, and deduct for partially scheduled and evicted topologies.
+     */
+    private class TopologySchedulingResources {
+        boolean remainingResourcesAreSet;
+
+        NormalizedResourceOffer clusterAvailableResources;
+        NormalizedResourceRequest topologyRequiredResources;
+        NormalizedResourceRequest topologyScheduledResources;
+
+        double clusterAvailableMemory;
+        double topologyRequiredNonSharedMemory;
+        double topologySharedMemoryLowerBound;
+
+        NormalizedResourceOffer remainingRequiredTopologyResources;
+        double remainingRequiredTopologyMemory;
+        double topologyScheduledMemory;
+
+        TopologySchedulingResources(Cluster cluster, TopologyDetails td) {
+            remainingResourcesAreSet = false;
+
+            // available resources (lower bound since blacklisted supervisors do not contribute)
+            clusterAvailableResources = cluster.getNonBlacklistedClusterAvailableResources(Collections.emptyList());
+            clusterAvailableMemory = clusterAvailableResources.getTotalMemoryMb();
+            // required resources
+            topologyRequiredResources = td.getApproximateTotalResources();
+            topologyRequiredNonSharedMemory = td.getRequestedNonSharedOffHeap() + td.getRequestedNonSharedOnHeap();
+            topologySharedMemoryLowerBound = td.getRequestedSharedOffHeap() + td.getRequestedSharedOnHeap();
+            // partially scheduled topology resources
+            setScheduledTopologyResources(cluster, td);
+        }
+
+        void setScheduledTopologyResources(Cluster cluster, TopologyDetails td) {
+            SchedulerAssignment assignment = cluster.getAssignmentById(td.getId());
+            if (assignment != null) {
+                topologyScheduledResources = td.getApproximateResources(assignment.getExecutors());
+                topologyScheduledMemory = computeScheduledTopologyMemory(cluster, td);
+            } else {
+                topologyScheduledResources = new NormalizedResourceRequest();
+                topologyScheduledMemory = 0;
+            }
+        }
+
+        boolean canSchedule() {
+            return canScheduleAvailable() && canScheduleRemainingRequired();
+        }
+
+        boolean canScheduleAvailable() {
+            NormalizedResourceOffer availableResources = new NormalizedResourceOffer(clusterAvailableResources);
+            availableResources.add(topologyScheduledResources);
+            boolean insufficientResources = availableResources.remove(topologyRequiredResources);
+            if (insufficientResources) {
+                return false;
+            }
+
+            double availableMemory = clusterAvailableMemory + topologyScheduledMemory;
+            double totalRequiredTopologyMemory = topologyRequiredNonSharedMemory + topologySharedMemoryLowerBound;
+            return (availableMemory >= totalRequiredTopologyMemory);
+        }
+
+        boolean canScheduleRemainingRequired() {
+            if (!remainingResourcesAreSet) {
+                return true;
+            }
+            if (remainingRequiredTopologyResources.areAnyOverZero() || (remainingRequiredTopologyMemory > 0)) {
+                return false;
+            }
+
+            return true;
+        }
+
+        // Set remainingRequiredResources following failed scheduling.
+        void setRemainingRequiredResources(Cluster cluster, TopologyDetails td) {
+            remainingResourcesAreSet = true;
+            setScheduledTopologyResources(cluster, td);
+
+            remainingRequiredTopologyResources = new NormalizedResourceOffer();
+            remainingRequiredTopologyResources.add(topologyRequiredResources);
+            remainingRequiredTopologyResources.remove(topologyScheduledResources);
+
+            remainingRequiredTopologyMemory = (topologyRequiredNonSharedMemory + topologySharedMemoryLowerBound)
+                    - (topologyScheduledMemory);
+        }
+
+        // Adjust remainingRequiredResources after evicting topology
+        void adjustResourcesForEvictedTopology(Cluster cluster, TopologyDetails evict) {
+            SchedulerAssignment assignment = cluster.getAssignmentById(evict.getId());
+            if (assignment != null) {
+                NormalizedResourceRequest evictResources = evict.getApproximateResources(assignment.getExecutors());
+                double evictedMemory = computeScheduledTopologyMemory(cluster, evict);
+
+                clusterAvailableResources.add(evictResources);
+                clusterAvailableMemory += evictedMemory;
+                remainingRequiredTopologyResources.remove(evictResources);
+                remainingRequiredTopologyMemory -= evictedMemory;
+            }
+        }
+
+        void resetRemaining() {
+            remainingResourcesAreSet = false;
+            remainingRequiredTopologyMemory = 0;
+        }
+
+        private double getMemoryUsed(SchedulerAssignment assignment) {
+            return assignment.getScheduledResources().values().stream()
+                    .mapToDouble((wr) -> wr.get_mem_on_heap() + wr.get_mem_off_heap()).sum();
+        }
+
+        // Get total memory for scheduled topology, including all shared memory
+        private double computeScheduledTopologyMemory(Cluster cluster, TopologyDetails td) {
+            SchedulerAssignment assignment = cluster.getAssignmentById(td.getId());
+            double scheduledTopologyMemory = 0;
+            // node shared memory
+            if (assignment != null) {
+                for (double mem : assignment.getNodeIdToTotalSharedOffHeapNodeMemory().values()) {
+                    scheduledTopologyMemory += mem;
+                }
+                // worker memory (shared & unshared)
+                scheduledTopologyMemory += getMemoryUsed(assignment);
+            }
+
+            return scheduledTopologyMemory;
+        }
+
+        String getRemainingRequiredResourcesMessage() {
+            StringBuilder message = new StringBuilder();
+            message.append("After evicting lower priority topologies: ");
+
+            NormalizedResourceOffer clusterRemainingAvailableResources = new NormalizedResourceOffer();
+            clusterRemainingAvailableResources.add(clusterAvailableResources);
+            clusterRemainingAvailableResources.remove(topologyScheduledResources);
+
+            double memoryNeeded = remainingRequiredTopologyMemory;
+            double cpuNeeded = remainingRequiredTopologyResources.getTotalCpu();
+            if (memoryNeeded > 0) {
+                message.append("Additional Memory Required: ").append(memoryNeeded).append(" MB ");
+                message.append("(Available: ").append(clusterRemainingAvailableResources.getTotalMemoryMb()).append(" MB). ");
+            }
+            if (cpuNeeded > 0) {
+                message.append("Additional CPU Required: ").append(cpuNeeded).append("% CPU ");
+                message.append("(Available: ").append(clusterRemainingAvailableResources.getTotalCpu()).append("% CPU).");
+            }
+            if (remainingRequiredTopologyResources.getNormalizedResources().anyNonCpuOverZero()) {
+                message.append(" Additional Topology Required Resources: ");
+                message.append(remainingRequiredTopologyResources.getNormalizedResources().toString());
+                message.append(" Cluster Available Resources: ");
+                message.append(clusterRemainingAvailableResources.getNormalizedResources().toString());
+                message.append(".  ");
+            }
+            return message.toString();
+        }
+    }
+
     /**
      * Get User wrappers around cluster.
      *
diff --git a/storm-server/src/main/java/org/apache/storm/scheduler/resource/normalization/NormalizedResourceOffer.java b/storm-server/src/main/java/org/apache/storm/scheduler/resource/normalization/NormalizedResourceOffer.java
index b8a6431..4eb16f4 100644
--- a/storm-server/src/main/java/org/apache/storm/scheduler/resource/normalization/NormalizedResourceOffer.java
+++ b/storm-server/src/main/java/org/apache/storm/scheduler/resource/normalization/NormalizedResourceOffer.java
@@ -92,12 +92,18 @@
         totalMemoryMb -= other.getTotalMemoryMb();
         if (totalMemoryMb < 0.0) {
             negativeResources = true;
-            resourceMetrics.getNegativeResourceEventsMeter().mark();
+            if (resourceMetrics != null) {
+                resourceMetrics.getNegativeResourceEventsMeter().mark();
+            }
             totalMemoryMb = 0.0;
         }
         return negativeResources;
     }
 
+    public boolean remove(NormalizedResourcesWithMemory other) {
+        return remove(other, null);
+    }
+
     /**
      * Remove the resources in other from this.
      * @param other the resources to be removed.
diff --git a/storm-server/src/main/java/org/apache/storm/scheduler/resource/normalization/NormalizedResourceRequest.java b/storm-server/src/main/java/org/apache/storm/scheduler/resource/normalization/NormalizedResourceRequest.java
index 478a8be..0a9c7f0 100644
--- a/storm-server/src/main/java/org/apache/storm/scheduler/resource/normalization/NormalizedResourceRequest.java
+++ b/storm-server/src/main/java/org/apache/storm/scheduler/resource/normalization/NormalizedResourceRequest.java
@@ -178,6 +178,43 @@
         return ret;
     }
 
+    /**
+     * Remove the non-generic resources (CPU and memory) from the given map, in place.
+     */
+    public static void removeNonGenericResources(Map<String, Double> map) {
+        map.remove(Constants.COMMON_ONHEAP_MEMORY_RESOURCE_NAME);
+        map.remove(Constants.COMMON_OFFHEAP_MEMORY_RESOURCE_NAME);
+        map.remove(Constants.COMMON_TOTAL_MEMORY_RESOURCE_NAME);
+        map.remove(Constants.COMMON_CPU_RESOURCE_NAME);
+    }
+
+    /**
+     * Return a map that is the sum of resources1 + resources2.
+     */
+    public static Map<String, Double> addResourceMap(Map<String, Double> resources1, Map<String, Double> resources2) {
+        Map<String, Double> sum = new HashMap<>(resources1);
+        for (Map.Entry<String, Double> me : resources2.entrySet()) {
+            Double cur = sum.getOrDefault(me.getKey(), 0.0) + me.getValue();
+            sum.put(me.getKey(), cur);
+        }
+        return sum;
+    }
+
+    /**
+     * Return a map that is the difference resource1 - resource2.
+     */
+    public static Map<String, Double> subtractResourceMap(Map<String, Double> resource1, Map<String, Double> resource2) {
+        if (resource1 == null || resource2 == null) {
+            return new HashMap<>();
+        }
+        Map<String, Double> difference = new HashMap<>(resource1);
+        for (Map.Entry<String, Double> me : resource2.entrySet()) {
+            Double sub = difference.getOrDefault(me.getKey(), 0.0) - me.getValue();
+            difference.put(me.getKey(), sub);
+        }
+        return difference;
+    }
+
     public double getOnHeapMemoryMb() {
         return onHeap;
     }
diff --git a/storm-server/src/main/java/org/apache/storm/scheduler/resource/normalization/NormalizedResources.java b/storm-server/src/main/java/org/apache/storm/scheduler/resource/normalization/NormalizedResources.java
index 01eba1e..ea77b1a 100644
--- a/storm-server/src/main/java/org/apache/storm/scheduler/resource/normalization/NormalizedResources.java
+++ b/storm-server/src/main/java/org/apache/storm/scheduler/resource/normalization/NormalizedResources.java
@@ -135,7 +135,9 @@
         this.cpu -= other.cpu;
         if (cpu < 0.0) {
             ret = true;
-            resourceMetrics.getNegativeResourceEventsMeter().mark();
+            if (resourceMetrics != null) {
+                resourceMetrics.getNegativeResourceEventsMeter().mark();
+            }
             cpu = 0.0;
         }
         int otherLength = other.otherResources.length;
@@ -144,7 +146,9 @@
             otherResources[i] -= other.otherResources[i];
             if (otherResources[i] < 0.0) {
                 ret = true;
-                resourceMetrics.getNegativeResourceEventsMeter().mark();
+                if (resourceMetrics != null) {
+                    resourceMetrics.getNegativeResourceEventsMeter().mark();
+                }
                 otherResources[i]  = 0.0;
             }
         }
@@ -402,16 +406,32 @@
         }
     }
 
-    /**
-     * Are any of the resources positive.
-     * @return true of any of the resources are positive.  False if they are all <= 0.
-     */
-    public boolean areAnyOverZero() {
+    private boolean areAnyOverZero(boolean skipCpuCheck) {
         for (int i = 0; i < otherResources.length; i++) {
             if (otherResources[i] > 0) {
                 return true;
             }
         }
-        return cpu > 0;
+        if (skipCpuCheck) {
+            return false;
+        } else {
+            return cpu > 0;
+        }
+    }
+
+    /**
+     * Are any of the resources positive.
+     * @return true if any of the resources are positive.  False if they are all <= 0.
+     */
+    public boolean areAnyOverZero() {
+        return areAnyOverZero(false);
+    }
+
+    /**
+     * Are any of the non-CPU resources positive.
+     * @return true if any of the non-CPU resources are positive.  False if they are all <= 0.
+     */
+    public boolean anyNonCpuOverZero() {
+        return areAnyOverZero(true);
     }
 }
diff --git a/storm-server/src/main/java/org/apache/storm/scheduler/resource/strategies/priority/FIFOSchedulingPriorityStrategy.java b/storm-server/src/main/java/org/apache/storm/scheduler/resource/strategies/priority/FIFOSchedulingPriorityStrategy.java
index 0076c75..29a0584 100644
--- a/storm-server/src/main/java/org/apache/storm/scheduler/resource/strategies/priority/FIFOSchedulingPriorityStrategy.java
+++ b/storm-server/src/main/java/org/apache/storm/scheduler/resource/strategies/priority/FIFOSchedulingPriorityStrategy.java
@@ -55,8 +55,7 @@
     }
 
     /**
-     * Comparator that sorts topologies by priority and then by submission time.
-     * First sort by Topology Priority, if there is a tie for topology priority, topology uptime is used to sort.
+     * Comparator that sorts topologies by submission time.
      */
     private static class TopologyBySubmissionTimeComparator implements Comparator<TopologyDetails> {
 
diff --git a/storm-server/src/main/java/org/apache/storm/scheduler/resource/strategies/scheduling/BaseResourceAwareStrategy.java b/storm-server/src/main/java/org/apache/storm/scheduler/resource/strategies/scheduling/BaseResourceAwareStrategy.java
index 9cfaabc..6ab379f 100644
--- a/storm-server/src/main/java/org/apache/storm/scheduler/resource/strategies/scheduling/BaseResourceAwareStrategy.java
+++ b/storm-server/src/main/java/org/apache/storm/scheduler/resource/strategies/scheduling/BaseResourceAwareStrategy.java
@@ -129,7 +129,7 @@
         } else {
             String comp = td.getExecutorToComponent().get(exec);
             NormalizedResourceRequest requestedResources = td.getTotalResources(exec);
-            LOG.error("Not Enough Resources to schedule Task {} - {} {}", exec, comp, requestedResources);
+            LOG.warn("Not Enough Resources to schedule Task {} - {} {}", exec, comp, requestedResources);
             return false;
         }
     }
diff --git a/storm-server/src/main/java/org/apache/storm/scheduler/resource/strategies/scheduling/ConstraintSolverStrategy.java b/storm-server/src/main/java/org/apache/storm/scheduler/resource/strategies/scheduling/ConstraintSolverStrategy.java
index cacba7a..81994ba 100644
--- a/storm-server/src/main/java/org/apache/storm/scheduler/resource/strategies/scheduling/ConstraintSolverStrategy.java
+++ b/storm-server/src/main/java/org/apache/storm/scheduler/resource/strategies/scheduling/ConstraintSolverStrategy.java
@@ -320,51 +320,89 @@
         return GenericResourceAwareStrategy.sortObjectResourcesImpl(allResources, exec, topologyDetails, existingScheduleFunc);
     }
 
-    // Backtracking algorithm does not take into account the ordering of executors in worker to reduce traversal space
+    /**
+     * Try to schedule until successful or until limits (backtrack count or time) are exceeded.
+     *
+     * @param state the search state for the executor assignment.
+     * @return SolverResult with the success attribute indicating whether ALL executors were assigned.
+     */
     @VisibleForTesting
     protected SolverResult backtrackSearch(SearcherState state) {
-        state.incStatesSearched();
-        if (state.areSearchLimitsExceeded()) {
-            LOG.warn("Limits Exceeded");
-            return new SolverResult(state, false);
+        long         startTimeMilli     = System.currentTimeMillis();
+        int          maxExecCnt         = state.getExecSize();
+
+        // following three are state information at each "execIndex" level
+        int[]        progressIdxForExec = new int[maxExecCnt];
+        RasNode[]    nodeForExec        = new RasNode[maxExecCnt];
+        WorkerSlot[] workerSlotForExec  = new WorkerSlot[maxExecCnt];
+
+        for (int i = 0; i < maxExecCnt; i++) {
+            progressIdxForExec[i] = -1;
         }
+        LOG.info("backtrackSearch: will assign {} executors", maxExecCnt);
 
-        if (Thread.currentThread().isInterrupted()) {
-            return new SolverResult(state, false);
-        }
+        OUTERMOST_LOOP:
+        for (int loopCnt = 0; true; loopCnt++) {
+            LOG.debug("backtrackSearch: loopCnt = {}, state.execIndex = {}", loopCnt, state.execIndex);
+            if (state.areSearchLimitsExceeded()) {
+                LOG.warn("backtrackSearch: Search limits exceeded");
+                return new SolverResult(state, false);
+            }
 
-        ExecutorDetails exec = state.currentExec();
-        Iterable<String> sortedNodes = sortAllNodes(state.td, exec, favoredNodeIds, unFavoredNodeIds);
+            if (Thread.currentThread().isInterrupted()) {
+                return new SolverResult(state, false);
+            }
 
-        for (String nodeId: sortedNodes) {
-            RasNode node = nodes.get(nodeId);
-            for (WorkerSlot workerSlot : node.getSlotsAvailableToScheduleOn()) {
-                if (isExecAssignmentToWorkerValid(workerSlot, state)) {
+            int execIndex = state.execIndex;
+
+            ExecutorDetails exec = state.currentExec();
+            Iterable<String> sortedNodesIter = sortAllNodes(state.td, exec, favoredNodeIds, unFavoredNodeIds);
+
+            int progressIdx = -1;
+            for (String nodeId : sortedNodesIter) {
+                RasNode node = nodes.get(nodeId);
+                for (WorkerSlot workerSlot : node.getSlotsAvailableToScheduleOn()) {
+                    progressIdx++;
+                    if (progressIdx <= progressIdxForExec[execIndex]) {
+                        continue;
+                    }
+                    progressIdxForExec[execIndex]++;
+                    LOG.debug("backtrackSearch: loopCnt = {}, state.execIndex = {}, node/slot-ordinal = {}, nodeId = {}",
+                        loopCnt, execIndex, progressIdx, nodeId);
+
+                    if (!isExecAssignmentToWorkerValid(workerSlot, state)) {
+                        continue;
+                    }
+
+                    state.incStatesSearched();
                     state.tryToSchedule(execToComp, node, workerSlot);
-
                     if (state.areAllExecsScheduled()) {
                         //Everything is scheduled correctly, so no need to search any more.
+                        LOG.info("backtrackSearch: AllExecsScheduled at loopCnt = {} in {} milliseconds, elapsed time in state = {}",
+                            loopCnt, System.currentTimeMillis() - startTimeMilli, Time.currentTimeMillis() - state.startTimeMillis);
                         return new SolverResult(state, true);
                     }
-
-                    SolverResult results = backtrackSearch(state.nextExecutor());
-                    if (results.success) {
-                        //We found a good result we are done.
-                        return results;
-                    }
-
-                    if (state.areSearchLimitsExceeded()) {
-                        //No need to search more it is not going to help.
-                        return new SolverResult(state, false);
-                    }
-
-                    //backtracking (If we ever get here there really isn't a lot of hope that we will find a scheduling)
-                    state.backtrack(execToComp, node, workerSlot);
+                    state = state.nextExecutor();
+                    nodeForExec[execIndex] = node;
+                    workerSlotForExec[execIndex] = workerSlot;
+                    LOG.debug("backtrackSearch: Assigned execId={} to node={}, node/slot-ordinal={} at loopCnt={}",
+                        execIndex, nodeId, progressIdx, loopCnt);
+                    continue OUTERMOST_LOOP;
                 }
             }
+            // if here, then the executor was not assigned; backtrack.
+            LOG.debug("backtrackSearch: Failed to schedule execId = {} at loopCnt = {}", execIndex, loopCnt);
+            if (execIndex == 0) {
+                break;
+            } else {
+                state.backtrack(execToComp, nodeForExec[execIndex - 1], workerSlotForExec[execIndex - 1]);
+                progressIdxForExec[execIndex] = -1;
+            }
         }
-        //Tried all of the slots and none of them worked.
-        return new SolverResult(state, false);
+        boolean success = state.areAllExecsScheduled();
+        LOG.info("backtrackSearch: Scheduled={} in {} milliseconds, elapsed time in state = {}",
+            success, System.currentTimeMillis() - startTimeMilli, Time.currentTimeMillis() - state.startTimeMillis);
+        return new SolverResult(state, success);
     }
 
     /**
@@ -531,6 +569,10 @@
             return statesSearched;
         }
 
+        public int getExecSize() {
+            return execs.size();
+        }
+
         public boolean areSearchLimitsExceeded() {
             return statesSearched > maxStatesSearched || Time.currentTimeMillis() > maxEndTimeMs;
         }
diff --git a/storm-server/src/test/java/org/apache/storm/daemon/supervisor/SlotTest.java b/storm-server/src/test/java/org/apache/storm/daemon/supervisor/SlotTest.java
index 9dfe2d7..1cfea8a 100644
--- a/storm-server/src/test/java/org/apache/storm/daemon/supervisor/SlotTest.java
+++ b/storm-server/src/test/java/org/apache/storm/daemon/supervisor/SlotTest.java
@@ -15,10 +15,13 @@
 import static org.hamcrest.CoreMatchers.is;
 import static org.hamcrest.CoreMatchers.nullValue;
 
+import com.google.common.collect.Maps;
 import java.util.ArrayList;
 import java.util.Collections;
+import java.util.HashMap;
 import java.util.HashSet;
 import java.util.List;
+import java.util.Map;
 import java.util.Set;
 import java.util.concurrent.CompletableFuture;
 import java.util.concurrent.TimeUnit;
@@ -77,6 +80,14 @@
         return resources;
     }
 
+    static WorkerResources mkWorkerResources(Double cpu, Double mem_on_heap, Double mem_off_heap, Map<String, Double> resources) {
+        WorkerResources workerResources = mkWorkerResources(cpu, mem_on_heap, mem_off_heap);
+        if (resources != null) {
+            workerResources.set_resources(resources);
+        }
+        return workerResources;
+    }
+
     static LSWorkerHeartbeat mkWorkerHB(String id, int port, List<ExecutorInfo> exec, Integer timeSecs) {
         LSWorkerHeartbeat ret = new LSWorkerHeartbeat();
         ret.set_topology_id(id);
@@ -108,12 +119,64 @@
     }
 
     @Test
-    public void testEquivilant() {
+    public void testWorkerResourceEquality() {
+        WorkerResources resourcesRNull = mkWorkerResources(100.0, 100.0, 100.0, null);
+        WorkerResources resourcesREmpty = mkWorkerResources(100.0, 100.0, 100.0, Maps.newHashMap());
+        assertTrue(Slot.customWorkerResourcesEquality(resourcesRNull, resourcesREmpty));
+
+        Map<String, Double> resources = new HashMap<>();
+        resources.put("network.resource.units", 0.0);
+        WorkerResources resourcesRNetwork = mkWorkerResources(100.0, 100.0, 100.0, resources);
+        assertTrue(Slot.customWorkerResourcesEquality(resourcesREmpty, resourcesRNetwork));
+
+
+        Map<String, Double> resourcesNetwork = new HashMap<>();
+        resourcesNetwork.put("network.resource.units", 50.0);
+        WorkerResources resourcesRNetworkNonZero = mkWorkerResources(100.0, 100.0, 100.0, resourcesNetwork);
+        assertFalse(Slot.customWorkerResourcesEquality(resourcesREmpty, resourcesRNetworkNonZero));
+
+        Map<String, Double> resourcesNetworkOne = new HashMap<>();
+        resourcesNetworkOne.put("network.resource.units", 50.0);
+        WorkerResources resourcesRNetworkOne = mkWorkerResources(100.0, 100.0, 100.0, resourcesNetworkOne);
+        assertTrue(Slot.customWorkerResourcesEquality(resourcesRNetworkOne, resourcesRNetworkNonZero));
+
+        Map<String, Double> resourcesNetworkTwo = new HashMap<>();
+        resourcesNetworkTwo.put("network.resource.units", 100.0);
+        WorkerResources resourcesRNetworkTwo = mkWorkerResources(100.0, 100.0, 100.0, resourcesNetworkTwo);
+        assertFalse(Slot.customWorkerResourcesEquality(resourcesRNetworkOne, resourcesRNetworkTwo));
+
+        WorkerResources resourcesCpuNull = mkWorkerResources(null, 100.0, 100.0);
+        WorkerResources resourcesCPUZero = mkWorkerResources(0.0, 100.0, 100.0);
+        assertTrue(Slot.customWorkerResourcesEquality(resourcesCpuNull, resourcesCPUZero));
+
+        WorkerResources resourcesOnHeapMemNull = mkWorkerResources(100.0, null, 100.0);
+        WorkerResources resourcesOnHeapMemZero = mkWorkerResources(100.0, 0.0, 100.0);
+        assertTrue(Slot.customWorkerResourcesEquality(resourcesOnHeapMemNull, resourcesOnHeapMemZero));
+
+        WorkerResources resourcesOffHeapMemNull = mkWorkerResources(100.0, 100.0, null);
+        WorkerResources resourcesOffHeapMemZero = mkWorkerResources(100.0, 100.0, 0.0);
+        assertTrue(Slot.customWorkerResourcesEquality(resourcesOffHeapMemNull, resourcesOffHeapMemZero));
+        assertTrue(Slot.customWorkerResourcesEquality(resourcesOffHeapMemNull, resourcesOffHeapMemZero));
+
+    }
+
+    @Test
+    public void testEquivalent() {
         LocalAssignment a = mkLocalAssignment("A", mkExecutorInfoList(1, 2, 3, 4, 5), mkWorkerResources(100.0, 100.0, 100.0));
         LocalAssignment aResized = mkLocalAssignment("A", mkExecutorInfoList(1, 2, 3, 4, 5), mkWorkerResources(100.0, 200.0, 100.0));
         LocalAssignment b = mkLocalAssignment("B", mkExecutorInfoList(1, 2, 3, 4, 5, 6), mkWorkerResources(100.0, 100.0, 100.0));
         LocalAssignment bReordered = mkLocalAssignment("B", mkExecutorInfoList(6, 5, 4, 3, 2, 1), mkWorkerResources(100.0, 100.0, 100.0));
 
+        LocalAssignment c = mkLocalAssignment("C", mkExecutorInfoList(188, 261), mkWorkerResources(400.0, 10000.0, 0.0));
+
+        WorkerResources workerResources = mkWorkerResources(400.0, 10000.0, 0.0);
+        Map<String, Double> additionalResources = workerResources.get_resources();
+        if (additionalResources == null) additionalResources = new HashMap<>();
+        additionalResources.put("network.resource.units", 0.0);
+
+        workerResources.set_resources(additionalResources);
+        LocalAssignment cReordered = mkLocalAssignment("C", mkExecutorInfoList(188, 261), workerResources);
+
+        assertTrue(Slot.equivalent(c, cReordered));
         assertTrue(Slot.equivalent(null, null));
         assertTrue(Slot.equivalent(a, a));
         assertTrue(Slot.equivalent(b, bReordered));
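
The new `testWorkerResourceEquality` cases above pin down a normalization rule: a null resource map, an empty map, and a map whose entries are all 0.0 compare equal, while differing non-zero amounts do not. As an editorial sketch only (this is not Storm's `Slot.customWorkerResourcesEquality`, which also covers the CPU and memory fields; names here are illustrative), the map-comparison part of that rule might look like:

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch: compare resource maps by first dropping null/zero entries,
// so null, empty, and all-zero maps are all equivalent.
public class ResourceEqualitySketch {
    static Map<String, Double> normalize(Map<String, Double> r) {
        Map<String, Double> out = new HashMap<>();
        if (r != null) {
            for (Map.Entry<String, Double> e : r.entrySet()) {
                Double v = e.getValue();
                if (v != null && v != 0.0) {
                    out.put(e.getKey(), v); // keep only meaningful amounts
                }
            }
        }
        return out;
    }

    static boolean resourcesEqual(Map<String, Double> a, Map<String, Double> b) {
        return normalize(a).equals(normalize(b));
    }

    public static void main(String[] args) {
        System.out.println(resourcesEqual(null, new HashMap<>()));                   // true
        Map<String, Double> zeroNetwork = new HashMap<>();
        zeroNetwork.put("network.resource.units", 0.0);
        System.out.println(resourcesEqual(zeroNetwork, new HashMap<>()));            // true
        Map<String, Double> fifty = new HashMap<>();
        fifty.put("network.resource.units", 50.0);
        Map<String, Double> hundred = new HashMap<>();
        hundred.put("network.resource.units", 100.0);
        System.out.println(resourcesEqual(fifty, hundred));                          // false
    }
}
```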
diff --git a/storm-server/src/test/java/org/apache/storm/localizer/AsyncLocalizerTest.java b/storm-server/src/test/java/org/apache/storm/localizer/AsyncLocalizerTest.java
index 550c808..f97c868 100644
--- a/storm-server/src/test/java/org/apache/storm/localizer/AsyncLocalizerTest.java
+++ b/storm-server/src/test/java/org/apache/storm/localizer/AsyncLocalizerTest.java
@@ -18,7 +18,6 @@
 import java.io.FileInputStream;
 import java.io.IOException;
 import java.io.InputStream;
-import java.io.OutputStream;
 import java.nio.file.Files;
 import java.nio.file.Path;
 import java.nio.file.Paths;
@@ -28,13 +27,12 @@
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
-import java.util.UUID;
 import java.util.concurrent.CompletableFuture;
 import java.util.concurrent.ConcurrentHashMap;
 import java.util.concurrent.ConcurrentMap;
 import java.util.concurrent.Future;
 import java.util.concurrent.TimeUnit;
-import org.apache.commons.io.FileUtils;
+
 import org.apache.commons.io.IOUtils;
 import org.apache.storm.Config;
 import org.apache.storm.DaemonConfig;
@@ -53,13 +51,12 @@
 import org.apache.storm.generated.SettableBlobMeta;
 import org.apache.storm.generated.StormTopology;
 import org.apache.storm.security.auth.DefaultPrincipalToLocal;
+import org.apache.storm.testing.TmpPath;
 import org.apache.storm.utils.ConfigUtils;
 import org.apache.storm.utils.ReflectionUtils;
 import org.apache.storm.utils.ServerUtils;
 import org.apache.storm.utils.Time;
 import org.apache.storm.utils.Utils;
-import org.junit.After;
-import org.junit.Before;
 import org.junit.Test;
 import org.mockito.Mockito;
 import org.slf4j.Logger;
@@ -86,68 +83,69 @@
 
 public class AsyncLocalizerTest {
     private static final Logger LOG = LoggerFactory.getLogger(AsyncLocalizerTest.class);
+
     private final String user1 = "user1";
     private final String user2 = "user2";
     private final String user3 = "user3";
-    //From LocalizerTest
-    private File baseDir;
-    private ClientBlobStore mockblobstore = mock(ClientBlobStore.class);
 
-    private static String getTestLocalizerRoot() {
-        File f = new File("./target/" + Thread.currentThread().getStackTrace()[2].getMethodName() + "/localizer/");
-        f.deleteOnExit();
-        return f.getPath();
-    }
+    private ClientBlobStore mockBlobStore = mock(ClientBlobStore.class);
 
     @Test
     public void testRequestDownloadBaseTopologyBlobs() throws Exception {
-        final String topoId = "TOPO";
-        final String user = "user";
-        LocalAssignment la = new LocalAssignment();
-        la.set_topology_id(topoId);
-        la.set_owner(user);
-        ExecutorInfo ei = new ExecutorInfo();
-        ei.set_task_start(1);
-        ei.set_task_end(1);
-        la.add_to_executors(ei);
-        final int port = 8080;
-        final String stormLocal = "./target/DOWNLOAD-TEST/storm-local/";
-        ClientBlobStore blobStore = mock(ClientBlobStore.class);
-        Map<String, Object> conf = new HashMap<>();
-        conf.put(DaemonConfig.SUPERVISOR_BLOBSTORE, ClientBlobStore.class.getName());
-        conf.put(Config.STORM_PRINCIPAL_TO_LOCAL_PLUGIN, DefaultPrincipalToLocal.class.getName());
-        conf.put(Config.STORM_CLUSTER_MODE, "distributed");
-        conf.put(Config.STORM_LOCAL_DIR, stormLocal);
-        AdvancedFSOps ops = mock(AdvancedFSOps.class);
-        ReflectionUtils mockedRU = mock(ReflectionUtils.class);
-        ServerUtils mockedU = mock(ServerUtils.class);
+        ReflectionUtils mockedReflectionUtils = mock(ReflectionUtils.class);
+        ServerUtils mockedServerUtils = mock(ServerUtils.class);
 
-        AsyncLocalizer bl = spy(new AsyncLocalizer(conf, ops, getTestLocalizerRoot(), new StormMetricsRegistry()));
-        LocallyCachedTopologyBlob jarBlob = mock(LocallyCachedTopologyBlob.class);
-        doReturn(jarBlob).when(bl).getTopoJar(topoId, la.get_owner());
-        when(jarBlob.getLocalVersion()).thenReturn(-1L);
-        when(jarBlob.getRemoteVersion(any())).thenReturn(100L);
-        when(jarBlob.fetchUnzipToTemp(any())).thenReturn(100L);
+        ReflectionUtils previousReflectionUtils = ReflectionUtils.setInstance(mockedReflectionUtils);
+        ServerUtils previousServerUtils = ServerUtils.setInstance(mockedServerUtils);
 
-        LocallyCachedTopologyBlob codeBlob = mock(LocallyCachedTopologyBlob.class);
-        doReturn(codeBlob).when(bl).getTopoCode(topoId, la.get_owner());
-        when(codeBlob.getLocalVersion()).thenReturn(-1L);
-        when(codeBlob.getRemoteVersion(any())).thenReturn(200L);
-        when(codeBlob.fetchUnzipToTemp(any())).thenReturn(200L);
+        // The AsyncLocalizer cannot be created with try-with-resources here because it depends on a
+        // config map that takes the storm local dir, and that dir is declared in the try-with-resources.
+        AsyncLocalizer victim = null;
 
-        LocallyCachedTopologyBlob confBlob = mock(LocallyCachedTopologyBlob.class);
-        doReturn(confBlob).when(bl).getTopoConf(topoId, la.get_owner());
-        when(confBlob.getLocalVersion()).thenReturn(-1L);
-        when(confBlob.getRemoteVersion(any())).thenReturn(300L);
-        when(confBlob.fetchUnzipToTemp(any())).thenReturn(300L);
+        try (TmpPath stormRoot = new TmpPath(); TmpPath localizerRoot = new TmpPath()) {
 
-        ReflectionUtils origRU = ReflectionUtils.setInstance(mockedRU);
-        ServerUtils origUtils = ServerUtils.setInstance(mockedU);
-        try {
-            when(mockedRU.newInstanceImpl(ClientBlobStore.class)).thenReturn(blobStore);
+            Map<String, Object> conf = new HashMap<>();
+            conf.put(DaemonConfig.SUPERVISOR_BLOBSTORE, ClientBlobStore.class.getName());
+            conf.put(Config.STORM_PRINCIPAL_TO_LOCAL_PLUGIN, DefaultPrincipalToLocal.class.getName());
+            conf.put(Config.STORM_CLUSTER_MODE, "distributed");
+            conf.put(Config.STORM_LOCAL_DIR, stormRoot.getPath());
 
-            PortAndAssignment pna = new PortAndAssignmentImpl(port, la);
-            Future<Void> f = bl.requestDownloadBaseTopologyBlobs(pna, null);
+            AdvancedFSOps ops = AdvancedFSOps.make(conf);
+
+            victim = spy(new AsyncLocalizer(conf, ops, localizerRoot.getPath(), new StormMetricsRegistry()));
+
+            final String topoId = "TOPO";
+
+            final LocalAssignment localAssignment = constructLocalAssignment(topoId, "user");
+            final int port = 8080;
+
+            ClientBlobStore blobStore = mock(ClientBlobStore.class);
+
+            LocallyCachedTopologyBlob jarBlob = mock(LocallyCachedTopologyBlob.class);
+            doReturn(jarBlob).when(victim).getTopoJar(topoId, localAssignment.get_owner());
+            when(jarBlob.getLocalVersion()).thenReturn(-1L);
+            when(jarBlob.getRemoteVersion(any())).thenReturn(100L);
+            when(jarBlob.fetchUnzipToTemp(any())).thenReturn(100L);
+            when(jarBlob.isUsed()).thenReturn(true);
+
+            LocallyCachedTopologyBlob codeBlob = mock(LocallyCachedTopologyBlob.class);
+            doReturn(codeBlob).when(victim).getTopoCode(topoId, localAssignment.get_owner());
+            when(codeBlob.getLocalVersion()).thenReturn(-1L);
+            when(codeBlob.getRemoteVersion(any())).thenReturn(200L);
+            when(codeBlob.fetchUnzipToTemp(any())).thenReturn(200L);
+            when(codeBlob.isUsed()).thenReturn(true);
+
+            LocallyCachedTopologyBlob confBlob = mock(LocallyCachedTopologyBlob.class);
+            doReturn(confBlob).when(victim).getTopoConf(topoId, localAssignment.get_owner());
+            when(confBlob.getLocalVersion()).thenReturn(-1L);
+            when(confBlob.getRemoteVersion(any())).thenReturn(300L);
+            when(confBlob.fetchUnzipToTemp(any())).thenReturn(300L);
+            when(confBlob.isUsed()).thenReturn(true);
+
+            when(mockedReflectionUtils.newInstanceImpl(ClientBlobStore.class)).thenReturn(blobStore);
+
+            PortAndAssignment pna = new PortAndAssignmentImpl(port, localAssignment);
+            Future<Void> f = victim.requestDownloadBaseTopologyBlobs(pna, null);
             f.get(20, TimeUnit.SECONDS);
 
             verify(jarBlob).fetchUnzipToTemp(any());
@@ -161,92 +159,88 @@
             verify(confBlob).fetchUnzipToTemp(any());
             verify(confBlob).informReferencesAndCommitNewVersion(300L);
             verify(confBlob).cleanupOrphanedData();
+
         } finally {
-            bl.close();
-            ReflectionUtils.setInstance(origRU);
-            ServerUtils.setInstance(origUtils);
+            ReflectionUtils.setInstance(previousReflectionUtils);
+            ServerUtils.setInstance(previousServerUtils);
+
+            if (victim != null) {
+                victim.close();
+            }
         }
     }
 
     @Test
     public void testRequestDownloadTopologyBlobs() throws Exception {
-        final String topoId = "TOPO-12345";
-        final String user = "user";
-        LocalAssignment la = new LocalAssignment();
-        la.set_topology_id(topoId);
-        la.set_owner(user);
-        ExecutorInfo ei = new ExecutorInfo();
-        ei.set_task_start(1);
-        ei.set_task_end(1);
-        la.add_to_executors(ei);
-        final String topoName = "TOPO";
-        final int port = 8080;
-        final String simpleLocalName = "simple.txt";
-        final String simpleKey = "simple";
+        ConfigUtils mockedConfigUtils = mock(ConfigUtils.class);
+        ConfigUtils previousConfigUtils = ConfigUtils.setInstance(mockedConfigUtils);
 
-        final String stormLocal = "/tmp/storm-local/";
-        final File userDir = new File(stormLocal, user);
-        final String stormRoot = stormLocal + topoId + "/";
+        AsyncLocalizer victim = null;
 
-        final String localizerRoot = getTestLocalizerRoot();
-        final String simpleCurrentLocalFile = localizerRoot + "/usercache/" + user + "/filecache/files/simple.current";
+        try (TmpPath stormLocal = new TmpPath(); TmpPath localizerRoot = new TmpPath()) {
 
-        final StormTopology st = new StormTopology();
-        st.set_spouts(new HashMap<>());
-        st.set_bolts(new HashMap<>());
-        st.set_state_spouts(new HashMap<>());
+            Map<String, Object> conf = new HashMap<>();
+            conf.put(Config.STORM_LOCAL_DIR, stormLocal.getPath());
 
-        Map<String, Map<String, Object>> topoBlobMap = new HashMap<>();
-        Map<String, Object> simple = new HashMap<>();
-        simple.put("localname", simpleLocalName);
-        simple.put("uncompress", false);
-        topoBlobMap.put(simpleKey, simple);
+            AdvancedFSOps ops = AdvancedFSOps.make(conf);
+            StormMetricsRegistry metricsRegistry = new StormMetricsRegistry();
 
-        Map<String, Object> conf = new HashMap<>();
-        conf.put(Config.STORM_LOCAL_DIR, stormLocal);
-        AdvancedFSOps ops = mock(AdvancedFSOps.class);
-        ConfigUtils mockedCU = mock(ConfigUtils.class);
+            victim = spy(new AsyncLocalizer(conf, ops, localizerRoot.getPath(), metricsRegistry));
 
-        Map<String, Object> topoConf = new HashMap<>(conf);
-        topoConf.put(Config.TOPOLOGY_BLOBSTORE_MAP, topoBlobMap);
-        topoConf.put(Config.TOPOLOGY_NAME, topoName);
+            final String topoId = "TOPO-12345";
+            final String user = "user";
 
-        List<LocalizedResource> localizedList = new ArrayList<>();
-        StormMetricsRegistry metricsRegistry = new StormMetricsRegistry();
-        LocalizedResource simpleLocal = new LocalizedResource(simpleKey, Paths.get(localizerRoot), false, ops, conf, user, metricsRegistry);
-        localizedList.add(simpleLocal);
+            final Path userDir = Paths.get(stormLocal.getPath(), user);
+            final Path topologyDirRoot = Paths.get(stormLocal.getPath(), topoId);
 
-        AsyncLocalizer bl = spy(new AsyncLocalizer(conf, ops, localizerRoot, metricsRegistry));
-        ConfigUtils orig = ConfigUtils.setInstance(mockedCU);
-        try {
-            when(mockedCU.supervisorStormDistRootImpl(conf, topoId)).thenReturn(stormRoot);
-            when(mockedCU.readSupervisorStormConfImpl(conf, topoId)).thenReturn(topoConf);
-            when(mockedCU.readSupervisorTopologyImpl(conf, topoId, ops)).thenReturn(st);
+            final String simpleLocalName = "simple.txt";
+            final String simpleKey = "simple";
+            Map<String, Map<String, Object>> topoBlobMap = new HashMap<>();
+            Map<String, Object> simple = new HashMap<>();
+            simple.put("localname", simpleLocalName);
+            simple.put("uncompress", false);
+            topoBlobMap.put(simpleKey, simple);
+
+            final int port = 8080;
+
+            Map<String, Object> topoConf = new HashMap<>(conf);
+            topoConf.put(Config.TOPOLOGY_BLOBSTORE_MAP, topoBlobMap);
+            topoConf.put(Config.TOPOLOGY_NAME, "TOPO");
+
+            List<LocalizedResource> localizedList = new ArrayList<>();
+            LocalizedResource simpleLocal = new LocalizedResource(simpleKey, localizerRoot.getFile().toPath(), false, ops, conf, user, metricsRegistry);
+            localizedList.add(simpleLocal);
+
+            when(mockedConfigUtils.supervisorStormDistRootImpl(conf, topoId)).thenReturn(topologyDirRoot.toString());
+            when(mockedConfigUtils.readSupervisorStormConfImpl(conf, topoId)).thenReturn(topoConf);
+            when(mockedConfigUtils.readSupervisorTopologyImpl(conf, topoId, ops)).thenReturn(constructEmptyStormTopology());
 
             //Write the mocking backwards so the actual method is not called on the spy object
-            doReturn(CompletableFuture.supplyAsync(() -> null)).when(bl)
-                                                               .requestDownloadBaseTopologyBlobs(any(), eq(null));
-            doReturn(userDir).when(bl).getLocalUserFileCacheDir(user);
-            doReturn(localizedList).when(bl).getBlobs(any(List.class), any(), any());
+            doReturn(CompletableFuture.supplyAsync(() -> null)).when(victim)
+                    .requestDownloadBaseTopologyBlobs(any(), eq(null));
 
-            Future<Void> f = bl.requestDownloadTopologyBlobs(la, port, null);
+            Files.createDirectories(topologyDirRoot);
+
+            doReturn(userDir.toFile()).when(victim).getLocalUserFileCacheDir(user);
+            doReturn(localizedList).when(victim).getBlobs(any(List.class), any(), any());
+
+            Future<Void> f = victim.requestDownloadTopologyBlobs(constructLocalAssignment(topoId, user), port, null);
             f.get(20, TimeUnit.SECONDS);
             // We should be done now...
 
-            verify(bl).getLocalUserFileCacheDir(user);
+            verify(victim).getLocalUserFileCacheDir(user);
 
-            verify(ops).fileExists(userDir);
-            verify(ops).forceMkdir(userDir);
+            assertTrue(ops.fileExists(userDir));
 
-            verify(bl).getBlobs(any(List.class), any(), any());
+            verify(victim).getBlobs(any(List.class), any(), any());
 
-            verify(ops).createSymlink(new File(stormRoot, simpleLocalName), new File(simpleCurrentLocalFile));
+            // symlink was created
+            assertTrue(Files.isSymbolicLink(topologyDirRoot.resolve(simpleLocalName)));
+
         } finally {
-            try {
-                ConfigUtils.setInstance(orig);
-                bl.close();
-            } catch (Throwable e) {
-                LOG.error("ERROR trying to close an object", e);
+            ConfigUtils.setInstance(previousConfigUtils);
+            if (victim != null) {
+                victim.close();
             }
         }
     }
@@ -255,226 +249,220 @@
     @Test
     public void testRequestDownloadTopologyBlobsLocalMode() throws Exception {
         // tests download of topology blobs in local mode on a topology without resources folder
-        final String topoId = "TOPO-12345";
-        final String user = "user";
-        LocalAssignment la = new LocalAssignment();
-        la.set_topology_id(topoId);
-        la.set_owner(user);
-        ExecutorInfo ei = new ExecutorInfo();
-        ei.set_task_start(1);
-        ei.set_task_end(1);
-        la.add_to_executors(ei);
-        final String topoName = "TOPO";
-        final int port = 8080;
-        final String simpleLocalName = "simple.txt";
-        final String simpleKey = "simple";
 
-        final String stormLocal = "/tmp/storm-local/";
-        final File userDir = new File(stormLocal, user);
-        final String stormRoot = stormLocal + topoId + "/";
+        ConfigUtils mockedConfigUtils = mock(ConfigUtils.class);
+        ServerUtils mockedServerUtils = mock(ServerUtils.class);
 
-        final String localizerRoot = getTestLocalizerRoot();
+        ConfigUtils previousConfigUtils = ConfigUtils.setInstance(mockedConfigUtils);
+        ServerUtils previousServerUtils = ServerUtils.setInstance(mockedServerUtils);
 
-        final StormTopology st = new StormTopology();
-        st.set_spouts(new HashMap<>());
-        st.set_bolts(new HashMap<>());
-        st.set_state_spouts(new HashMap<>());
+        AsyncLocalizer victim = null;
 
-        Map<String, Map<String, Object>> topoBlobMap = new HashMap<>();
-        Map<String, Object> simple = new HashMap<>();
-        simple.put("localname", simpleLocalName);
-        simple.put("uncompress", false);
-        topoBlobMap.put(simpleKey, simple);
+        try (TmpPath stormLocal = new TmpPath(); TmpPath localizerRoot = new TmpPath()) {
 
-        Map<String, Object> conf = new HashMap<>();
-        conf.put(Config.STORM_LOCAL_DIR, stormLocal);
-        conf.put(Config.STORM_CLUSTER_MODE, "local");
-        AdvancedFSOps ops = mock(AdvancedFSOps.class);
-        ConfigUtils mockedCU = mock(ConfigUtils.class);
-        ServerUtils mockedSU = mock(ServerUtils.class);
+            Map<String, Object> conf = new HashMap<>();
+            conf.put(Config.STORM_LOCAL_DIR, stormLocal.getPath());
+            conf.put(Config.STORM_CLUSTER_MODE, "local");
 
-        Map<String, Object> topoConf = new HashMap<>(conf);
-        topoConf.put(Config.TOPOLOGY_BLOBSTORE_MAP, topoBlobMap);
-        topoConf.put(Config.TOPOLOGY_NAME, topoName);
+            StormMetricsRegistry metricsRegistry = new StormMetricsRegistry();
 
-        List<LocalizedResource> localizedList = new ArrayList<>();
-        StormMetricsRegistry metricsRegistry = new StormMetricsRegistry();
-        LocalizedResource simpleLocal = new LocalizedResource(simpleKey, Paths.get(localizerRoot), false, ops, conf, user, metricsRegistry);
-        localizedList.add(simpleLocal);
+            AdvancedFSOps ops = AdvancedFSOps.make(conf);
 
-        AsyncLocalizer bl = spy(new AsyncLocalizer(conf, ops, localizerRoot, metricsRegistry));
-        ConfigUtils orig = ConfigUtils.setInstance(mockedCU);
-        ServerUtils origSU = ServerUtils.setInstance(mockedSU);
+            victim = spy(new AsyncLocalizer(conf, ops, localizerRoot.getPath(), metricsRegistry));
 
-        try {
-            when(mockedCU.supervisorStormDistRootImpl(conf, topoId)).thenReturn(stormRoot);
-            when(mockedCU.readSupervisorStormConfImpl(conf, topoId)).thenReturn(topoConf);
-            when(mockedCU.readSupervisorTopologyImpl(conf, topoId, ops)).thenReturn(st);
+            final String topoId = "TOPO-12345";
+            final String user = "user";
 
-            doReturn(mockblobstore).when(bl).getClientBlobStore();
-            doReturn(userDir).when(bl).getLocalUserFileCacheDir(user);
-            doReturn(localizedList).when(bl).getBlobs(any(List.class), any(), any());
-            doReturn(mock(OutputStream.class)).when(ops).getOutputStream(any());
+            final int port = 8080;
+
+            final Path userDir = Paths.get(stormLocal.getPath(), user);
+            final Path stormRoot = Paths.get(stormLocal.getPath(), topoId);
+
+            final String simpleLocalName = "simple.txt";
+            final String simpleKey = "simple";
+            Map<String, Map<String, Object>> topoBlobMap = new HashMap<>();
+            Map<String, Object> simple = new HashMap<>();
+            simple.put("localname", simpleLocalName);
+            simple.put("uncompress", false);
+            topoBlobMap.put(simpleKey, simple);
+
+
+            Map<String, Object> topoConf = new HashMap<>(conf);
+            topoConf.put(Config.TOPOLOGY_BLOBSTORE_MAP, topoBlobMap);
+            topoConf.put(Config.TOPOLOGY_NAME, "TOPO");
+
+            List<LocalizedResource> localizedList = new ArrayList<>();
+            LocalizedResource simpleLocal = new LocalizedResource(simpleKey, localizerRoot.getFile().toPath(), false, ops, conf, user, metricsRegistry);
+            localizedList.add(simpleLocal);
+
+            when(mockedConfigUtils.supervisorStormDistRootImpl(conf, topoId)).thenReturn(stormRoot.toString());
+            when(mockedConfigUtils.readSupervisorStormConfImpl(conf, topoId)).thenReturn(topoConf);
+            when(mockedConfigUtils.readSupervisorTopologyImpl(conf, topoId, ops)).thenReturn(constructEmptyStormTopology());
+
+            doReturn(mockBlobStore).when(victim).getClientBlobStore();
+            doReturn(userDir.toFile()).when(victim).getLocalUserFileCacheDir(user);
+            doReturn(localizedList).when(victim).getBlobs(any(List.class), any(), any());
 
             ReadableBlobMeta blobMeta = new ReadableBlobMeta();
             blobMeta.set_version(1);
-            doReturn(blobMeta).when(mockblobstore).getBlobMeta(any());
-            when(mockblobstore.getBlob(any())).thenAnswer(invocation -> new TestInputStreamWithMeta(LOCAL_MODE_JAR_VERSION));
+            doReturn(blobMeta).when(mockBlobStore).getBlobMeta(any());
+            when(mockBlobStore.getBlob(any())).thenAnswer(invocation -> new TestInputStreamWithMeta(LOCAL_MODE_JAR_VERSION));
 
-            Future<Void> f = bl.requestDownloadTopologyBlobs(la, port, null);
+            Future<Void> f = victim.requestDownloadTopologyBlobs(constructLocalAssignment(topoId, user), port, null);
             f.get(20, TimeUnit.SECONDS);
 
-            verify(bl).getLocalUserFileCacheDir(user);
+            verify(victim).getLocalUserFileCacheDir(user);
 
-            verify(ops).fileExists(userDir);
-            verify(ops).forceMkdir(userDir);
+            assertTrue(ops.fileExists(userDir));
 
-            verify(bl).getBlobs(any(List.class), any(), any());
+            verify(victim).getBlobs(any(List.class), any(), any());
 
-            Path extractionDir = Paths.get(stormRoot,
-                    LocallyCachedTopologyBlob.TopologyBlobType.TOPO_JAR.getTempExtractionDir(LOCAL_MODE_JAR_VERSION));
-
-            // make sure resources dir is created.
-            verify(ops).forceMkdir(extractionDir);
+            // make sure the resources directory is created after the blob version commit.
+            Path extractionDir = stormRoot.resolve(LocallyCachedTopologyBlob.TopologyBlobType.TOPO_JAR.getExtractionDir());
+            assertTrue(ops.fileExists(extractionDir));
 
         } finally {
-            try {
-                ConfigUtils.setInstance(orig);
-                ServerUtils.setInstance(origSU);
-                bl.close();
-            } catch (Throwable e) {
-                LOG.error("ERROR trying to close an object", e);
+            ConfigUtils.setInstance(previousConfigUtils);
+            ServerUtils.setInstance(previousServerUtils);
+
+            if (victim != null) {
+                victim.close();
             }
         }
     }
 
-    @Before
-    public void setUp() throws Exception {
-        baseDir = new File(System.getProperty("java.io.tmpdir") + "/blob-store-localizer-test-" + UUID.randomUUID());
-        if (!baseDir.mkdir()) {
-            throw new IOException("failed to create base directory");
-        }
+    private LocalAssignment constructLocalAssignment(String topoId, String owner) {
+        return constructLocalAssignment(topoId, owner,
+                Collections.singletonList(new ExecutorInfo(1, 1))
+        );
     }
 
-    @After
-    public void tearDown() throws Exception {
-        try {
-            FileUtils.deleteDirectory(baseDir);
-        } catch (IOException ignore) {
-        }
+    private LocalAssignment constructLocalAssignment(String topoId, String owner, List<ExecutorInfo> executorInfos) {
+        LocalAssignment assignment = new LocalAssignment(topoId, executorInfos);
+        assignment.set_owner(owner);
+        return assignment;
     }
 
-    ;
+    private StormTopology constructEmptyStormTopology() {
+        StormTopology topology = new StormTopology();
+        topology.set_spouts(new HashMap<>());
+        topology.set_bolts(new HashMap<>());
+        topology.set_state_spouts(new HashMap<>());
+        return topology;
+    }
 
-    protected String joinPath(String... pathList) {
+    private String joinPath(String... pathList) {
         return Joiner.on(File.separator).join(pathList);
     }
 
-    public String constructUserCacheDir(String base, String user) {
+    private String constructUserCacheDir(String base, String user) {
         return joinPath(base, USERCACHE, user);
     }
 
-    public String constructExpectedFilesDir(String base, String user) {
+    private String constructExpectedFilesDir(String base, String user) {
         return joinPath(constructUserCacheDir(base, user), LocalizedResource.FILECACHE, LocalizedResource.FILESDIR);
     }
 
-    public String constructExpectedArchivesDir(String base, String user) {
+    private String constructExpectedArchivesDir(String base, String user) {
         return joinPath(constructUserCacheDir(base, user), LocalizedResource.FILECACHE, LocalizedResource.ARCHIVESDIR);
     }
 
     @Test
     public void testDirPaths() throws Exception {
-        Map<String, Object> conf = new HashMap();
-        AsyncLocalizer localizer = new TestLocalizer(conf, baseDir.toString());
+        try (TmpPath tmp = new TmpPath()) {
+            Map<String, Object> conf = new HashMap();
+            AsyncLocalizer localizer = new TestLocalizer(conf, tmp.getPath());
 
-        String expectedDir = constructUserCacheDir(baseDir.toString(), user1);
-        assertEquals("get local user dir doesn't return right value",
-                     expectedDir, localizer.getLocalUserDir(user1).toString());
+            String expectedDir = constructUserCacheDir(tmp.getPath(), user1);
+            assertEquals("get local user dir doesn't return right value",
+                         expectedDir, localizer.getLocalUserDir(user1).toString());
 
-        String expectedFileDir = joinPath(expectedDir, LocalizedResource.FILECACHE);
-        assertEquals("get local user file dir doesn't return right value",
-                     expectedFileDir, localizer.getLocalUserFileCacheDir(user1).toString());
+            String expectedFileDir = joinPath(expectedDir, LocalizedResource.FILECACHE);
+            assertEquals("get local user file dir doesn't return right value",
+                         expectedFileDir, localizer.getLocalUserFileCacheDir(user1).toString());
+        }
     }
 
     @Test
     public void testReconstruct() throws Exception {
-        Map<String, Object> conf = new HashMap<>();
+        try (TmpPath tmp = new TmpPath()){
+            Map<String, Object> conf = new HashMap<>();
 
-        String expectedFileDir1 = constructExpectedFilesDir(baseDir.toString(), user1);
-        String expectedArchiveDir1 = constructExpectedArchivesDir(baseDir.toString(), user1);
-        String expectedFileDir2 = constructExpectedFilesDir(baseDir.toString(), user2);
-        String expectedArchiveDir2 = constructExpectedArchivesDir(baseDir.toString(), user2);
+            String expectedFileDir1 = constructExpectedFilesDir(tmp.getPath(), user1);
+            String expectedArchiveDir1 = constructExpectedArchivesDir(tmp.getPath(), user1);
+            String expectedFileDir2 = constructExpectedFilesDir(tmp.getPath(), user2);
+            String expectedArchiveDir2 = constructExpectedArchivesDir(tmp.getPath(), user2);
 
-        String key1 = "testfile1.txt";
-        String key2 = "testfile2.txt";
-        String key3 = "testfile3.txt";
-        String key4 = "testfile4.txt";
+            String key1 = "testfile1.txt";
+            String key2 = "testfile2.txt";
+            String key3 = "testfile3.txt";
+            String key4 = "testfile4.txt";
 
-        String archive1 = "archive1";
-        String archive2 = "archive2";
+            String archive1 = "archive1";
+            String archive2 = "archive2";
 
-        File user1file1 = new File(expectedFileDir1, key1 + LocalizedResource.CURRENT_BLOB_SUFFIX);
-        File user1file2 = new File(expectedFileDir1, key2 + LocalizedResource.CURRENT_BLOB_SUFFIX);
-        File user2file3 = new File(expectedFileDir2, key3 + LocalizedResource.CURRENT_BLOB_SUFFIX);
-        File user2file4 = new File(expectedFileDir2, key4 + LocalizedResource.CURRENT_BLOB_SUFFIX);
+            File user1file1 = new File(expectedFileDir1, key1 + LocalizedResource.CURRENT_BLOB_SUFFIX);
+            File user1file2 = new File(expectedFileDir1, key2 + LocalizedResource.CURRENT_BLOB_SUFFIX);
+            File user2file3 = new File(expectedFileDir2, key3 + LocalizedResource.CURRENT_BLOB_SUFFIX);
+            File user2file4 = new File(expectedFileDir2, key4 + LocalizedResource.CURRENT_BLOB_SUFFIX);
 
-        File user1archive1 = new File(expectedArchiveDir1, archive1 + LocalizedResource.CURRENT_BLOB_SUFFIX);
-        File user2archive2 = new File(expectedArchiveDir2, archive2 + LocalizedResource.CURRENT_BLOB_SUFFIX);
-        File user1archive1file = new File(user1archive1, "file1");
-        File user2archive2file = new File(user2archive2, "file2");
+            File user1archive1 = new File(expectedArchiveDir1, archive1 + LocalizedResource.CURRENT_BLOB_SUFFIX);
+            File user2archive2 = new File(expectedArchiveDir2, archive2 + LocalizedResource.CURRENT_BLOB_SUFFIX);
+            File user1archive1file = new File(user1archive1, "file1");
+            File user2archive2file = new File(user2archive2, "file2");
 
-        // setup some files/dirs to emulate supervisor restart
-        assertTrue("Failed setup filecache dir1", new File(expectedFileDir1).mkdirs());
-        assertTrue("Failed setup filecache dir2", new File(expectedFileDir2).mkdirs());
-        assertTrue("Failed setup file1", user1file1.createNewFile());
-        assertTrue("Failed setup file2", user1file2.createNewFile());
-        assertTrue("Failed setup file3", user2file3.createNewFile());
-        assertTrue("Failed setup file4", user2file4.createNewFile());
-        assertTrue("Failed setup archive dir1", user1archive1.mkdirs());
-        assertTrue("Failed setup archive dir2", user2archive2.mkdirs());
-        assertTrue("Failed setup file in archivedir1", user1archive1file.createNewFile());
-        assertTrue("Failed setup file in archivedir2", user2archive2file.createNewFile());
+            // setup some files/dirs to emulate supervisor restart
+            assertTrue("Failed setup filecache dir1", new File(expectedFileDir1).mkdirs());
+            assertTrue("Failed setup filecache dir2", new File(expectedFileDir2).mkdirs());
+            assertTrue("Failed setup file1", user1file1.createNewFile());
+            assertTrue("Failed setup file2", user1file2.createNewFile());
+            assertTrue("Failed setup file3", user2file3.createNewFile());
+            assertTrue("Failed setup file4", user2file4.createNewFile());
+            assertTrue("Failed setup archive dir1", user1archive1.mkdirs());
+            assertTrue("Failed setup archive dir2", user2archive2.mkdirs());
+            assertTrue("Failed setup file in archivedir1", user1archive1file.createNewFile());
+            assertTrue("Failed setup file in archivedir2", user2archive2file.createNewFile());
 
-        TestLocalizer localizer = new TestLocalizer(conf, baseDir.toString());
+            TestLocalizer localizer = new TestLocalizer(conf, tmp.getPath());
 
-        ArrayList<LocalResource> arrUser1Keys = new ArrayList<>();
-        arrUser1Keys.add(new LocalResource(key1, false, false));
-        arrUser1Keys.add(new LocalResource(archive1, true, false));
-        LocalAssignment topo1 = new LocalAssignment("topo1", Collections.emptyList());
-        topo1.set_owner(user1);
-        localizer.addReferences(arrUser1Keys, new PortAndAssignmentImpl(1, topo1), null);
+            ArrayList<LocalResource> arrUser1Keys = new ArrayList<>();
+            arrUser1Keys.add(new LocalResource(key1, false, false));
+            arrUser1Keys.add(new LocalResource(archive1, true, false));
+            LocalAssignment topo1 = constructLocalAssignment("topo1", user1, Collections.emptyList());
+            localizer.addReferences(arrUser1Keys, new PortAndAssignmentImpl(1, topo1), null);
 
-        ConcurrentMap<String, LocalizedResource> lrsrcFiles = localizer.getUserFiles().get(user1);
-        ConcurrentMap<String, LocalizedResource> lrsrcArchives = localizer.getUserArchives().get(user1);
-        assertEquals("local resource set size wrong", 3, lrsrcFiles.size() + lrsrcArchives.size());
-        LocalizedResource key1rsrc = lrsrcFiles.get(key1);
-        assertNotNull("Local resource doesn't exist but should", key1rsrc);
-        assertEquals("key doesn't match", key1, key1rsrc.getKey());
-        assertEquals("references doesn't match " + key1rsrc.getDependencies(), true, key1rsrc.isUsed());
-        LocalizedResource key2rsrc = lrsrcFiles.get(key2);
-        assertNotNull("Local resource doesn't exist but should", key2rsrc);
-        assertEquals("key doesn't match", key2, key2rsrc.getKey());
-        assertEquals("refcount doesn't match " + key2rsrc.getDependencies(), false, key2rsrc.isUsed());
-        LocalizedResource archive1rsrc = lrsrcArchives.get(archive1);
-        assertNotNull("Local resource doesn't exist but should", archive1rsrc);
-        assertEquals("key doesn't match", archive1, archive1rsrc.getKey());
-        assertEquals("refcount doesn't match " + archive1rsrc.getDependencies(), true, archive1rsrc.isUsed());
+            ConcurrentMap<String, LocalizedResource> lrsrcFiles = localizer.getUserFiles().get(user1);
+            ConcurrentMap<String, LocalizedResource> lrsrcArchives = localizer.getUserArchives().get(user1);
+            assertEquals("local resource set size wrong", 3, lrsrcFiles.size() + lrsrcArchives.size());
+            LocalizedResource key1rsrc = lrsrcFiles.get(key1);
+            assertNotNull("Local resource doesn't exist but should", key1rsrc);
+            assertEquals("key doesn't match", key1, key1rsrc.getKey());
+            assertEquals("references doesn't match " + key1rsrc.getDependencies(), true, key1rsrc.isUsed());
+            LocalizedResource key2rsrc = lrsrcFiles.get(key2);
+            assertNotNull("Local resource doesn't exist but should", key2rsrc);
+            assertEquals("key doesn't match", key2, key2rsrc.getKey());
+            assertEquals("refcount doesn't match " + key2rsrc.getDependencies(), false, key2rsrc.isUsed());
+            LocalizedResource archive1rsrc = lrsrcArchives.get(archive1);
+            assertNotNull("Local resource doesn't exist but should", archive1rsrc);
+            assertEquals("key doesn't match", archive1, archive1rsrc.getKey());
+            assertEquals("refcount doesn't match " + archive1rsrc.getDependencies(), true, archive1rsrc.isUsed());
 
-        ConcurrentMap<String, LocalizedResource> lrsrcFiles2 = localizer.getUserFiles().get(user2);
-        ConcurrentMap<String, LocalizedResource> lrsrcArchives2 = localizer.getUserArchives().get(user2);
-        assertEquals("local resource set size wrong", 3, lrsrcFiles2.size() + lrsrcArchives2.size());
-        LocalizedResource key3rsrc = lrsrcFiles2.get(key3);
-        assertNotNull("Local resource doesn't exist but should", key3rsrc);
-        assertEquals("key doesn't match", key3, key3rsrc.getKey());
-        assertEquals("refcount doesn't match " + key3rsrc.getDependencies(), false, key3rsrc.isUsed());
-        LocalizedResource key4rsrc = lrsrcFiles2.get(key4);
-        assertNotNull("Local resource doesn't exist but should", key4rsrc);
-        assertEquals("key doesn't match", key4, key4rsrc.getKey());
-        assertEquals("refcount doesn't match " + key4rsrc.getDependencies(), false, key4rsrc.isUsed());
-        LocalizedResource archive2rsrc = lrsrcArchives2.get(archive2);
-        assertNotNull("Local resource doesn't exist but should", archive2rsrc);
-        assertEquals("key doesn't match", archive2, archive2rsrc.getKey());
-        assertEquals("refcount doesn't match " + archive2rsrc.getDependencies(), false, archive2rsrc.isUsed());
+            ConcurrentMap<String, LocalizedResource> lrsrcFiles2 = localizer.getUserFiles().get(user2);
+            ConcurrentMap<String, LocalizedResource> lrsrcArchives2 = localizer.getUserArchives().get(user2);
+            assertEquals("local resource set size wrong", 3, lrsrcFiles2.size() + lrsrcArchives2.size());
+            LocalizedResource key3rsrc = lrsrcFiles2.get(key3);
+            assertNotNull("Local resource doesn't exist but should", key3rsrc);
+            assertEquals("key doesn't match", key3, key3rsrc.getKey());
+            assertEquals("refcount doesn't match " + key3rsrc.getDependencies(), false, key3rsrc.isUsed());
+            LocalizedResource key4rsrc = lrsrcFiles2.get(key4);
+            assertNotNull("Local resource doesn't exist but should", key4rsrc);
+            assertEquals("key doesn't match", key4, key4rsrc.getKey());
+            assertEquals("refcount doesn't match " + key4rsrc.getDependencies(), false, key4rsrc.isUsed());
+            LocalizedResource archive2rsrc = lrsrcArchives2.get(archive2);
+            assertNotNull("Local resource doesn't exist but should", archive2rsrc);
+            assertEquals("key doesn't match", archive2, archive2rsrc.getKey());
+            assertEquals("refcount doesn't match " + archive2rsrc.getDependencies(), false, archive2rsrc.isUsed());
+        }
     }
 
     @Test
@@ -513,7 +501,7 @@
             // Windows should set this to false cause symlink in compressed file doesn't work properly.
             supportSymlinks = false;
         }
-        try (Time.SimulatedTime st = new Time.SimulatedTime()) {
+        try (Time.SimulatedTime st = new Time.SimulatedTime(); TmpPath tmp = new TmpPath()) {
 
             Map<String, Object> conf = new HashMap<>();
             // set clean time really high so doesn't kick in
@@ -522,16 +510,16 @@
             String key1 = archiveFile.getName();
             String topo1 = "topo1";
             LOG.info("About to create new AsyncLocalizer...");
-            TestLocalizer localizer = new TestLocalizer(conf, baseDir.toString());
+            TestLocalizer localizer = new TestLocalizer(conf, tmp.getPath());
             // set really small so will do cleanup
             localizer.setTargetCacheSize(1);
             LOG.info("created AsyncLocalizer...");
 
             ReadableBlobMeta rbm = new ReadableBlobMeta();
             rbm.set_settable(new SettableBlobMeta(WORLD_EVERYTHING));
-            when(mockblobstore.getBlobMeta(key1)).thenReturn(rbm);
+            when(mockBlobStore.getBlobMeta(key1)).thenReturn(rbm);
 
-            when(mockblobstore.getBlob(key1)).thenReturn(new TestInputStreamWithMeta(new
+            when(mockBlobStore.getBlob(key1)).thenReturn(new TestInputStreamWithMeta(new
                                                                                          FileInputStream(archiveFile.getAbsolutePath()),
                                                                                      0, archiveFile.length()));
 
@@ -539,15 +527,14 @@
             Time.advanceTime(10);
             File user1Dir = localizer.getLocalUserFileCacheDir(user1);
             assertTrue("failed to create user dir", user1Dir.mkdirs());
-            LocalAssignment topo1Assignment = new LocalAssignment(topo1, Collections.emptyList());
-            topo1Assignment.set_owner(user1);
+            LocalAssignment topo1Assignment = constructLocalAssignment(topo1, user1, Collections.emptyList());
             PortAndAssignment topo1Pna = new PortAndAssignmentImpl(1, topo1Assignment);
             LocalizedResource lrsrc = localizer.getBlob(new LocalResource(key1, true, false), topo1Pna, null);
             Time.advanceTime(10);
             long timeAfter = Time.currentTimeMillis();
             Time.advanceTime(10);
 
-            String expectedUserDir = joinPath(baseDir.toString(), USERCACHE, user1);
+            String expectedUserDir = joinPath(tmp.getPath(), USERCACHE, user1);
             String expectedFileDir = joinPath(expectedUserDir, LocalizedResource.FILECACHE, LocalizedResource.ARCHIVESDIR);
             assertTrue("user filecache dir not created", new File(expectedFileDir).exists());
             File keyFile = new File(expectedFileDir, key1 + ".0");
@@ -602,36 +589,35 @@
 
     @Test
     public void testBasic() throws Exception {
-        try (Time.SimulatedTime st = new Time.SimulatedTime()) {
+        try (Time.SimulatedTime st = new Time.SimulatedTime(); TmpPath tmp = new TmpPath()) {
             Map<String, Object> conf = new HashMap();
             // set clean time really high so doesn't kick in
             conf.put(DaemonConfig.SUPERVISOR_LOCALIZER_CACHE_CLEANUP_INTERVAL_MS, 60 * 60 * 1000);
 
             String key1 = "key1";
             String topo1 = "topo1";
-            TestLocalizer localizer = new TestLocalizer(conf, baseDir.toString());
+            TestLocalizer localizer = new TestLocalizer(conf, tmp.getPath());
             // set really small so will do cleanup
             localizer.setTargetCacheSize(1);
 
             ReadableBlobMeta rbm = new ReadableBlobMeta();
             rbm.set_settable(new SettableBlobMeta(WORLD_EVERYTHING));
-            when(mockblobstore.getBlobMeta(key1)).thenReturn(rbm);
+            when(mockBlobStore.getBlobMeta(key1)).thenReturn(rbm);
 
-            when(mockblobstore.getBlob(key1)).thenReturn(new TestInputStreamWithMeta(1));
+            when(mockBlobStore.getBlob(key1)).thenReturn(new TestInputStreamWithMeta(1));
 
             long timeBefore = Time.currentTimeMillis();
             Time.advanceTime(10);
             File user1Dir = localizer.getLocalUserFileCacheDir(user1);
             assertTrue("failed to create user dir", user1Dir.mkdirs());
             Time.advanceTime(10);
-            LocalAssignment topo1Assignment = new LocalAssignment(topo1, Collections.emptyList());
-            topo1Assignment.set_owner(user1);
+            LocalAssignment topo1Assignment = constructLocalAssignment(topo1, user1, Collections.emptyList());
             PortAndAssignment topo1Pna = new PortAndAssignmentImpl(1, topo1Assignment);
             LocalizedResource lrsrc = localizer.getBlob(new LocalResource(key1, false, false), topo1Pna, null);
             long timeAfter = Time.currentTimeMillis();
             Time.advanceTime(10);
 
-            String expectedUserDir = joinPath(baseDir.toString(), USERCACHE, user1);
+            String expectedUserDir = joinPath(tmp.getPath(), USERCACHE, user1);
             String expectedFileDir = joinPath(expectedUserDir, LocalizedResource.FILECACHE, LocalizedResource.FILESDIR);
             assertTrue("user filecache dir not created", new File(expectedFileDir).exists());
             File keyFile = new File(expectedFileDir, key1 + ".current");
@@ -678,7 +664,7 @@
 
     @Test
     public void testMultipleKeysOneUser() throws Exception {
-        try (Time.SimulatedTime st = new Time.SimulatedTime()) {
+        try (Time.SimulatedTime st = new Time.SimulatedTime(); TmpPath tmp = new TmpPath()) {
             Map<String, Object> conf = new HashMap<>();
             // set clean time really high so doesn't kick in
             conf.put(DaemonConfig.SUPERVISOR_LOCALIZER_CACHE_CLEANUP_INTERVAL_MS, 60 * 60 * 1_000);
@@ -687,32 +673,31 @@
             String topo1 = "topo1";
             String key2 = "key2";
             String key3 = "key3";
-            TestLocalizer localizer = new TestLocalizer(conf, baseDir.toString());
+            TestLocalizer localizer = new TestLocalizer(conf, tmp.getPath());
             // set to keep 2 blobs (each of size 34)
             localizer.setTargetCacheSize(68);
 
             ReadableBlobMeta rbm = new ReadableBlobMeta();
             rbm.set_settable(new SettableBlobMeta(WORLD_EVERYTHING));
-            when(mockblobstore.getBlobMeta(anyString())).thenReturn(rbm);
-            when(mockblobstore.isRemoteBlobExists(anyString())).thenReturn(true);
-            when(mockblobstore.getBlob(key1)).thenReturn(new TestInputStreamWithMeta(0));
-            when(mockblobstore.getBlob(key2)).thenReturn(new TestInputStreamWithMeta(0));
-            when(mockblobstore.getBlob(key3)).thenReturn(new TestInputStreamWithMeta(0));
+            when(mockBlobStore.getBlobMeta(anyString())).thenReturn(rbm);
+            when(mockBlobStore.isRemoteBlobExists(anyString())).thenReturn(true);
+            when(mockBlobStore.getBlob(key1)).thenReturn(new TestInputStreamWithMeta(0));
+            when(mockBlobStore.getBlob(key2)).thenReturn(new TestInputStreamWithMeta(0));
+            when(mockBlobStore.getBlob(key3)).thenReturn(new TestInputStreamWithMeta(0));
 
             List<LocalResource> keys = Arrays.asList(new LocalResource(key1, false, false),
                     new LocalResource(key2, false, false), new LocalResource(key3, false, false));
             File user1Dir = localizer.getLocalUserFileCacheDir(user1);
             assertTrue("failed to create user dir", user1Dir.mkdirs());
 
-            LocalAssignment topo1Assignment = new LocalAssignment(topo1, Collections.emptyList());
-            topo1Assignment.set_owner(user1);
+            LocalAssignment topo1Assignment = constructLocalAssignment(topo1, user1, Collections.emptyList());
             PortAndAssignment topo1Pna = new PortAndAssignmentImpl(1, topo1Assignment);
             List<LocalizedResource> lrsrcs = localizer.getBlobs(keys, topo1Pna, null);
             LocalizedResource lrsrc = lrsrcs.get(0);
             LocalizedResource lrsrc2 = lrsrcs.get(1);
             LocalizedResource lrsrc3 = lrsrcs.get(2);
 
-            String expectedFileDir = joinPath(baseDir.toString(), USERCACHE, user1,
+            String expectedFileDir = joinPath(tmp.getPath(), USERCACHE, user1,
                                               LocalizedResource.FILECACHE, LocalizedResource.FILESDIR);
             assertTrue("user filecache dir not created", new File(expectedFileDir).exists());
             File keyFile = new File(expectedFileDir, key1 + LocalizedResource.CURRENT_BLOB_SUFFIX);
@@ -782,31 +767,32 @@
 
     @Test(expected = AuthorizationException.class)
     public void testFailAcls() throws Exception {
-        Map<String, Object> conf = new HashMap();
-        // set clean time really high so doesn't kick in
-        conf.put(DaemonConfig.SUPERVISOR_LOCALIZER_CACHE_CLEANUP_INTERVAL_MS, 60 * 60 * 1000);
-        // enable blobstore acl validation
-        conf.put(Config.STORM_BLOBSTORE_ACL_VALIDATION_ENABLED, true);
+        try (TmpPath tmp = new TmpPath()) {
+            Map<String, Object> conf = new HashMap<>();
+            // set clean time really high so doesn't kick in
+            conf.put(DaemonConfig.SUPERVISOR_LOCALIZER_CACHE_CLEANUP_INTERVAL_MS, 60 * 60 * 1000);
+            // enable blobstore acl validation
+            conf.put(Config.STORM_BLOBSTORE_ACL_VALIDATION_ENABLED, true);
 
-        String topo1 = "topo1";
-        String key1 = "key1";
-        TestLocalizer localizer = new TestLocalizer(conf, baseDir.toString());
+            String topo1 = "topo1";
+            String key1 = "key1";
+            TestLocalizer localizer = new TestLocalizer(conf, tmp.getPath());
 
-        ReadableBlobMeta rbm = new ReadableBlobMeta();
-        // set acl so user doesn't have read access
-        AccessControl acl = new AccessControl(AccessControlType.USER, BlobStoreAclHandler.ADMIN);
-        acl.set_name(user1);
-        rbm.set_settable(new SettableBlobMeta(Arrays.asList(acl)));
-        when(mockblobstore.getBlobMeta(anyString())).thenReturn(rbm);
-        when(mockblobstore.getBlob(key1)).thenReturn(new TestInputStreamWithMeta(1));
-        File user1Dir = localizer.getLocalUserFileCacheDir(user1);
-        assertTrue("failed to create user dir", user1Dir.mkdirs());
+            ReadableBlobMeta rbm = new ReadableBlobMeta();
+            // set acl so user doesn't have read access
+            AccessControl acl = new AccessControl(AccessControlType.USER, BlobStoreAclHandler.ADMIN);
+            acl.set_name(user1);
+            rbm.set_settable(new SettableBlobMeta(Arrays.asList(acl)));
+            when(mockBlobStore.getBlobMeta(anyString())).thenReturn(rbm);
+            when(mockBlobStore.getBlob(key1)).thenReturn(new TestInputStreamWithMeta(1));
+            File user1Dir = localizer.getLocalUserFileCacheDir(user1);
+            assertTrue("failed to create user dir", user1Dir.mkdirs());
 
-        LocalAssignment topo1Assignment = new LocalAssignment(topo1, Collections.emptyList());
-        topo1Assignment.set_owner(user1);
-        PortAndAssignment topo1Pna = new PortAndAssignmentImpl(1, topo1Assignment);
-        // This should throw AuthorizationException because auth failed
-        localizer.getBlob(new LocalResource(key1, false, false), topo1Pna, null);
+            LocalAssignment topo1Assignment = constructLocalAssignment(topo1, user1, Collections.emptyList());
+            PortAndAssignment topo1Pna = new PortAndAssignmentImpl(1, topo1Assignment);
+            // This should throw AuthorizationException because auth failed
+            localizer.getBlob(new LocalResource(key1, false, false), topo1Pna, null);
+        }
     }
 
     @Test(expected = KeyNotFoundException.class)
@@ -824,163 +810,162 @@
 
     @Test
     public void testMultipleUsers() throws Exception {
-        Map<String, Object> conf = new HashMap<>();
-        // set clean time really high so doesn't kick in
-        conf.put(DaemonConfig.SUPERVISOR_LOCALIZER_CACHE_CLEANUP_INTERVAL_MS, 60 * 60 * 1000);
+        try (TmpPath tmp = new TmpPath()) {
+            Map<String, Object> conf = new HashMap<>();
+            // set clean time really high so doesn't kick in
+            conf.put(DaemonConfig.SUPERVISOR_LOCALIZER_CACHE_CLEANUP_INTERVAL_MS, 60 * 60 * 1000);
 
-        String topo1 = "topo1";
-        String topo2 = "topo2";
-        String topo3 = "topo3";
-        String key1 = "key1";
-        String key2 = "key2";
-        String key3 = "key3";
-        TestLocalizer localizer = new TestLocalizer(conf, baseDir.toString());
-        // set to keep 2 blobs (each of size 34)
-        localizer.setTargetCacheSize(68);
+            String topo1 = "topo1";
+            String topo2 = "topo2";
+            String topo3 = "topo3";
+            String key1 = "key1";
+            String key2 = "key2";
+            String key3 = "key3";
+            TestLocalizer localizer = new TestLocalizer(conf, tmp.getPath());
+            // set to keep 2 blobs (each of size 34)
+            localizer.setTargetCacheSize(68);
 
-        ReadableBlobMeta rbm = new ReadableBlobMeta();
-        rbm.set_settable(new SettableBlobMeta(WORLD_EVERYTHING));
-        when(mockblobstore.getBlobMeta(anyString())).thenReturn(rbm);
-        //thenReturn always returns the same object, which is already consumed by the time User3 tries to getBlob!
-        when(mockblobstore.getBlob(key1)).thenAnswer((i) -> new TestInputStreamWithMeta(1));
-        when(mockblobstore.getBlob(key2)).thenReturn(new TestInputStreamWithMeta(1));
-        when(mockblobstore.getBlob(key3)).thenReturn(new TestInputStreamWithMeta(1));
+            ReadableBlobMeta rbm = new ReadableBlobMeta();
+            rbm.set_settable(new SettableBlobMeta(WORLD_EVERYTHING));
+            when(mockBlobStore.getBlobMeta(anyString())).thenReturn(rbm);
+            //thenReturn always returns the same object, which is already consumed by the time User3 tries to getBlob!
+            when(mockBlobStore.getBlob(key1)).thenAnswer((i) -> new TestInputStreamWithMeta(1));
+            when(mockBlobStore.getBlob(key2)).thenReturn(new TestInputStreamWithMeta(1));
+            when(mockBlobStore.getBlob(key3)).thenReturn(new TestInputStreamWithMeta(1));
 
-        File user1Dir = localizer.getLocalUserFileCacheDir(user1);
-        assertTrue("failed to create user dir", user1Dir.mkdirs());
-        File user2Dir = localizer.getLocalUserFileCacheDir(user2);
-        assertTrue("failed to create user dir", user2Dir.mkdirs());
-        File user3Dir = localizer.getLocalUserFileCacheDir(user3);
-        assertTrue("failed to create user dir", user3Dir.mkdirs());
+            File user1Dir = localizer.getLocalUserFileCacheDir(user1);
+            assertTrue("failed to create user dir", user1Dir.mkdirs());
+            File user2Dir = localizer.getLocalUserFileCacheDir(user2);
+            assertTrue("failed to create user dir", user2Dir.mkdirs());
+            File user3Dir = localizer.getLocalUserFileCacheDir(user3);
+            assertTrue("failed to create user dir", user3Dir.mkdirs());
 
-        LocalAssignment topo1Assignment = new LocalAssignment(topo1, Collections.emptyList());
-        topo1Assignment.set_owner(user1);
-        PortAndAssignment topo1Pna = new PortAndAssignmentImpl(1, topo1Assignment);
-        LocalizedResource lrsrc = localizer.getBlob(new LocalResource(key1, false, false), topo1Pna, null);
+            LocalAssignment topo1Assignment = constructLocalAssignment(topo1, user1, Collections.emptyList());
+            PortAndAssignment topo1Pna = new PortAndAssignmentImpl(1, topo1Assignment);
+            LocalizedResource lrsrc = localizer.getBlob(new LocalResource(key1, false, false), topo1Pna, null);
 
-        LocalAssignment topo2Assignment = new LocalAssignment(topo2, Collections.emptyList());
-        topo2Assignment.set_owner(user2);
-        PortAndAssignment topo2Pna = new PortAndAssignmentImpl(2, topo2Assignment);
-        LocalizedResource lrsrc2 = localizer.getBlob(new LocalResource(key2, false, false), topo2Pna, null);
+            LocalAssignment topo2Assignment = constructLocalAssignment(topo2, user2, Collections.emptyList());
+            PortAndAssignment topo2Pna = new PortAndAssignmentImpl(2, topo2Assignment);
+            LocalizedResource lrsrc2 = localizer.getBlob(new LocalResource(key2, false, false), topo2Pna, null);
 
-        LocalAssignment topo3Assignment = new LocalAssignment(topo3, Collections.emptyList());
-        topo3Assignment.set_owner(user3);
-        PortAndAssignment topo3Pna = new PortAndAssignmentImpl(3, topo3Assignment);
-        LocalizedResource lrsrc3 = localizer.getBlob(new LocalResource(key3, false, false), topo3Pna, null);
+            LocalAssignment topo3Assignment = constructLocalAssignment(topo3, user3, Collections.emptyList());
+            PortAndAssignment topo3Pna = new PortAndAssignmentImpl(3, topo3Assignment);
+            LocalizedResource lrsrc3 = localizer.getBlob(new LocalResource(key3, false, false), topo3Pna, null);
 
-        // make sure we support different user reading same blob
-        LocalizedResource lrsrc1_user3 = localizer.getBlob(new LocalResource(key1, false, false), topo3Pna, null);
+            // make sure we support different user reading same blob
+            LocalizedResource lrsrc1_user3 = localizer.getBlob(new LocalResource(key1, false, false), topo3Pna, null);
 
-        String expectedUserDir1 = joinPath(baseDir.toString(), USERCACHE, user1);
-        String expectedFileDirUser1 = joinPath(expectedUserDir1, LocalizedResource.FILECACHE, LocalizedResource.FILESDIR);
-        String expectedFileDirUser2 = joinPath(baseDir.toString(), USERCACHE, user2,
-                                               LocalizedResource.FILECACHE, LocalizedResource.FILESDIR);
-        String expectedFileDirUser3 = joinPath(baseDir.toString(), USERCACHE, user3,
-                                               LocalizedResource.FILECACHE, LocalizedResource.FILESDIR);
-        assertTrue("user filecache dir user1 not created", new File(expectedFileDirUser1).exists());
-        assertTrue("user filecache dir user2 not created", new File(expectedFileDirUser2).exists());
-        assertTrue("user filecache dir user3 not created", new File(expectedFileDirUser3).exists());
+            String expectedUserDir1 = joinPath(tmp.getPath(), USERCACHE, user1);
+            String expectedFileDirUser1 = joinPath(expectedUserDir1, LocalizedResource.FILECACHE, LocalizedResource.FILESDIR);
+            String expectedFileDirUser2 = joinPath(tmp.getPath(), USERCACHE, user2,
+                    LocalizedResource.FILECACHE, LocalizedResource.FILESDIR);
+            String expectedFileDirUser3 = joinPath(tmp.getPath(), USERCACHE, user3,
+                    LocalizedResource.FILECACHE, LocalizedResource.FILESDIR);
+            assertTrue("user filecache dir user1 not created", new File(expectedFileDirUser1).exists());
+            assertTrue("user filecache dir user2 not created", new File(expectedFileDirUser2).exists());
+            assertTrue("user filecache dir user3 not created", new File(expectedFileDirUser3).exists());
 
-        File keyFile = new File(expectedFileDirUser1, key1 + LocalizedResource.CURRENT_BLOB_SUFFIX);
-        File keyFile2 = new File(expectedFileDirUser2, key2 + LocalizedResource.CURRENT_BLOB_SUFFIX);
-        File keyFile3 = new File(expectedFileDirUser3, key3 + LocalizedResource.CURRENT_BLOB_SUFFIX);
-        File keyFile1user3 = new File(expectedFileDirUser3, key1 + LocalizedResource.CURRENT_BLOB_SUFFIX);
+            File keyFile = new File(expectedFileDirUser1, key1 + LocalizedResource.CURRENT_BLOB_SUFFIX);
+            File keyFile2 = new File(expectedFileDirUser2, key2 + LocalizedResource.CURRENT_BLOB_SUFFIX);
+            File keyFile3 = new File(expectedFileDirUser3, key3 + LocalizedResource.CURRENT_BLOB_SUFFIX);
+            File keyFile1user3 = new File(expectedFileDirUser3, key1 + LocalizedResource.CURRENT_BLOB_SUFFIX);
 
-        assertTrue("blob not created", keyFile.exists());
-        assertTrue("blob not created", keyFile2.exists());
-        assertTrue("blob not created", keyFile3.exists());
-        assertTrue("blob not created", keyFile1user3.exists());
+            assertTrue("blob not created", keyFile.exists());
+            assertTrue("blob not created", keyFile2.exists());
+            assertTrue("blob not created", keyFile3.exists());
+            assertTrue("blob not created", keyFile1user3.exists());
 
-        //Should assert file size
-        assertEquals("size doesn't match", 34, lrsrc.getSizeOnDisk());
-        assertEquals("size doesn't match", 34, lrsrc2.getSizeOnDisk());
-        assertEquals("size doesn't match", 34, lrsrc3.getSizeOnDisk());
-        //This was 0 byte in test
-        assertEquals("size doesn't match", 34, lrsrc1_user3.getSizeOnDisk());
+            //Should assert file size
+            assertEquals("size doesn't match", 34, lrsrc.getSizeOnDisk());
+            assertEquals("size doesn't match", 34, lrsrc2.getSizeOnDisk());
+            assertEquals("size doesn't match", 34, lrsrc3.getSizeOnDisk());
+            //This was 0 byte in test
+            assertEquals("size doesn't match", 34, lrsrc1_user3.getSizeOnDisk());
 
-        ConcurrentMap<String, LocalizedResource> lrsrcSet = localizer.getUserFiles().get(user1);
-        assertEquals("local resource set size wrong", 1, lrsrcSet.size());
-        ConcurrentMap<String, LocalizedResource> lrsrcSet2 = localizer.getUserFiles().get(user2);
-        assertEquals("local resource set size wrong", 1, lrsrcSet2.size());
-        ConcurrentMap<String, LocalizedResource> lrsrcSet3 = localizer.getUserFiles().get(user3);
-        assertEquals("local resource set size wrong", 2, lrsrcSet3.size());
+            ConcurrentMap<String, LocalizedResource> lrsrcSet = localizer.getUserFiles().get(user1);
+            assertEquals("local resource set size wrong", 1, lrsrcSet.size());
+            ConcurrentMap<String, LocalizedResource> lrsrcSet2 = localizer.getUserFiles().get(user2);
+            assertEquals("local resource set size wrong", 1, lrsrcSet2.size());
+            ConcurrentMap<String, LocalizedResource> lrsrcSet3 = localizer.getUserFiles().get(user3);
+            assertEquals("local resource set size wrong", 2, lrsrcSet3.size());
 
-        localizer.removeBlobReference(lrsrc.getKey(), topo1Pna, false);
-        // should remove key1
-        localizer.cleanup();
+            localizer.removeBlobReference(lrsrc.getKey(), topo1Pna, false);
+            // should remove key1
+            localizer.cleanup();
 
-        lrsrcSet = localizer.getUserFiles().get(user1);
-        lrsrcSet3 = localizer.getUserFiles().get(user3);
-        assertNull("user set should be null", lrsrcSet);
-        assertFalse("blob dir not deleted", new File(expectedFileDirUser1).exists());
-        assertFalse("blob dir not deleted", new File(expectedUserDir1).exists());
-        assertEquals("local resource set size wrong", 2, lrsrcSet3.size());
+            lrsrcSet = localizer.getUserFiles().get(user1);
+            lrsrcSet3 = localizer.getUserFiles().get(user3);
+            assertNull("user set should be null", lrsrcSet);
+            assertFalse("blob dir not deleted", new File(expectedFileDirUser1).exists());
+            assertFalse("blob dir not deleted", new File(expectedUserDir1).exists());
+            assertEquals("local resource set size wrong", 2, lrsrcSet3.size());
 
-        assertTrue("blob deleted", keyFile2.exists());
-        assertFalse("blob not deleted", keyFile.exists());
-        assertTrue("blob deleted", keyFile3.exists());
-        assertTrue("blob deleted", keyFile1user3.exists());
+            assertTrue("blob deleted", keyFile2.exists());
+            assertFalse("blob not deleted", keyFile.exists());
+            assertTrue("blob deleted", keyFile3.exists());
+            assertTrue("blob deleted", keyFile1user3.exists());
+        }
     }
 
     @Test
     public void testUpdate() throws Exception {
-        Map<String, Object> conf = new HashMap<>();
-        // set clean time really high so doesn't kick in
-        conf.put(DaemonConfig.SUPERVISOR_LOCALIZER_CACHE_CLEANUP_INTERVAL_MS, 60 * 60 * 1000);
+        try (TmpPath tmp = new TmpPath()) {
+            Map<String, Object> conf = new HashMap<>();
+            // set clean time really high so it doesn't kick in
+            conf.put(DaemonConfig.SUPERVISOR_LOCALIZER_CACHE_CLEANUP_INTERVAL_MS, 60 * 60 * 1000);
 
-        String key1 = "key1";
-        String topo1 = "topo1";
-        String topo2 = "topo2";
-        TestLocalizer localizer = new TestLocalizer(conf, baseDir.toString());
+            String key1 = "key1";
+            String topo1 = "topo1";
+            String topo2 = "topo2";
+            TestLocalizer localizer = new TestLocalizer(conf, tmp.getPath());
 
-        ReadableBlobMeta rbm = new ReadableBlobMeta();
-        rbm.set_version(1);
-        rbm.set_settable(new SettableBlobMeta(WORLD_EVERYTHING));
-        when(mockblobstore.getBlobMeta(key1)).thenReturn(rbm);
-        when(mockblobstore.getBlob(key1)).thenReturn(new TestInputStreamWithMeta(1));
+            ReadableBlobMeta rbm = new ReadableBlobMeta();
+            rbm.set_version(1);
+            rbm.set_settable(new SettableBlobMeta(WORLD_EVERYTHING));
+            when(mockBlobStore.getBlobMeta(key1)).thenReturn(rbm);
+            when(mockBlobStore.getBlob(key1)).thenReturn(new TestInputStreamWithMeta(1));
 
-        File user1Dir = localizer.getLocalUserFileCacheDir(user1);
-        assertTrue("failed to create user dir", user1Dir.mkdirs());
-        LocalAssignment topo1Assignment = new LocalAssignment(topo1, Collections.emptyList());
-        topo1Assignment.set_owner(user1);
-        PortAndAssignment topo1Pna = new PortAndAssignmentImpl(1, topo1Assignment);
-        LocalizedResource lrsrc = localizer.getBlob(new LocalResource(key1, false, false), topo1Pna, null);
+            File user1Dir = localizer.getLocalUserFileCacheDir(user1);
+            assertTrue("failed to create user dir", user1Dir.mkdirs());
+            LocalAssignment topo1Assignment = constructLocalAssignment(topo1, user1, Collections.emptyList());
+            PortAndAssignment topo1Pna = new PortAndAssignmentImpl(1, topo1Assignment);
+            LocalizedResource lrsrc = localizer.getBlob(new LocalResource(key1, false, false), topo1Pna, null);
 
-        String expectedUserDir = joinPath(baseDir.toString(), USERCACHE, user1);
-        String expectedFileDir = joinPath(expectedUserDir, LocalizedResource.FILECACHE, LocalizedResource.FILESDIR);
-        assertTrue("user filecache dir not created", new File(expectedFileDir).exists());
-        Path keyVersionFile = Paths.get(expectedFileDir, key1 + ".version");
-        File keyFileCurrentSymlink = new File(expectedFileDir, key1 + LocalizedResource.CURRENT_BLOB_SUFFIX);
-        assertTrue("blob not created", keyFileCurrentSymlink.exists());
-        File versionFile = new File(expectedFileDir, key1 + LocalizedResource.BLOB_VERSION_SUFFIX);
-        assertTrue("blob version file not created", versionFile.exists());
-        assertEquals("blob version not correct", 1, LocalizedResource.localVersionOfBlob(keyVersionFile));
+            String expectedUserDir = joinPath(tmp.getPath(), USERCACHE, user1);
+            String expectedFileDir = joinPath(expectedUserDir, LocalizedResource.FILECACHE, LocalizedResource.FILESDIR);
+            assertTrue("user filecache dir not created", new File(expectedFileDir).exists());
+            Path keyVersionFile = Paths.get(expectedFileDir, key1 + ".version");
+            File keyFileCurrentSymlink = new File(expectedFileDir, key1 + LocalizedResource.CURRENT_BLOB_SUFFIX);
+            assertTrue("blob not created", keyFileCurrentSymlink.exists());
+            File versionFile = new File(expectedFileDir, key1 + LocalizedResource.BLOB_VERSION_SUFFIX);
+            assertTrue("blob version file not created", versionFile.exists());
+            assertEquals("blob version not correct", 1, LocalizedResource.localVersionOfBlob(keyVersionFile));
 
-        ConcurrentMap<String, LocalizedResource> lrsrcSet = localizer.getUserFiles().get(user1);
-        assertEquals("local resource set size wrong", 1, lrsrcSet.size());
+            ConcurrentMap<String, LocalizedResource> lrsrcSet = localizer.getUserFiles().get(user1);
+            assertEquals("local resource set size wrong", 1, lrsrcSet.size());
 
-        // test another topology getting blob with updated version - it should update version now
-        rbm.set_version(2);
-        when(mockblobstore.getBlob(key1)).thenReturn(new TestInputStreamWithMeta(2));
+            // test another topology getting blob with updated version - it should update version now
+            rbm.set_version(2);
+            when(mockBlobStore.getBlob(key1)).thenReturn(new TestInputStreamWithMeta(2));
 
-        LocalAssignment topo2Assignment = new LocalAssignment(topo2, Collections.emptyList());
-        topo2Assignment.set_owner(user1);
-        PortAndAssignment topo2Pna = new PortAndAssignmentImpl(1, topo2Assignment);
-        localizer.getBlob(new LocalResource(key1, false, false), topo2Pna, null);
-        assertTrue("blob version file not created", versionFile.exists());
-        assertEquals("blob version not correct", 2, LocalizedResource.localVersionOfBlob(keyVersionFile));
-        assertTrue("blob file with version 2 not created", new File(expectedFileDir, key1 + ".2").exists());
+            LocalAssignment topo2Assignment = constructLocalAssignment(topo2, user1, Collections.emptyList());
+            PortAndAssignment topo2Pna = new PortAndAssignmentImpl(1, topo2Assignment);
+            localizer.getBlob(new LocalResource(key1, false, false), topo2Pna, null);
+            assertTrue("blob version file not created", versionFile.exists());
+            assertEquals("blob version not correct", 2, LocalizedResource.localVersionOfBlob(keyVersionFile));
+            assertTrue("blob file with version 2 not created", new File(expectedFileDir, key1 + ".2").exists());
 
-        // now test regular updateBlob
-        rbm.set_version(3);
-        when(mockblobstore.getBlob(key1)).thenReturn(new TestInputStreamWithMeta(3));
+            // now test regular updateBlob
+            rbm.set_version(3);
+            when(mockBlobStore.getBlob(key1)).thenReturn(new TestInputStreamWithMeta(3));
 
-        ArrayList<LocalResource> arr = new ArrayList<>();
-        arr.add(new LocalResource(key1, false, false));
-        localizer.updateBlobs();
-        assertTrue("blob version file not created", versionFile.exists());
-        assertEquals("blob version not correct", 3, LocalizedResource.localVersionOfBlob(keyVersionFile));
-        assertTrue("blob file with version 3 not created", new File(expectedFileDir, key1 + ".3").exists());
+            ArrayList<LocalResource> arr = new ArrayList<>();
+            arr.add(new LocalResource(key1, false, false));
+            localizer.updateBlobs();
+            assertTrue("blob version file not created", versionFile.exists());
+            assertEquals("blob version not correct", 3, LocalizedResource.localVersionOfBlob(keyVersionFile));
+            assertTrue("blob file with version 3 not created", new File(expectedFileDir, key1 + ".3").exists());
+        }
     }
 
     @Test
@@ -989,9 +974,9 @@
         PortAndAssignment pna = new PortAndAssignmentImpl(1, la);
         PortAndAssignment tpna = new TimePortAndAssignment(pna, new Timer());
 
-        assertTrue(pna.equals(tpna));
-        assertTrue(tpna.equals(pna));
-        assertTrue(pna.hashCode() == tpna.hashCode());
+        assertEquals(pna, tpna);
+        assertEquals(tpna, pna);
+        assertEquals(pna.hashCode(), tpna.hashCode());
     }
 
     class TestLocalizer extends AsyncLocalizer {
@@ -1002,7 +987,7 @@
 
         @Override
         protected ClientBlobStore getClientBlobStore() {
-            return mockblobstore;
+            return mockBlobStore;
         }
 
         synchronized void addReferences(List<LocalResource> localresource, PortAndAssignment pna, BlobChangingCallback cb) {
diff --git a/storm-server/src/test/java/org/apache/storm/scheduler/resource/strategies/scheduling/TestConstraintSolverStrategy.java b/storm-server/src/test/java/org/apache/storm/scheduler/resource/strategies/scheduling/TestConstraintSolverStrategy.java
index f43868d..f245d90 100644
--- a/storm-server/src/test/java/org/apache/storm/scheduler/resource/strategies/scheduling/TestConstraintSolverStrategy.java
+++ b/storm-server/src/test/java/org/apache/storm/scheduler/resource/strategies/scheduling/TestConstraintSolverStrategy.java
@@ -195,12 +195,12 @@
                 @Override
                 protected SolverResult backtrackSearch(SearcherState state) {
                     //Each time we try to schedule a new component simulate taking 1 second longer
-                    Time.advanceTime(1_000);
+                    Time.advanceTime(1_001);
                     return super.backtrackSearch(state);
 
                 }
             };
-            basicFailureTest(Config.TOPOLOGY_RAS_CONSTRAINT_MAX_TIME_SECS, 2, cs);
+            basicFailureTest(Config.TOPOLOGY_RAS_CONSTRAINT_MAX_TIME_SECS, 1, cs);
         }
     }
 
@@ -215,6 +215,8 @@
         // Add 1 topology with large number of executors and constraints. Too many can cause a java.lang.StackOverflowError
         Config config = createCSSClusterConfig(10, 10, 0, null);
         config.put(Config.TOPOLOGY_RAS_CONSTRAINT_MAX_STATE_SEARCH, 50000);
+        config.put(Config.TOPOLOGY_RAS_CONSTRAINT_MAX_TIME_SECS, 120);
+        config.put(DaemonConfig.SCHEDULING_TIMEOUT_SECONDS_PER_TOPOLOGY, 120);
 
         List<List<String>> constraints = new LinkedList<>();
         addContraints("spout-0", "spout-0", constraints);
@@ -237,15 +239,9 @@
         scheduler.schedule(topologies, cluster);
 
         boolean scheduleSuccess = isStatusSuccess(cluster.getStatus(topo.getId()));
-
-        if (parallelismMultiplier == 1) {
-            Assert.assertTrue(scheduleSuccess);
-        } else if (parallelismMultiplier == 20) {
-            // For default JVM, scheduling currently fails due to StackOverflow.
-            // For now just log the results of the test. Change to assert when StackOverflow issue is fixed.
-            LOG.info("testScheduleLargeExecutorCount scheduling {} with {}x executor multiplier", scheduleSuccess ? "succeeds" : "fails",
-                    parallelismMultiplier);
-        }
+        LOG.info("testScheduleLargeExecutorCount scheduling {} with {}x executor multiplier", scheduleSuccess ? "succeeds" : "fails",
+                parallelismMultiplier);
+        Assert.assertTrue(scheduleSuccess);
     }
 
     @Test
diff --git a/storm-server/src/test/java/org/apache/storm/scheduler/resource/strategies/scheduling/TestGenericResourceAwareStrategy.java b/storm-server/src/test/java/org/apache/storm/scheduler/resource/strategies/scheduling/TestGenericResourceAwareStrategy.java
index 033b3cf..6996ec7 100644
--- a/storm-server/src/test/java/org/apache/storm/scheduler/resource/strategies/scheduling/TestGenericResourceAwareStrategy.java
+++ b/storm-server/src/test/java/org/apache/storm/scheduler/resource/strategies/scheduling/TestGenericResourceAwareStrategy.java
@@ -29,6 +29,7 @@
 import java.util.TreeMap;
 import java.util.concurrent.atomic.AtomicLong;
 import org.apache.storm.Config;
+import org.apache.storm.DaemonConfig;
 import org.apache.storm.generated.StormTopology;
 import org.apache.storm.generated.WorkerResources;
 import org.apache.storm.scheduler.Cluster;
@@ -236,6 +237,71 @@
         assertEquals(expectedScheduling, foundScheduling);
     }
 
+    private TopologyDetails createTestStormTopology(StormTopology stormTopology, int priority, String name, Config conf) {
+        conf.put(Config.TOPOLOGY_PRIORITY, priority);
+        conf.put(Config.TOPOLOGY_NAME, name);
+        return new TopologyDetails(name, conf, stormTopology, 0,
+                genExecsAndComps(stormTopology), currentTime, "user");
+    }
+
+    /*
+     * Test that scheduling keeps evicting until enough of the generic resource (gpu) is freed.
+     */
+    @Test
+    public void testGrasRequiringEviction() {
+        int spoutParallelism = 3;
+        double cpuPercent = 10;
+        double memoryOnHeap = 10;
+        double memoryOffHeap = 10;
+        // Sufficient CPU/memory, but insufficient gpu to schedule all topologies (gpu1, noGpu, gpu2).
+
+        // gpu topology (requires 3 gpu's in total)
+        TopologyBuilder builder = new TopologyBuilder();
+        builder.setSpout("spout", new TestSpout(), spoutParallelism).addResource("gpu.count", 1.0);
+        StormTopology stormTopologyWithGpu = builder.createTopology();
+
+        // non-gpu topology
+        builder = new TopologyBuilder();
+        builder.setSpout("spout", new TestSpout(), spoutParallelism);
+        StormTopology stormTopologyNoGpu = builder.createTopology();
+
+        Config conf = createGrasClusterConfig(cpuPercent, memoryOnHeap, memoryOffHeap, null, Collections.emptyMap());
+        conf.put(DaemonConfig.RESOURCE_AWARE_SCHEDULER_MAX_TOPOLOGY_SCHEDULING_ATTEMPTS, 2);    // allow 1 round of evictions
+
+        String gpu1 = "hasGpu1";
+        String noGpu = "hasNoGpu";
+        String gpu2 = "hasGpu2";
+        TopologyDetails[] topo = {
+                createTestStormTopology(stormTopologyWithGpu, 10, gpu1, conf),
+                createTestStormTopology(stormTopologyNoGpu, 10, noGpu, conf),
+                createTestStormTopology(stormTopologyWithGpu, 9, gpu2, conf)
+        };
+        Topologies topologies = new Topologies(topo[0], topo[1]);
+
+        Map<String, Double> genericResourcesMap = new HashMap<>();
+        genericResourcesMap.put("gpu.count", 1.0);
+        Map<String, SupervisorDetails> supMap = genSupervisors(4, 4, 500, 2000, genericResourcesMap);
+        Cluster cluster = new Cluster(new INimbusTest(), new ResourceMetrics(new StormMetricsRegistry()), supMap, new HashMap<>(), topologies, conf);
+
+        // should schedule gpu1 and noGpu successfully
+        scheduler = new ResourceAwareScheduler();
+        scheduler.prepare(conf);
+        scheduler.schedule(topologies, cluster);
+        assertTopologiesFullyScheduled(cluster, gpu1);
+        assertTopologiesFullyScheduled(cluster, noGpu);
+
+        // should evict gpu1 and noGpu topologies in order to schedule gpu2 topology; then fail to reschedule gpu1 topology;
+        // then schedule noGpu topology.
+        // Scheduling used to ignore gpu resource when deciding when to stop evicting, and gpu2 would fail to schedule.
+        topologies = new Topologies(topo[0], topo[1], topo[2]);
+        cluster = new Cluster(cluster, topologies);
+        scheduler.schedule(topologies, cluster);
+        assertTopologiesNotScheduled(cluster, gpu1);
+        assertTopologiesFullyScheduled(cluster, noGpu);
+        assertTopologiesFullyScheduled(cluster, gpu2);
+    }
+
+
     @Test
     public void testAntiAffinityWithMultipleTopologies() {
         INimbus iNimbus = new INimbusTest();
diff --git a/storm-server/src/test/java/org/apache/storm/security/auth/AuthTest.java b/storm-server/src/test/java/org/apache/storm/security/auth/AuthTest.java
index 49a4641..7802390 100644
--- a/storm-server/src/test/java/org/apache/storm/security/auth/AuthTest.java
+++ b/storm-server/src/test/java/org/apache/storm/security/auth/AuthTest.java
@@ -576,7 +576,7 @@
     public void getTransportPluginThrowsRunimeTest() {
         Map<String, Object> conf = ConfigUtils.readStormConfig();
         conf.put(Config.STORM_THRIFT_TRANSPORT_PLUGIN, "null.invalid");
-        ClientAuthUtils.getTransportPlugin(ThriftConnectionType.NIMBUS, conf, null);
+        ClientAuthUtils.getTransportPlugin(ThriftConnectionType.NIMBUS, conf);
     }
 
     @Test
diff --git a/storm-webapp/src/main/java/org/apache/storm/daemon/logviewer/utils/WorkerLogs.java b/storm-webapp/src/main/java/org/apache/storm/daemon/logviewer/utils/WorkerLogs.java
index 0d5b14a..e57b626 100644
--- a/storm-webapp/src/main/java/org/apache/storm/daemon/logviewer/utils/WorkerLogs.java
+++ b/storm-webapp/src/main/java/org/apache/storm/daemon/logviewer/utils/WorkerLogs.java
@@ -25,6 +25,7 @@
 import com.codahale.metrics.Meter;
 import com.google.common.collect.Lists;
 
+import java.io.File;
 import java.io.IOException;
 import java.nio.file.Files;
 import java.nio.file.Path;
@@ -40,11 +41,15 @@
 import java.util.stream.Collectors;
 import java.util.stream.Stream;
 
+import org.apache.storm.Config;
 import org.apache.storm.daemon.supervisor.ClientSupervisorUtils;
 import org.apache.storm.daemon.supervisor.SupervisorUtils;
 import org.apache.storm.daemon.utils.PathUtil;
+import org.apache.storm.generated.LSWorkerHeartbeat;
 import org.apache.storm.metric.StormMetricsRegistry;
+import org.apache.storm.utils.LruMap;
 import org.apache.storm.utils.ObjectReader;
+import org.apache.storm.utils.ServerConfigUtils;
 import org.apache.storm.utils.Time;
 import org.apache.storm.utils.Utils;
 import org.jooq.lambda.Unchecked;
@@ -64,6 +69,7 @@
     private final Map<String, Object> stormConf;
     private final Path logRootDir;
     private final DirectoryCleaner directoryCleaner;
+    private final LruMap<String, Integer> mapTopologyIdToHeartbeatTimeout;
 
     /**
      * Constructor.
@@ -77,6 +83,7 @@
         this.logRootDir = logRootDir.toAbsolutePath().normalize();
         this.numSetPermissionsExceptions = metricsRegistry.registerMeter(ExceptionMeterNames.NUM_SET_PERMISSION_EXCEPTIONS);
         this.directoryCleaner = new DirectoryCleaner(metricsRegistry);
+        this.mapTopologyIdToHeartbeatTimeout = new LruMap<>(200);
     }
 
     /**
@@ -189,12 +196,40 @@
      */
     public Set<String> getAliveIds(int nowSecs) throws IOException {
         return SupervisorUtils.readWorkerHeartbeats(stormConf).entrySet().stream()
-                .filter(entry -> Objects.nonNull(entry.getValue())
-                        && !SupervisorUtils.isWorkerHbTimedOut(nowSecs, entry.getValue(), stormConf))
+                .filter(entry -> Objects.nonNull(entry.getValue()) && !isTimedOut(nowSecs, entry))
                 .map(Map.Entry::getKey)
                 .collect(toCollection(TreeSet::new));
     }
 
+    private boolean isTimedOut(int nowSecs, Map.Entry<String, LSWorkerHeartbeat> entry) {
+        LSWorkerHeartbeat hb = entry.getValue();
+        int workerLogTimeout = getTopologyTimeout(hb);
+        return (nowSecs - hb.get_time_secs()) >= workerLogTimeout;
+    }
+
+    private int getTopologyTimeout(LSWorkerHeartbeat hb) {
+        String topoId = hb.get_topology_id();
+        Integer cachedTimeout = mapTopologyIdToHeartbeatTimeout.get(topoId);
+        if (cachedTimeout != null) {
+            return cachedTimeout;
+        } else {
+            int timeout = getWorkerLogTimeout(stormConf, topoId, hb.get_port());
+            mapTopologyIdToHeartbeatTimeout.put(topoId, timeout);
+            return timeout;
+        }
+    }
+
+    private int getWorkerLogTimeout(Map<String, Object> conf, String topologyId, int port) {
+        int defaultWorkerLogTimeout = ObjectReader.getInt(conf.get(Config.SUPERVISOR_WORKER_TIMEOUT_SECS));
+        File file = ServerConfigUtils.getLogMetaDataFile(conf, topologyId, port);
+        Map<String, Object> map = (Map<String, Object>) Utils.readYamlFile(file.getAbsolutePath());
+        if (map == null) {
+            return defaultWorkerLogTimeout;
+        }
+
+        return (Integer) map.getOrDefault(Config.TOPOLOGY_WORKER_TIMEOUT_SECS, defaultWorkerLogTimeout);
+    }
+
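The per-topology timeout cache added above is sized to 200 entries via `LruMap`. Assuming `LruMap` behaves like an access-ordered `LinkedHashMap` that drops its eldest entry once capacity is exceeded (the class name `LruSketch` below is hypothetical, not Storm's implementation), a minimal sketch is:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of a size-bounded LRU cache, assuming LruMap is roughly
// an access-ordered LinkedHashMap that evicts the least recently used
// entry once the capacity is exceeded.
class LruSketch<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruSketch(int capacity) {
        super(16, 0.75f, true); // accessOrder=true so get() refreshes recency
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after each put(); returning true evicts the eldest entry.
        return size() > capacity;
    }
}
```

Bounding the cache this way avoids re-reading a topology's log metadata file on every heartbeat scan while capping memory growth in a long-lived logviewer daemon.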
     /**
      * Finds directories for specific worker ids that can be cleaned up.
      *
diff --git a/storm-webapp/src/main/java/org/apache/storm/daemon/ui/UIHelpers.java b/storm-webapp/src/main/java/org/apache/storm/daemon/ui/UIHelpers.java
index 4d37b64..3109bfe 100644
--- a/storm-webapp/src/main/java/org/apache/storm/daemon/ui/UIHelpers.java
+++ b/storm-webapp/src/main/java/org/apache/storm/daemon/ui/UIHelpers.java
@@ -35,6 +35,7 @@
 import java.util.NavigableMap;
 import java.util.Objects;
 import java.util.Set;
+import java.util.TreeMap;
 import java.util.concurrent.atomic.AtomicReference;
 import java.util.regex.Matcher;
 import java.util.regex.Pattern;
@@ -85,6 +86,7 @@
 import org.apache.storm.generated.TopologySummary;
 import org.apache.storm.generated.WorkerSummary;
 import org.apache.storm.logging.filters.AccessLoggingFilter;
+import org.apache.storm.scheduler.resource.normalization.NormalizedResourceRequest;
 import org.apache.storm.stats.StatsUtil;
 import org.apache.storm.thrift.TException;
 import org.apache.storm.utils.IVersionInfo;
@@ -632,9 +634,32 @@
                 ? StatsUtil.floatStr((supervisorUsedCpu * 100.0) / supervisorTotalCpu) : "0.0");
         result.put("bugtracker-url", conf.get(DaemonConfig.UI_PROJECT_BUGTRACKER_URL));
         result.put("central-log-url", conf.get(DaemonConfig.UI_CENTRAL_LOGGING_URL));
+
+        Map<String, Double> usedGenericResources = new HashMap<>();
+        Map<String, Double> totalGenericResources = new HashMap<>();
+        for (SupervisorSummary ss : supervisorSummaries) {
+            usedGenericResources = NormalizedResourceRequest.addResourceMap(usedGenericResources, ss.get_used_generic_resources());
+            totalGenericResources = NormalizedResourceRequest.addResourceMap(totalGenericResources, ss.get_total_resources());
+        }
+        Map<String, Double> availGenericResources = NormalizedResourceRequest
+                .subtractResourceMap(totalGenericResources, usedGenericResources);
+        result.put("availGenerics", prettifyGenericResources(availGenericResources));
+        result.put("totalGenerics", prettifyGenericResources(totalGenericResources));
         return result;
     }
 
+    private static String prettifyGenericResources(Map<String, Double> resourceMap) {
+        if (resourceMap == null) {
+            return null;
+        }
+        TreeMap<String, Double> treeGenericResources = new TreeMap<>(); // use TreeMap for deterministic ordering
+        treeGenericResources.putAll(resourceMap);
+        NormalizedResourceRequest.removeNonGenericResources(treeGenericResources);
+        return treeGenericResources.toString()
+                .replaceAll("[{}]", "")
+                .replace(",", "");
+    }
+
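For reference, the brace/comma stripping above turns a map's default `toString()` (e.g. `{gpu.count=1.0, network.ports=2.0}`) into a space-separated listing. A standalone sketch of just that string transformation, omitting Storm's `removeNonGenericResources` filter and using a hypothetical `prettify` helper:

```java
import java.util.Map;
import java.util.TreeMap;

// Standalone rendition of the prettify step: copy into a TreeMap for
// deterministic key ordering, then strip the braces and commas from the
// default Map.toString() output.
class PrettifySketch {
    static String prettify(Map<String, Double> resources) {
        TreeMap<String, Double> sorted = new TreeMap<>(resources);
        return sorted.toString()
                .replaceAll("[{}]", "")
                .replace(",", "");
    }
}
```

The result reads as `key1=v1 key2=v2 ...`, which is what the UI templates render into the new generic-resource table cells.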
     /**
      * Prettify OwnerResourceSummary.
      * @param ownerResourceSummary ownerResourceSummary
@@ -745,12 +770,14 @@
                 topologySummary.get_requested_memoffheap()
                         + topologySummary.get_assigned_memonheap());
         result.put("requestedCpu", topologySummary.get_requested_cpu());
+        result.put("requestedGenericResources", prettifyGenericResources(topologySummary.get_requested_generic_resources()));
         result.put("assignedMemOnHeap", topologySummary.get_assigned_memonheap());
         result.put("assignedMemOffHeap", topologySummary.get_assigned_memoffheap());
         result.put("assignedTotalMem",
                 topologySummary.get_assigned_memoffheap()
                         + topologySummary.get_assigned_memonheap());
         result.put("assignedCpu", topologySummary.get_assigned_cpu());
+        result.put("assignedGenericResources", prettifyGenericResources(topologySummary.get_assigned_generic_resources()));
         result.put("topologyVersion", topologySummary.get_topology_version());
         result.put("stormVersion", topologySummary.get_storm_version());
         return result;
@@ -909,6 +936,15 @@
         result.put("availMem", totalMemory - supervisorSummary.get_used_mem());
         result.put("availCpu", totalCpu - supervisorSummary.get_used_cpu());
         result.put("version", supervisorSummary.get_version());
+
+        Map<String, Double> totalGenericResources = new HashMap<>(totalResources);
+        result.put("totalGenericResources", prettifyGenericResources(totalGenericResources));
+        Map<String, Double> usedGenericResources = supervisorSummary.get_used_generic_resources();
+        result.put("usedGenericResources", prettifyGenericResources(usedGenericResources));
+        Map<String, Double> availGenericResources = NormalizedResourceRequest
+                .subtractResourceMap(totalGenericResources, usedGenericResources);
+        result.put("availGenericResources", prettifyGenericResources(availGenericResources));
+
         return result;
     }
 
@@ -1165,6 +1201,9 @@
             result.put(
                     "requestedCpu",
                     commonAggregateStats.get_resources_map().get(Constants.COMMON_CPU_RESOURCE_NAME));
+            result.put(
+                    "requestedGenericResourcesComp",
+                    prettifyGenericResources(commonAggregateStats.get_resources_map()));
         }
         return result;
     }
@@ -1546,10 +1585,12 @@
         result.put("requestedSharedOnHeapMem", topologyPageInfo.get_requested_shared_on_heap_memory());
         result.put("requestedRegularOffHeapMem", topologyPageInfo.get_requested_regular_off_heap_memory());
         result.put("requestedSharedOffHeapMem", topologyPageInfo.get_requested_shared_off_heap_memory());
+        result.put("requestedGenericResources", prettifyGenericResources(topologyPageInfo.get_requested_generic_resources()));
         result.put("assignedRegularOnHeapMem", topologyPageInfo.get_assigned_regular_on_heap_memory());
         result.put("assignedSharedOnHeapMem", topologyPageInfo.get_assigned_shared_on_heap_memory());
         result.put("assignedRegularOffHeapMem", topologyPageInfo.get_assigned_regular_off_heap_memory());
         result.put("assignedSharedOffHeapMem", topologyPageInfo.get_assigned_shared_off_heap_memory());
+        result.put("assignedGenericResources", prettifyGenericResources(topologyPageInfo.get_assigned_generic_resources()));
         result.put("topologyStats", getTopologyStatsMap(topologyPageInfo.get_topology_stats()));
         List<Map> workerSummaries = new ArrayList();
         if (topologyPageInfo.is_set_workers()) {
@@ -2025,6 +2066,8 @@
                 componentPageInfo.get_resources_map().get(Constants.COMMON_OFFHEAP_MEMORY_RESOURCE_NAME));
         result.put("requestedCpu",
                 componentPageInfo.get_resources_map().get(Constants.COMMON_CPU_RESOURCE_NAME));
+        result.put("requestedGenericResources",
+                prettifyGenericResources(componentPageInfo.get_resources_map()));
 
         result.put("schedulerDisplayResource", config.get(DaemonConfig.SCHEDULER_DISPLAY_RESOURCE));
         result.put("topologyId", id);
diff --git a/storm-webapp/src/main/java/org/apache/storm/daemon/ui/WEB-INF/templates/component-page-template.html b/storm-webapp/src/main/java/org/apache/storm/daemon/ui/WEB-INF/templates/component-page-template.html
index 21a5b3a..3862a03 100644
--- a/storm-webapp/src/main/java/org/apache/storm/daemon/ui/WEB-INF/templates/component-page-template.html
+++ b/storm-webapp/src/main/java/org/apache/storm/daemon/ui/WEB-INF/templates/component-page-template.html
@@ -55,6 +55,12 @@
             Requested CPU
           </span>
         </th>
+        <th>
+          <span data-toggle="tooltip" data-placement="above" title="The generic resources requested to run a single executor of this component.">
+            Requested Generic Resources
+          </span>
+        </th>
+
         {{/schedulerDisplayResource}}
         {{#eventLogLink}}
         <th>
@@ -75,6 +81,7 @@
         <td>{{requestedMemOnHeap}}</td>
         <td>{{requestedMemOffHeap}}</td>
         <td>{{requestedCpu}}</td>
+        <td>{{requestedGenericResources}}</td>
         {{/schedulerDisplayResource}}
         {{#eventLogLink}}
         <td><a href="{{eventLogLink}}">events</a></td>
diff --git a/storm-webapp/src/main/java/org/apache/storm/daemon/ui/WEB-INF/templates/index-page-template.html b/storm-webapp/src/main/java/org/apache/storm/daemon/ui/WEB-INF/templates/index-page-template.html
index a60e578..c0fb62f 100644
--- a/storm-webapp/src/main/java/org/apache/storm/daemon/ui/WEB-INF/templates/index-page-template.html
+++ b/storm-webapp/src/main/java/org/apache/storm/daemon/ui/WEB-INF/templates/index-page-template.html
@@ -169,7 +169,16 @@
              CPU Utilization (%)
             </span>
         </th>
-      </tr>
+        <th>
+        <span data-toggle="tooltip" data-placement="top" title="Total Generic Resources in Cluster.">
+             Total Generic Resources
+            </span>
+        </th>
+        <th>
+        <span data-toggle="tooltip" data-placement="top" title="Available Generic Resources in Cluster.">
+             Available Generic Resources
+            </span>
+        </th>
+      </tr>
       </thead>
       <tbody>
       <tr>
@@ -181,6 +190,8 @@
         <td>{{availCpu}}</td>
         <td>{{fragmentedCpu}}</td>
         <td>{{cpuAssignedPercentUtil}}</td>
+        <td>{{totalGenerics}}</td>
+        <td>{{availGenerics}}</td>
       </tr>
       </tbody>
     </table>
@@ -307,6 +318,11 @@
         </th>
         {{/schedulerDisplayResource}}
         <th>
+          <span data-toggle="tooltip" data-placement="top" title="Assigned Generic Resources by Scheduler.">
+            Assigned Generic Resources
+          </span>
+        </th>
+        <th>
           <span data-toggle="tooltip" data-placement="left" title="This shows information from the scheduler about the latest attempt to schedule the Topology on the cluster.">
             Scheduler Info
           </span>
@@ -338,6 +354,7 @@
         {{#schedulerDisplayResource}}
         <td>{{assignedCpu}}</td>
         {{/schedulerDisplayResource}}
+        <td>{{assignedGenericResources}}</td>
         <td>{{schedulerInfo}}</td>
         <td>{{topologyVersion}}</td>
         <td>{{stormVersion}}</td>
@@ -413,6 +430,23 @@
           Avail CPU (%)
         </span>
       </th>
+
+      <th>
+        <span data-toggle="tooltip" data-placement="left" title="The generic resources capacity of a supervisor.">
+          Total Generic Resources
+        </span>
+      </th>
+      <th>
+        <span data-toggle="tooltip" data-placement="left" title="The generic resources that have been allocated.">
+          Used Generic Resources
+        </span>
+      </th>
+      <th>
+        <span data-toggle="tooltip" data-placement="left" title="The generic resources that are available.">
+          Avail Generic Resources
+        </span>
+      </th>
+
       {{/schedulerDisplayResource}}
       <th>
         <span data-toggle="tooltip" data-placement="left" title="Version">
@@ -444,6 +478,9 @@
       <td>{{totalCpu}}</td>
       <td>{{usedCpu}}</td>
       <td>{{availCpu}}</td>
+      <td>{{totalGenericResources}}</td>
+      <td>{{usedGenericResources}}</td>
+      <td>{{availGenericResources}}</td>
       {{/schedulerDisplayResource}}
       <td>{{version}}</td>
       <td>{{blacklisted}}</td>
diff --git a/storm-webapp/src/main/java/org/apache/storm/daemon/ui/WEB-INF/templates/supervisor-page-template.html b/storm-webapp/src/main/java/org/apache/storm/daemon/ui/WEB-INF/templates/supervisor-page-template.html
index 3475f12..6e16e3a 100644
--- a/storm-webapp/src/main/java/org/apache/storm/daemon/ui/WEB-INF/templates/supervisor-page-template.html
+++ b/storm-webapp/src/main/java/org/apache/storm/daemon/ui/WEB-INF/templates/supervisor-page-template.html
@@ -76,6 +76,23 @@
               Avail CPU (%)
             </span>
           </th>
+
+          <th>
+            <span data-toggle="tooltip" data-placement="left" title="The generic resources capacity of a supervisor.">
+              Total Generic Resources
+            </span>
+          </th>
+          <th>
+            <span data-toggle="tooltip" data-placement="left" title="The generic resources that have been allocated.">
+              Used Generic Resources
+            </span>
+          </th>
+          <th>
+            <span data-toggle="tooltip" data-placement="left" title="The generic resources that are available.">
+              Avail Generic Resources
+            </span>
+          </th>
+
           {{/schedulerDisplayResource}}
           <th>
             <span data-toggle="tooltip" data-placement="top" title="Version">
@@ -106,6 +123,9 @@
             <td>{{totalCpu}}</td>
             <td>{{usedCpu}}</td>
             <td>{{availCpu}}</td>
+            <td>{{totalGenericResources}}</td>
+            <td>{{usedGenericResources}}</td>
+            <td>{{availGenericResources}}</td>
             {{/schedulerDisplayResource}}
             <td>{{version}}</td>
             <td>{{blacklisted}}</td>
diff --git a/storm-webapp/src/main/java/org/apache/storm/daemon/ui/WEB-INF/templates/topology-page-template.html b/storm-webapp/src/main/java/org/apache/storm/daemon/ui/WEB-INF/templates/topology-page-template.html
index 49cb314..b67e302 100644
--- a/storm-webapp/src/main/java/org/apache/storm/daemon/ui/WEB-INF/templates/topology-page-template.html
+++ b/storm-webapp/src/main/java/org/apache/storm/daemon/ui/WEB-INF/templates/topology-page-template.html
@@ -76,6 +76,11 @@
         </th>
         {{/schedulerDisplayResource}}
         <th>
+          <span data-toggle="tooltip" data-placement="top" title="Assigned Generic Resources by Scheduler.">
+            Assigned Generic Resources
+          </span>
+        </th>
+        <th>
           <span data-toggle="tooltip" data-placement="left" title="This shows information from the scheduler about the latest attempt to schedule the Topology on the cluster.">
             Scheduler Info
           </span>
@@ -106,6 +111,7 @@
         <td>{{assignedTotalMem}}</td>
         {{#schedulerDisplayResource}}
         <td>{{assignedCpu}}</td>
+        <td>{{assignedGenericResources}}</td>
         {{/schedulerDisplayResource}}
         <td>{{schedulerInfo}}</td>
         <td>{{topologyVersion}}</td>
@@ -150,6 +156,11 @@
             Total CPU (%)
           </span>
         </th>
+        <th>
+          <span data-toggle="tooltip" data-placement="top" title="Total Generic Resources.">
+            Total Generic Resources
+          </span>
+        </th>
       </tr>
     </thead>
     <tbody>
@@ -161,6 +172,7 @@
         <td>{{requestedSharedOffHeapMem}}</td>
         <td>{{requestedTotalMem}}</td>
         <td>{{requestedCpu}}</td>
+        <td>{{requestedGenericResources}}</td>
       </tr>
       <tr>
         <td>Assigned</td>
@@ -170,6 +182,7 @@
         <td>{{assignedSharedOffHeapMem}}</td>
         <td>{{assignedTotalMem}}</td>
         <td>{{assignedCpu}}</td>
+        <td>{{assignedGenericResources}}</td>
       </tr>
     </tbody>
   </table>
@@ -375,6 +388,11 @@
             Req CPU
           </span>
         </th>
+        <th class="header table-num">
+          <span data-toggle="tooltip" data-placement="top" title="The generic resources requested to run a single executor of this component.">
+            Req Generic
+          </span>
+        </th>
         {{/schedulerDisplayResource}}
         <th class="header table-num">
           <span data-toggle="tooltip" data-placement="top" title="The number of Tuples emitted.">
@@ -417,6 +435,7 @@
         <td>{{requestedMemOnHeap}}</td>
         <td>{{requestedMemOffHeap}}</td>
         <td>{{requestedCpu}}</td>
+        <td>{{requestedGenericResourcesComp}}</td>
         {{/schedulerDisplayResource}}
         <td>{{emitted}}</td>
         <td>{{transferred}}</td>
@@ -472,6 +491,11 @@
             Req CPU
           </span>
         </th>
+        <th class="header table-num">
+          <span data-toggle="tooltip" data-placement="top" title="The generic resources requested to run a single executor of this component.">
+            Req Generic
+          </span>
+        </th>
         {{/schedulerDisplayResource}}
         <th class="header table-num">
           <span data-toggle="tooltip" data-placement="top" title="The number of Tuples emitted.">
@@ -528,6 +552,7 @@
         <td>{{requestedMemOnHeap}}</td>
         <td>{{requestedMemOffHeap}}</td>
         <td>{{requestedCpu}}</td>
+        <td>{{requestedGenericResourcesComp}}</td>
         {{/schedulerDisplayResource}}
         <td>{{emitted}}</td>
         <td>{{transferred}}</td>